Installing the Oracle JDK on Fedora

I had to learn this one from another site.

In my use case I only want to compile and run Hadoop applications, so I have not completed all the steps for the browser setup.

Short version:

  1. Download the JDK of your choice; I picked 1.7.0_51
  2. sudo rpm -Uvh /tmp/jdk-7u51-linux-x64.rpm
  3. sudo alternatives --install /usr/bin/java java /usr/java/latest/bin/java 200000
  4. sudo alternatives --install /usr/bin/javac javac /usr/java/latest/bin/javac 200000
  5. sudo alternatives --install /usr/bin/jar jar /usr/java/latest/bin/jar 200000
  6. sudo alternatives --config java

The last step activates the new installation I just added; I selected option 2.

As simple as that; running java -version now shows me the Oracle JVM version.


Starting with Hadoop – 2

I created a page for my Hadoop notes and will keep it up to date as I experiment.

I will post short articles on what I have done and where I am facing challenges.

I think that using OpenJDK is a mistake, so I am testing with the Oracle JVM to see if it fixes some of the issues I am facing.

I have also upgraded to Fedora 20, which should not change much in how Hadoop works. The only thing I have noticed is an error because the temp directory is gone. I will have to investigate why that is preventing the namenode from starting. I might have to move the temp directory outside of /tmp to avoid this issue.
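If I do move it, I expect the change would look something like this in core-site.xml, reusing the hadoop.tmp.dir property I already set below; the path is just a placeholder I made up, not something I have tested yet:

<property>
  <name>hadoop.tmp.dir</name>
  <!-- hypothetical location outside /tmp so it is not wiped by the system -->
  <value>/home/hadoop/hadoop-tmp</value>
</property>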

Starting with Hadoop

Trying to find simple and authoritative documentation for Hadoop is harder than I expected. With the many versions out there, it is easy to find documentation for the wrong version and hard to figure out what really needs to be done.

Versions:

  • Hadoop 2.2.0
  • OpenJDK 1.7.0_51
  • Fedora 19

I have set my environment variables in my .bash_profile:

export JAVA_HOME=/usr/lib/jvm/java
export HADOOP_HOME=/opt/hadoop-2.2.0
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin

Configuration file:
$HADOOP_HOME/etc/hadoop/core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/tmp/hadoop-${user.name}</value>
</property>
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
</property>
<property>
  <name>mapred.job.tracker</name>
  <value>hdfs://localhost:54311</value>
</property>
<property> 
  <name>dfs.replication</name>
  <value>8</value>
</property>
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx512m</value>
</property>
</configuration>

First few Hadoop commands:

hadoop namenode -format
hadoop namenode

Things to resolve:

$HADOOP_HOME/sbin/start-all.sh does not work at all; it throws a lot of errors

A Not acceptable Rest with Spring

Trying to come up with a clever title for a blog post is not easy.

This problem caused me headaches for two days, and even though I have resolved it, I still have no idea why or how it happened.

I have built REST applications with Spring many times in the past, and it is easy. At one point in the process for this one I created a REST application with STS just to confirm that it was a no-brainer, which both helped and puzzled me.

The simplest class that worked for me was something like this:

import org.springframework.http.MediaType;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseBody;

@Controller
@RequestMapping(value="/rest")
public class Rest {
    @RequestMapping(value="/{something}", method=RequestMethod.GET, produces=MediaType.APPLICATION_JSON_VALUE)
    @ResponseBody
    public String[] firstTry(@PathVariable String something) {
        // echo the path variable back along with a test value, serialized as JSON
        return new String[]{something, "testValue"};
    }
}

I could quickly package it with Maven and run it on WildFly, and it would work as expected.

When I tried to do the same in IntelliJ I got a non-working application that returned a 406 (Not Acceptable) status code for all REST calls.

I compared the web.xml, the application context XML files, the pom files, everything I could think of; the STS app would work with nothing special, but the IntelliJ app would not.

Most articles I found talked about bugs in some versions of Spring and told you to make sure you had the <mvc:annotation-driven/> element in your application context XML. I tried many versions, and adding the annotation-driven element did not fix the issue.
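For reference, this is roughly the fragment I was adding to the application context, trimmed down to just the parts relevant to annotation-driven MVC (my real file has more in it):

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:mvc="http://www.springframework.org/schema/mvc"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd
                           http://www.springframework.org/schema/mvc
                           http://www.springframework.org/schema/mvc/spring-mvc.xsd">

    <!-- enables annotation-driven Spring MVC, including message converters -->
    <mvc:annotation-driven/>

</beans>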

I finally found an article that explained that you need to specify the jackson-databind library in your pom.xml, otherwise you get 406 responses. Once I added the Jackson libraries to my IntelliJ project, everything worked as expected.
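The dependency I added was along these lines; the version is just whatever was current at the time, so treat it as an example rather than the exact coordinates from my pom:

<dependency>
    <!-- Jackson JSON mapper used by Spring MVC to serialize @ResponseBody return values -->
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.3.1</version>
</dependency>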

No Jackson libraries are specified in my STS-generated project and it works, so I am still puzzled as to why this was needed in the IntelliJ project.

JDK7 and JPA

We have to migrate some projects from Java 1.6 to 1.7 and I was looking at what would break. Something I noticed is that a Spring 3.2 project using Java 1.6 does not need to specify the javax.persistence dependency but a similar project using Java 1.7 does need to. I am wondering how many other surprises I will find with these migrations.
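One way to declare that dependency is the Hibernate-packaged JPA 2.0 API jar; the exact coordinates and version below are my best recollection rather than something copied from the project, so double-check them against your own setup:

<dependency>
    <!-- provides the javax.persistence annotations and interfaces at compile time -->
    <groupId>org.hibernate.javax.persistence</groupId>
    <artifactId>hibernate-jpa-2.0-api</artifactId>
    <version>1.0.1.Final</version>
</dependency>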

Upgrading Groovy and Grails

I have started to learn what Groovy and Grails can do for web application development, and I have already run into a fun problem.

After I installed GGTS (from Spring.io), I started a simple application and then wanted to upgrade the Groovy and Grails version to the latest.

Not as straightforward as I would have thought, but I got everything working again.

The first thing I learned is that GVM is a useful tool for installing Groovy and Grails and managing multiple versions. Doing it manually does not make sense.

I also learned from a Stack Overflow article that you need to erase the .metadata directory in the GGTS workspace after upgrading. This forces you to re-import your projects, but it is the only way I have found to recover them, so it is a small inconvenience.

Back to learning!

Learning Groovy and Grails – 1 of X

I want to learn Groovy and Grails because I want to deliver applications faster and with better quality.

I am unsure if this is the right strategy but it is going to be a fun experiment. Learning is always fun.

I will start with a few videos and go from there.

I will also have to look at the documentation on the Spring site to figure out all the static variables and built-in methods that can be used.