I want to run a map reduce example:
package my.test;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import java.util.Map.Entry;
`hadoop classpath` and `hbase classpath` print the cluster's classpath; export the result as `HADOOP_CLASSPATH`. (This is the standard way to make use of the cluster's local environment.)
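For example, a minimal sketch (assumes `hadoop` and `hbase` are installed and on the `PATH` of the node you submit from; the guard just skips the export on machines without them):

```shell
# The cluster's own tooling prints its local classpath; exporting it as
# HADOOP_CLASSPATH lets the submitting JVM see the same jars.
if command -v hadoop >/dev/null 2>&1 && command -v hbase >/dev/null 2>&1; then
  export HADOOP_CLASSPATH="$(hadoop classpath):$(hbase classpath)"
fi
```

Note this only affects the client-side JVM; jars needed by the map/reduce tasks themselves still have to be shipped (e.g. via `-libjars`).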
Use the `-libjars` option of MapReduce if the job was not finding the jar you are looking for. I'm using the following script to add the job's dependencies (in its `lib` folder) and HBase's dependencies to the job's classpath:
cp=$(find `pwd` -name '*.jar' | tr '\n' ',')         # job's own jars, comma-separated
cp=$cp$(hbase mapredcp 2>&1 | tail -1 | tr ':' ',')  # plus HBase's MapReduce dependencies
export HADOOP_CLASSPATH=`echo ${cp} | sed s/,/:/g`   # client JVM expects colons, not commas
hadoop jar `pwd`/bin/mr.jar \
-libjars ${cp} \
"$@"
You have two easy options:
1) Build a fat jar, where your mr.jar file includes the hbase-0.94.0.jar (can be done with `mvn package` plus the maven-shade or maven-assembly plugin).
2) Use the `GenericOptionsParser` (which I think you are already trying to do by implementing `Tool`) and then specify the `-libjars` parameter on the command line.
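A sketch of option 2 (the jar name, driver class, and paths below are placeholders): `hbase mapredcp` emits a colon-separated list, while `-libjars` expects commas, so the list is translated with `tr`:

```shell
# hbase mapredcp prints HBase's minimal MapReduce dependencies,
# colon-separated; -libjars wants a comma-separated list.
to_libjars() { tr ':' ','; }

# Hypothetical submission (requires a configured cluster; the guard
# skips it where hbase is not installed). GenericOptionsParser, via
# ToolRunner, consumes -libjars before your driver sees the rest.
if command -v hbase >/dev/null 2>&1; then
  LIBJARS=$(hbase mapredcp | to_libjars)
  hadoop jar mr.jar my.test.Driver -libjars "$LIBJARS" /in /out
fi
```

For `-libjars` to be honored, your driver must go through `ToolRunner.run(...)` rather than parsing `args` itself.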
I struggled with the same problem. This post of mine has it working: https://my-bigdata-blog.blogspot.in/2017/08/Hbase-Programming-Java-Netbeans-Maven.html You need the line below in your code, along with setting `HADOOP_CLASSPATH`: `TableMapReduceUtil.addDependencyJars(job);`