Question
I've set up a single-node Hadoop 2.6.0 cluster on Windows 8.1 using this tutorial - https://wiki.apache.org/hadoop/Hadoop2OnWindows.
All daemons are up and running. I'm able to access HDFS using hadoop fs -ls /,
but I haven't loaded anything yet, so there is nothing to show at the moment.
But when I run a simple MapReduce program, I get the error below:
log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSumsByteArray(II[BI[BIILjava/lang/String;JZ)V
    at org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSumsByteArray(Native Method)
    at org.apache.hadoop.util.NativeCrc32.calculateChunkedSumsByteArray(NativeCrc32.java:86)
    at org.apache.hadoop.util.DataChecksum.calculateChunkedSums(DataChecksum.java:430)
    at org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:202)
    at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:163)
    at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:144)
    at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.close(ChecksumFileSystem.java:400)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
    at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
    at org.apache.hadoop.mapreduce.split.JobSplitWriter.createSplitFiles(JobSplitWriter.java:80)
    at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:603)
    at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:614)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:492)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1314)
    at wordcount.Wordcount.main(Wordcount.java:62)
Error from the hadoop fs -put command:
Any advice would be of great help.
Answer 1:
org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSumsByteArray
is part of hadoop.dll:
JNIEXPORT void JNICALL Java_org_apache_hadoop_util_NativeCrc32_nativeComputeChunkedSumsByteArray
  (JNIEnv *env, jclass clazz,
   jint bytes_per_checksum, jint j_crc_type,
   jarray j_sums, jint sums_offset,
   jarray j_data, jint data_offset, jint data_len,
   jstring j_filename, jlong base_pos, jboolean verify)
{
  ...
An UnsatisfiedLinkError indicates that you did not deploy hadoop.dll to %HADOOP_HOME%\bin, or that the process loaded a wrong DLL from somewhere else. Make sure the correct DLL is placed in %HADOOP_HOME%\bin, and make sure it is the one actually loaded (use Process Explorer to check).
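To check from code which directories the JVM searches and whether the native library was picked up at all, here is a minimal diagnostic sketch; NativeCodeLoader is the Hadoop class quoted below, while the class name NativeLibCheck is just an illustrative choice:

import org.apache.hadoop.util.NativeCodeLoader;

public class NativeLibCheck {
    public static void main(String[] args) {
        // The JVM searches these directories for hadoop.dll;
        // %HADOOP_HOME%\bin (or a PATH entry on Windows) must be covered here.
        System.out.println("java.library.path = "
                + System.getProperty("java.library.path"));
        // True only if System.loadLibrary("hadoop") succeeded.
        System.out.println("native hadoop loaded: "
                + NativeCodeLoader.isNativeCodeLoaded());
    }
}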
You should also see the NativeCodeLoader output in the log:
private static boolean nativeCodeLoaded = false;

static {
    // Try to load native hadoop library and set fallback flag appropriately
    if (LOG.isDebugEnabled()) {
        LOG.debug("Trying to load the custom-built native-hadoop library...");
    }
    try {
        System.loadLibrary("hadoop");
        LOG.debug("Loaded the native-hadoop library");
        nativeCodeLoaded = true;
    } catch (Throwable t) {
        // Ignore failure to load
        if (LOG.isDebugEnabled()) {
            LOG.debug("Failed to load native-hadoop with error: " + t);
            LOG.debug("java.library.path=" +
                System.getProperty("java.library.path"));
        }
    }
    if (!nativeCodeLoaded) {
        LOG.warn("Unable to load native-hadoop library for your platform... " +
            "using builtin-java classes where applicable");
    }
}
Enable DEBUG level for this component and you should see "Loaded the native-hadoop library" (since your code acts as if hadoop.dll was loaded, the library itself is being found). The most likely problem is that the wrong DLL is loaded because it is found first in the PATH.
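To enable that DEBUG output (and to silence the "No appenders" warning from the question), a minimal log4j.properties sketch along these lines should work; the appender name console is just a conventional choice:

log4j.rootLogger=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %-5p %c{2}: %m%n
# Turn on the debug output of the native-library loader shown above
log4j.logger.org.apache.hadoop.util.NativeCodeLoader=DEBUG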
Answer 2:
hadoop.dll should also be added to C:\Windows\System32. I got it working with the help of this link - http://cnblogs.com/marost/p/4372778.html (use an online translator to translate it to your native language).
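For example, from an elevated Command Prompt (assuming %HADOOP_HOME% points at the Hadoop 2.6.0 install directory):

copy "%HADOOP_HOME%\bin\hadoop.dll" C:\Windows\System32\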
Source: https://stackoverflow.com/questions/31429381/error-while-running-map-reduce-on-hadoop-2-6-0-on-windows