Job Token file not found when running Hadoop wordcount example

Submitted by 牧云@^-^@ on 2019-12-02 17:17:55

Question


I just installed Hadoop successfully on a small cluster. Now I'm trying to run the wordcount example but I'm getting this error:

hdfs://localhost:54310/user/myname/test11
12/04/24 13:26:45 INFO input.FileInputFormat: Total input paths to process : 1
12/04/24 13:26:45 INFO mapred.JobClient: Running job: job_201204241257_0003
12/04/24 13:26:46 INFO mapred.JobClient:  map 0% reduce 0%
12/04/24 13:26:50 INFO mapred.JobClient: Task Id : attempt_201204241257_0003_m_000002_0, Status : FAILED
Error initializing attempt_201204241257_0003_m_000002_0:
java.io.IOException: Exception reading file:/tmp/mapred/local/ttprivate/taskTracker/myname/jobcache/job_201204241257_0003/jobToken
    at org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:135)
    at org.apache.hadoop.mapreduce.security.TokenCache.loadTokens(TokenCache.java:165)
    at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1179)
    at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1116)
    at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2404)
    at java.lang.Thread.run(Thread.java:722)
Caused by: java.io.FileNotFoundException: File file:/tmp/mapred/local/ttprivate/taskTracker/myname/jobcache/job_201204241257_0003/jobToken does not exist.
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:397)
    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:251)
    at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:125)
    at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:283)
    at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:427)
    at org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:129)
    ... 5 more

Any help?


Answer 1:


I just worked through this same error. Setting the permissions recursively on my Hadoop directory didn't help. Following Mohyt's recommendation here, I modified core-site.xml (in the hadoop/conf/ directory) to remove the hadoop.tmp.dir property where I had specified a custom temp directory. After allowing Hadoop to create its own temp directory, I'm running error-free.
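For reference, "removing the place where I specified the temp directory" means deleting the hadoop.tmp.dir property from conf/core-site.xml and leaving only the required settings. A minimal core-site.xml for a pseudo-distributed Hadoop 1.x setup might then look like the sketch below (the port 54310 matches the HDFS URL in the question; adjust it to your cluster, and restart the daemons after editing):

```xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
  </property>
  <!-- No hadoop.tmp.dir property here: Hadoop then falls back to its
       built-in default, /tmp/hadoop-${user.name}, and creates that
       directory itself with permissions the TaskTracker can use. -->
</configuration>
```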




Answer 2:


It is better to create your own temp directory.

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/unmesha/mytmpfolder/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  .....
</configuration>

And give it the appropriate permissions:

unmesha@unmesha-virtual-machine:~$ chmod 750 /home/unmesha/mytmpfolder/tmp
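The chmod above assumes the directory tree already exists. A minimal shell sketch that creates it and sets the permissions in one step (the path is the example one from this answer; substitute your own):

```shell
#!/bin/sh
# Example temp-dir path from the answer above; change to suit your setup.
TMPDIR="$HOME/mytmpfolder/tmp"

# Create the full directory tree if it does not exist yet.
mkdir -p "$TMPDIR"

# Owner: rwx, group: r-x, others: none (matches chmod 750 in the answer).
chmod 750 "$TMPDIR"

# Show the result so you can confirm owner and mode.
ls -ld "$TMPDIR"
```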

Check this for core-site.xml configuration.



Source: https://stackoverflow.com/questions/10303169/job-token-file-not-found-when-running-hadoop-wordcount-example
