HDP 2.2@Linux/CentOS@OracleVM (Hortonworks) fails on remote submission from Eclipse@Windows


Question


I have HDP 2.2 running on CentOS inside an OracleVM on my local machine (Windows 7) in pseudo-distributed mode. I wanted to test remote job submission, so I created a WordCount example in Eclipse, running outside the VM, and submitted it as follows (the example itself is adapted from elsewhere on the net):

// WordCountMapper / WordCountReducer are defined elsewhere in my project.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class WordCount {
    public static void main(String[] args) throws Exception {
        Path inputPath = new Path("/hdfsinput");
        Path outputDir = new Path("/hdfsoutput");

        // Create configuration (true => load the *-site.xml resources from the classpath)
        Configuration conf = new Configuration(true);

        // Create inputPath on HDFS if needed
        FileSystem hdfs = FileSystem.get(conf);
        if (!hdfs.exists(inputPath))
            hdfs.mkdirs(inputPath);

        // Create job
        Job job = new Job(conf, "WordCount");
        job.setJarByClass(WordCountMapper.class);

        // Setup MapReduce
        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);
        job.setNumReduceTasks(1);

        // Specify key / value types
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // Input
        FileInputFormat.addInputPath(job, inputPath);
        job.setInputFormatClass(TextInputFormat.class);

        // Output
        FileOutputFormat.setOutputPath(job, outputDir);
        job.setOutputFormatClass(TextOutputFormat.class);

        // Delete output if it already exists
        if (hdfs.exists(outputDir))
            hdfs.delete(outputDir, true);
        hdfs.close();

        // Execute job
        int code = job.waitForCompletion(true) ? 0 : 1;
        System.out.println("output code => " + code);
        System.exit(code);
    }
}

I got the following exception back in Eclipse (and in the NameNode log at sandbox.hortonworks.com:50070/logs/hadoop-hdfs-namenode-sandbox.hortonworks.com.log).

Namenode log:

2015-12-07 16:21:14,631 INFO  blockmanagement.BlockManager (BlockManager.java:setReplication(2710)) - Increasing replication from 1 to 10 for /user/root/.staging/job_1449505005810_0001/job.split
2015-12-07 16:21:14,690 INFO  hdfs.StateChange FSNamesystem.java:saveAllocatedBlock(3663)) - BLOCK* allocateBlock: /user/root/.staging/job_1449505005810_0001/job.split. BP-1487918654-10.0.2.15-1418756667447 blk_1073742153_1339{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-b183a7df-9498-4b2c-87f5-4bfb2cf40ca3:NORMAL:10.0.2.15:50010|RBW]]}
2015-12-07 16:21:35,768 WARN  blockmanagement.BlockPlacementPolicy (BlockPlacementPolicyDefault.java:chooseTarget(383)) - Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2015-12-07 16:21:35,769 WARN  blockmanagement.BlockPlacementPolicy (BlockPlacementPolicyDefault.java:chooseTarget(383)) - Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2015-12-07 16:21:35,770 WARN  protocol.BlockStoragePolicy (BlockStoragePolicy.java:chooseStorageTypes(160)) - Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=1, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2015-12-07 16:21:35,770 WARN  blockmanagement.BlockPlacementPolicy (BlockPlacementPolicyDefault.java:chooseTarget(383)) - Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable:  unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
2015-12-07 16:21:35,771 INFO  ipc.Server (Server.java:run(2060)) - IPC Server handler 91 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 10.0.2.2:54842 Call#25 Retry#0
java.io.IOException: File /user/root/.staging/job_1449505005810_0001/job.split could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1549)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3203)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:641)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:482)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

Eclipse console:

org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/root/.staging/job_1449505005810_0002/job.split could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1549)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3203)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:641)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:482)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

at org.apache.hadoop.ipc.Client.call(Client.java:1468)
at org.apache.hadoop.ipc.Client.call(Client.java:1399)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy14.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:399)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy15.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1532)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1349)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:588)

Notice "WARN" statements in Namenode log. Based on those I enabled DEBUG mode on "org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy" and rerun the Job to additionally get following exception in the namenode log before the original exception.

Namenode log (upon job resubmission):

2015-12-07 16:22:17,137 INFO  blockmanagement.BlockManager (BlockManager.java:setReplication(2710)) - Increasing replication from 1 to 10 for /user/root/.staging/job_1449505005810_0002/job.split
2015-12-07 16:22:17,175 INFO  hdfs.StateChange (FSNamesystem.java:saveAllocatedBlock(3663)) - BLOCK* allocateBlock: /user/root/.staging/job_1449505005810_0002/job.split. BP-1487918654-10.0.2.15-1418756667447 blk_1073742154_1340{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-b183a7df-9498-4b2c-87f5-4bfb2cf40ca3:NORMAL:10.0.2.15:50010|RBW]]}
2015-12-07 16:22:38,254 DEBUG blockmanagement.BlockPlacementPolicy (BlockPlacementPolicyDefault.java:chooseLocalRack(530)) - Failed to choose from local rack (location = /default-rack); the second replica is not found, retry choosing ramdomly
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy$NotEnoughReplicasException: 
at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:691)
at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:606)
at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalRack(BlockPlacementPolicyDefault.java:512)
at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalStorage(BlockPlacementPolicyDefault.java:472)
at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:339)
at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:214)
at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:111)
at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:126)
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1545)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3203)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:641)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:482)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

I have tried all the Stack Overflow solutions I could find for the exception "could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and 1 node(s) are excluded in this operation", but none of them resolved it:

  1. Formatting the NameNode.

  2. Copying all the CentOS configuration files from "/usr/hdp/2.2.0.0-2041/hadoop/conf" into a local Windows folder and adding that folder to the Eclipse classpath (see the sketch after this list).

  3. Opening all ports so the sandbox is reachable from Eclipse (including 50010).

  4. Placing the slaves and masters files in /etc/hadoop.

  5. And many others...
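As an illustration of item 2, this is roughly how the copied files can be loaded explicitly from the client; a minimal sketch, assuming a hypothetical C:/hdp-conf folder holding the sandbox's *-site.xml files (Configuration.addResource is the standard Hadoop API for this):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

Configuration conf = new Configuration(true);
// Hypothetical local copies of the sandbox's configuration files
conf.addResource(new Path("file:///C:/hdp-conf/core-site.xml"));
conf.addResource(new Path("file:///C:/hdp-conf/hdfs-site.xml"));
conf.addResource(new Path("file:///C:/hdp-conf/mapred-site.xml"));
conf.addResource(new Path("file:///C:/hdp-conf/yarn-site.xml"));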

Upon examining the code of BlockPlacementPolicyDefault (http://grepcode.com/file/repo1.maven.org/maven2/org.apache.hadoop/hadoop-hdfs/2.6.0/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java), I feel the error is due to faulty logic at line 715, which always returns 0 because the local node has already been added to the excludedNodes set:

703  int addIfIsGoodTarget(DatanodeStorageInfo storage,
704      Set<Node> excludedNodes,
705      long blockSize,
706      int maxNodesPerRack,
707      boolean considerLoad,
708      List<DatanodeStorageInfo> results,
709      boolean avoidStaleNodes,
710      StorageType storageType) {
711    if (isGoodTarget(storage, blockSize, maxNodesPerRack, considerLoad,
712        results, avoidStaleNodes, storageType)) {
713      results.add(storage);
714      // add node and related nodes to excludedNode
715      return addToExcludedNodes(storage.getDatanodeDescriptor(), excludedNodes);
716    } else { 
717      return -1;
718    }
719  }

and thus the following lines are always reached and throw the exception (one needs to see the entire source, linked above, for this to make sense):

681    if (numOfReplicas>0) {
682      String detail = enableDebugLogging;
683      if (LOG.isDebugEnabled()) {
684        if (badTarget && builder != null) {
685          detail = builder.toString();
686          builder.setLength(0); 
687        } else {
688          detail = ""; 
689        }
690      }
691      throw new NotEnoughReplicasException(detail);
692    }

But maybe I am just overthinking this and it is really a configuration issue with the parameters passed to the following method call (some of these parameters must come from the HDFS configuration files):

652            final int newExcludedNodes = addIfIsGoodTarget(storages[i],
653                excludedNodes, blocksize, maxNodesPerRack, considerLoad, results,
654                avoidStaleNodes, type);
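For what it's worth, the kind of client-side parameters I have in mind look something like this; a minimal sketch, assuming the problem is that the NameNode hands back the VM-internal datanode address (10.0.2.15) which Windows cannot reach, which is a guess, not a confirmed diagnosis:

import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration(true);
// Point the client at the sandbox NameNode (host and port taken from the logs above)
conf.set("fs.defaultFS", "hdfs://sandbox.hortonworks.com:8020");
// Assumption: make the client connect to datanodes by hostname instead of
// the VM-internal IP that the NameNode advertises
conf.set("dfs.client.use.datanode.hostname", "true");
// Single-node sandbox, so a single replica suffices
conf.set("dfs.replication", "1");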

Source: https://stackoverflow.com/questions/34140531/hdp-2-2linux-centosoraclevm-hortonworks-fails-on-remote-submission-from-ecli
