Question
I am trying to connect to a remote HDFS instance as follows:
Configuration conf = new Configuration();
conf.set("fs.defaultFS", "hdfs://hostName:8020");
conf.set("fs.hdfs.impl", "org.apache.hadoop.hdfs.DistributedFileSystem");
FileSystem fs = FileSystem.get(conf);
RemoteIterator<LocatedFileStatus> ri = fs.listFiles(fs.getHomeDirectory(), false);
while (ri.hasNext()) {
    LocatedFileStatus lfs = ri.next();
    //log.debug(lfs.getPath().toString());
}
fs.close();
Here are my Maven dependencies:
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-mapreduce-client-core</artifactId>
    <version>2.7.1</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.7.1</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-examples</artifactId>
    <version>1.2.1</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdfs</artifactId>
    <version>2.7.1</version>
</dependency>
And here is the result of the hadoop version command on my remote node:
hadoop version
Hadoop 2.7.1.2.3.0.0-2557
But I get:
Exception in thread "main" java.lang.UnsupportedOperationException: Not implemented by the DistributedFileSystem FileSystem implementation
at org.apache.hadoop.fs.FileSystem.getScheme(FileSystem.java:217)
at org.apache.hadoop.fs.FileSystem.loadFileSystems(FileSystem.java:2624)
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2634)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2651)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:170)
at filecheck.HdfsTest.main(HdfsTest.java:21)
And this is the line that causes the error:
FileSystem fs = FileSystem.get(conf);
Any idea why this might be happening?
After trying Manjunath's answer, here is what I get:
ERROR util.Shell: Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:356)
at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:371)
at org.apache.hadoop.util.Shell.<clinit>(Shell.java:364)
at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80)
at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2807)
at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2802)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2668)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:170)
at filecheck.HdfsTest.main(HdfsTest.java:27)
15/11/16 09:48:23 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Exception in thread "main" java.lang.IllegalArgumentException: Pathname from hdfs://hostName:8020 is not a valid DFS filename.
at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:197)
at org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:106)
at org.apache.hadoop.hdfs.DistributedFileSystem$DirListingIterator.<init>(DistributedFileSystem.java:940)
at org.apache.hadoop.hdfs.DistributedFileSystem$DirListingIterator.<init>(DistributedFileSystem.java:927)
at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:872)
at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:868)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.listLocatedStatus(DistributedFileSystem.java:886)
at org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:1694)
at org.apache.hadoop.fs.FileSystem$6.<init>(FileSystem.java:1787)
at org.apache.hadoop.fs.FileSystem.listFiles(FileSystem.java:1783)
at filecheck.HdfsTest.main(HdfsTest.java:29)
Answer 1:
My HDFS client code uses hadoop-hdfs and also requires:
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>2.7.1</version>
</dependency>
and I use the Hortonworks repository:
<repository>
    <id>repo.hortonworks.com</id>
    <name>Hortonworks HDP Maven Repository</name>
    <url>http://repo.hortonworks.com/content/repositories/releases/</url>
</repository>
I think you're picking up the wrong version of FileSystem.
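One quick way to check this (a diagnostic sketch of my own, not something the original answer includes) is to print which jar the FileSystem class is actually resolved from; if it points at a 1.x artifact such as the hadoop-core pulled in transitively by hadoop-examples 1.2.1, the classpath is mixing Hadoop versions:

import org.apache.hadoop.fs.FileSystem;

public class WhichHadoopJar {
    public static void main(String[] args) {
        // Prints the jar on the classpath that the FileSystem class was loaded from.
        // getCodeSource() is non-null here because the class comes from a jar,
        // not from the boot classpath.
        System.out.println(FileSystem.class.getProtectionDomain()
                .getCodeSource().getLocation());
    }
}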
Answer 2:
The exception is occurring in FileSystem.java, in the getScheme() method, which simply throws an UnsupportedOperationException:
public String getScheme() {
    throw new UnsupportedOperationException("Not implemented by the " + getClass().getSimpleName() + " FileSystem implementation");
}
It is calling the getScheme() method of the FileSystem class, instead of the getScheme() method of the DistributedFileSystem class.
The getScheme() method of the DistributedFileSystem class returns:
@Override
public String getScheme() {
    return HdfsConstants.HDFS_URI_SCHEME;
}
So, to overcome this problem, you need to change the "FileSystem.get(conf)" statement, as shown below:
DistributedFileSystem fs = (DistributedFileSystem) FileSystem.get(conf);
EDIT:
I tried out the program and it worked perfectly fine for me. In fact, it works with and without the cast. Following is my code (the only difference is that I set the recursive listing flag to true):
package com.hadooptests;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;
import org.apache.hadoop.hdfs.DistributedFileSystem;

import java.io.IOException;

public class HDFSConnect {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://machine:8020");
        conf.set("fs.hdfs.impl", "org.apache.hadoop.hdfs.DistributedFileSystem");

        DistributedFileSystem fs = null;
        try {
            fs = (DistributedFileSystem) FileSystem.get(conf);
            RemoteIterator<LocatedFileStatus> ri;
            ri = fs.listFiles(new Path("hdfs://machine:8020/"), true);
            while (ri.hasNext()) {
                LocatedFileStatus lfs = ri.next();
                System.out.println(lfs.getPath().toString());
            }
            fs.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
My Maven:
<dependencies>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>2.7.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>2.7.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-core</artifactId>
        <version>1.2.1</version>
    </dependency>
</dependencies>
<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-jar-plugin</artifactId>
            <version>2.6</version>
            <configuration>
                <archive>
                    <manifest>
                        <mainClass>com.hadooptests.HDFSConnect</mainClass>
                    </manifest>
                </archive>
            </configuration>
        </plugin>
    </plugins>
</build>
I ran the program as:
java -cp "%CLASSPATH%;hadooptests-1.0-SNAPSHOT.jar" com.hadooptests.HDFSConnect
where CLASSPATH is set to:
.;%HADOOP_HOME%\etc\hadoop\;%HADOOP_HOME%\share\hadoop\common\*;%HADOOP_HOME%\share\hadoop\common\lib\*;%HADOOP_HOME%\share\hadoop\hdfs\*;%HADOOP_HOME%\share\hadoop\hdfs\lib\*;%HADOOP_HOME%\share\hadoop\mapreduce\*;%HADOOP_HOME%\share\hadoop\mapreduce\lib\*;%HADOOP_HOME%\share\hadoop\tools\*;%HADOOP_HOME%\share\hadoop\tools\lib\*;%HADOOP_HOME%\share\hadoop\yarn\*;%HADOOP_HOME%\share\hadoop\yarn\lib\*
Some of the output I got:
hdfs://machine:8020/app-logs/machine/logs/application_1439815019232_0001/machine.corp.com_45454
hdfs://machine:8020/app-logs/machine/logs/application_1439815019232_0002/machine.corp.com_45454
hdfs://machine:8020/app-logs/machine/logs/application_1439817471006_0002/machine.corp.com_45454
hdfs://machine:8020/app-logs/machine/logs/application_1439817471006_0003/machine.corp.com_45454
EDIT 2:
My environment:
Hadoop 2.7.1 on Windows.
I installed HDP 2.3.0, which deploys Hadoop 2.7.1.
Source: https://stackoverflow.com/questions/33681940/cannot-connect-to-remote-hdfs-from-windows