Question
I'm trying to copy a directory that contains 1,048,578 files into the HDFS file system, but I get the error below:
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:2367)
at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:130)
at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:114)
at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:415)
at java.lang.StringBuffer.append(StringBuffer.java:237)
at java.net.URI.appendSchemeSpecificPart(URI.java:1892)
at java.net.URI.toString(URI.java:1922)
at java.net.URI.<init>(URI.java:749)
at org.apache.hadoop.fs.shell.PathData.stringToUri(PathData.java:565)
at org.apache.hadoop.fs.shell.PathData.<init>(PathData.java:151)
at org.apache.hadoop.fs.shell.PathData.getDirectoryContents(PathData.java:273)
at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
at org.apache.hadoop.fs.shell.CommandWithDestination.recursePath(CommandWithDestination.java:291)
at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:278)
at org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:243)
at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:260)
at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:244)
at org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:220)
at org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:267)
at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:190)
at org.apache.hadoop.fs.shell.Command.run(Command.java:154)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
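The trace shows FsShell recursing into the directory and building a PathData/URI entry for every child in the client JVM (PathData.getDirectoryContents), which exhausts the default client heap. The command itself was not shown in the question; it would have been a client-side copy along these lines, where the local and HDFS paths are placeholders rather than the asker's actual paths:

hadoop fs -copyFromLocal /data/million-files /user/target-dir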
Answer 1:
The issue was essentially with the Hadoop client, not the cluster. It was fixed by disabling the GC overhead limit check and raising the client heap to 4 GB via HADOOP_CLIENT_OPTS. The following command solved my problem:
export HADOOP_CLIENT_OPTS="-XX:-UseGCOverheadLimit -Xmx4096m"
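For completeness, a short sketch of applying the fix end to end; the paths are placeholders and the 4 GB figure is simply the value that worked here:

# HADOOP_CLIENT_OPTS is picked up by the hadoop wrapper script and appended to
# the client JVM's options, so the FsShell process runs with a 4 GB heap and
# without the GC overhead limit check.
export HADOOP_CLIENT_OPTS="-XX:-UseGCOverheadLimit -Xmx4096m"
# Retry the upload in the same shell session (placeholder paths).
hadoop fs -copyFromLocal /data/million-files /user/target-dir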
Answer 2:
Try giving your put (or copyFromLocal) command more heap space. Alternatively, do a less aggressive put operation: copy in batches of a half, a quarter, or a fifth of the total data, as sketched below. All of this copying is driven from the local machine by a single Java client process running with default options, so you are simply overloading it.
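A hedged sketch of what such a batched upload could look like; the batch size of 10,000 and the paths are illustrative assumptions, not from the original answer:

# Feed at most 10,000 local file names to each FsShell invocation, so no single
# client process has to hold the full 1,048,578-entry listing in its heap.
cd /data/million-files
find . -maxdepth 1 -type f -print0 \
  | xargs -0 -n 10000 sh -c 'hadoop fs -put "$@" /user/target-dir/' _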
Source: https://stackoverflow.com/questions/35405690/out-of-memory-issue-for-hadoop-copyfromlocal