Transfer file out from HDFS

Submitted by 不问归期 on 2019-12-02 20:34:37

So you probably have a file with a bunch of parts as the output from your Hadoop program.

part-r-00000
part-r-00001
part-r-00002
part-r-00003
part-r-00004

So let's do one part at a time:

for i in `seq 0 4`; do
  hadoop fs -copyToLocal output/part-r-0000$i ./
  scp ./part-r-0000$i you@somewhere:/home/you/
  rm ./part-r-0000$i
done
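Note that the part filenames are zero-padded to five digits, so the `0000$i` trick only works for single-digit part numbers. A minimal sketch of building the names with printf instead (NUM_PARTS is an assumed variable; the hadoop/scp calls are left as comments so the snippet runs anywhere):

```shell
# Sketch: build zero-padded part names with printf so the loop also
# works past part-r-00009. NUM_PARTS is an assumed variable here.
NUM_PARTS=12
for i in $(seq 0 $((NUM_PARTS - 1))); do
  part=$(printf 'part-r-%05d' "$i")
  echo "$part"
  # hadoop fs -copyToLocal "output/$part" ./   # actual transfer, as above
  # scp "./$part" you@somewhere:/home/you/
  # rm "./$part"
done
```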

You may have to look up how to supply a password to scp (or set up key-based authentication instead).

This is the simplest way to do it:

ssh <YOUR_HADOOP_GATEWAY> "hdfs dfs -cat <src_in_HDFS> " > <local_dst>

It works for binary files too.
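A minimal local stand-in (no cluster needed) for why this is safe with binary data: bytes streamed through a pipe and a shell redirect arrive unmodified. The /tmp paths are made up for the demo:

```shell
# Local demonstration that a pipe + redirect preserves bytes exactly,
# which is why `ssh host "hdfs dfs -cat src" > dst` is binary-safe.
# The /tmp paths are invented for this demo.
printf '\x00\x01binary\x02' > /tmp/src_demo
cat /tmp/src_demo > /tmp/dst_demo   # stands in for: ssh gw "hdfs dfs -cat src" > dst
cmp -s /tmp/src_demo /tmp/dst_demo && echo "identical"
```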

I think the simplest solution would be a network mount or SSHFS to expose the remote file server directory as if it were local.
You can also mount an FTP server as a local directory: http://www.linuxnix.com/2011/03/mount-ftp-server-linux.html

You could make use of the webHDFS REST API to do that. Run curl from the machine where you want to download the files.

curl -i -L "http://namenode:50075/webhdfs/v1/path_of_the_file?op=OPEN" -o ~/destination
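For clarity, here is how that URL breaks down into its pieces; the host, port, and HDFS path below are placeholders, not a real cluster:

```shell
# Sketch: assemble the WebHDFS OPEN URL from its parts. All values are
# hypothetical; the port varies with your Hadoop version and config.
NAMENODE="namenode"
PORT=50075
HDFS_PATH="/user/you/output/part-r-00000"   # /webhdfs/v1 is followed by the absolute HDFS path
URL="http://${NAMENODE}:${PORT}/webhdfs/v1${HDFS_PATH}?op=OPEN"
echo "$URL"
# curl -i -L "$URL" -o ./part-r-00000   # would perform the actual download
```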

Another approach could be to use the DataNode API through wget:

wget http://$datanode:50075/streamFile/path_of_the_file

But the most convenient way, IMHO, is to use the NameNode web UI. Since this machine is part of the network, you can just point your web browser to NameNode_Machine:50070. From there, browse through the HDFS, open the file you want to download, and click "Download this file".

I was trying to do this too (I was using Kerberos security). This page helped me after a small update: https://hadoop.apache.org/docs/r1.0.4/webhdfs.html#OPEN

Running curl -L -i --negotiate "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=OPEN" directly didn't work for me; I'll explain why.

This command will do two steps:

  1. find the file you want to download and create a temporary link - returns 307 Temporary Redirect

  2. follow this link and download the data - returns HTTP 200 OK.

The -L switch tells curl to follow the redirect and continue saving the file directly. If you add -v to the curl command, it logs to the output; there you'll see the two steps described above in the command line. BUT, due to an older curl version (which I cannot update), it didn't work for me.

SOLUTION FOR THIS (in Shell):

LOCATION=`curl -i --negotiate -u : "${FILE_PATH_FOR_DOWNLOAD}?op=OPEN" | /usr/bin/perl -n -e '/^Location: (.*)$/ && print "$1\n"'`

This will get temporary link and save it to $LOCATION variable.

RESULT=`curl -v -L --negotiate -u : "${LOCATION}" -o ${LOCAL_FILE_PATH_FOR_DOWNLOAD}`

And this will save the data to your local file, thanks to the -o <file-path> option.

I hope it helped.

J.
