Putting a remote file into Hadoop without copying it to local disk


Question


I am writing a shell script to put data into Hadoop as soon as it is generated. I can ssh to my master node, copy the files to a folder over there, and then put them into Hadoop. I am looking for a shell command that avoids copying the file to the master node's local disk. To better explain what I need, here is what I have so far:

1) Copy the file to the master node's local disk:

scp test.txt username@masternode:/folderName/

I have already set up an SSH connection using keys, so no password is needed for this step.

2) I can use ssh to remotely execute the hadoop put command:

ssh username@masternode "hadoop dfs -put /folderName/test.txt hadoopFolderName/"

What I am looking for is how to pipe/combine these two steps into one and skip the local copy of the file on the master node's disk.

Thanks.

In other words, I want to pipe several commands together so that I can avoid the intermediate copy.


Answer 1:


Try this (untested):

cat test.txt | ssh username@masternode "hadoop dfs -put - hadoopFoldername/"

I've used similar tricks to copy directories around:

tar cf - . | ssh remote "(cd /destination && tar xvf -)"

This sends the output of local-tar into the input of remote-tar.
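Coming back to the original goal of putting data into HDFS as soon as it is generated, the same pipe can be driven by a small watcher loop on the source machine. This is an untested sketch: the watched directory, remote user/host, and HDFS target folder are assumptions, and it relies on inotifywait from the inotify-tools package.

# Stream each newly finished file straight into HDFS, with no copy on the master's local disk.
# /data/outgoing, username@masternode and hadoopFolderName/ are placeholders.
inotifywait -m -e close_write --format '%w%f' /data/outgoing | while read -r file; do
    cat "$file" | ssh username@masternode "hadoop dfs -put - hadoopFolderName/$(basename "$file")"
done

The $(basename "$file") is expanded locally before ssh runs, so each stream lands in HDFS under the same file name it had on the source machine.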




Answer 2:


Is the node where you generated the data able to reach each of your cluster nodes (the name node and all the data nodes)?

If you do have that connectivity, you can just execute the hadoop fs -put command directly from the machine where the data is generated (assuming you have the Hadoop binaries installed there too):

#> hadoop fs -fs masternode:8020 -put test.bin hadoopFolderName/
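An equivalent invocation, if you prefer to spell out the destination rather than override the default filesystem, is to use a fully qualified hdfs:// URI. Untested sketch; the hostname, port 8020 (the usual namenode RPC port), and the paths are assumptions:

# put directly from the data-generating machine to the remote HDFS
hadoop fs -put test.bin hdfs://masternode:8020/user/username/hadoopFolderName/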



Answer 3:


Hadoop provides a couple of REST interfaces. Check Hoop and WebHDFS. Using them from non-Hadoop environments, you should be able to put the file into HDFS without copying it to the master first.
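For WebHDFS specifically, creating a file is a two-step HTTP exchange: first ask the namenode where to write, then send the bytes to the datanode it redirects you to. A minimal curl sketch, untested, assuming WebHDFS is enabled (dfs.webhdfs.enabled=true), security is off, and the usual HTTP port (50070 on Hadoop 1.x/2.x, 9870 on 3.x); host, user, and path are placeholders:

# Step 1: the namenode replies with a 307 redirect whose Location header names a datanode URL.
curl -i -X PUT "http://masternode:50070/webhdfs/v1/user/username/hadoopFolderName/test.txt?op=CREATE&user.name=username"

# Step 2: upload the file body to the URL taken from that Location header.
curl -i -X PUT -T test.txt "<datanode URL from the Location header>"

Nothing here touches the master node's local disk; the data flows from the client directly to a datanode.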




Answer 4:


(untested)

Since the node where you create your data has access to the Internet, perhaps you could install the Hadoop client software there and add it to the cluster as a temporary node. After a normal hadoop fs -put, disconnect and remove the temporary node; the Hadoop system will then automatically replicate your file's blocks inside the cluster.
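A minimal sketch of the put step from such a temporary client node, assuming the Hadoop binaries are installed there and a copy of the cluster's configuration directory is available (paths and the target folder are assumptions, untested):

# point the client at the cluster's configuration (copies of core-site.xml / hdfs-site.xml)
export HADOOP_CONF_DIR=/etc/hadoop/conf
# the put streams the data to the datanodes; nothing is staged on the master node
hadoop fs -put test.txt hadoopFolderName/
# once the command returns, HDFS holds the replicated blocks and this node can be removed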



Source: https://stackoverflow.com/questions/11270509/putting-a-remote-file-into-hadoop-without-copying-it-to-local-disk
