s3fs on Amazon EMR: Will it scale for approx. 100 million small files?

Submitted by 荒凉一梦 on 2019-12-05 10:11:43

I would recommend against using s3fs to copy large numbers of small files.

I've tried on a few occasions to move large numbers of small files out of HDFS, and the s3fs daemon kept crashing. I was using both cp and rsync. This gets even more aggravating if you are doing incremental updates. One alternative is to enable the use_cache option and see how it behaves.
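For reference, enabling the cache at mount time looks something like this (a minimal sketch; the bucket name, mount point, and cache directory are placeholders, and credentials are assumed to already be configured, e.g. in ~/.passwd-s3fs):

s3fs bucketname /mnt/s3 -o use_cache=/tmp/s3fs-cache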

We resorted to using s3cmd and iterating over each file, for example with the Unix find command. Something like this:

find <hdfs fuse mounted dir> -type f -exec s3cmd put {} s3://bucketname/ \;
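Note that -exec spawns one s3cmd process per file, which is slow for millions of files. If you want to experiment, something like the following runs several uploads concurrently (a rough sketch; the concurrency level is arbitrary and -P/-0 require GNU findutils):

find <hdfs fuse mounted dir> -type f -print0 | xargs -0 -P 8 -I{} s3cmd put {} s3://bucketname/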

You can also try s3cmd sync, with something like this:

s3cmd sync /<local-dir>/ s3://bucketname/
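Depending on your s3cmd version, sync also takes flags that can help with the incremental-update case mentioned above, for example --skip-existing to avoid re-uploading files already in the bucket (verify the exact flags against s3cmd --help for your version):

s3cmd sync --skip-existing /<local-dir>/ s3://bucketname/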