Differences between Amazon S3 and S3n in Hadoop

隐瞒了意图╮ 2020-12-12 12:48

When I connected my Hadoop cluster to Amazon storage and downloaded files to HDFS, I found that s3:// did not work. While looking for help on the Internet I found that s3n:// works instead. What is the difference between S3 and S3n, and when should each one be used?

3 Answers
  • 2020-12-12 13:04

    The two filesystems for using Amazon S3 are documented on the Hadoop wiki page addressing Amazon S3:

    • S3 Native FileSystem (URI scheme: s3n)
      A native filesystem for reading and writing regular files on S3. The advantage of this filesystem is that you can access files on S3 that were written with other tools. Conversely, other tools can access files written using Hadoop. The disadvantage is the 5GB limit on file size imposed by S3. For this reason it is not suitable as a replacement for HDFS (which has support for very large files).

    • S3 Block FileSystem (URI scheme: s3)
      A block-based filesystem backed by S3. Files are stored as blocks, just like they are in HDFS. This permits efficient implementation of renames. This filesystem requires you to dedicate a bucket for the filesystem - you should not use an existing bucket containing files, or write other files to the same bucket. The files stored by this filesystem can be larger than 5GB, but they are not interoperable with other S3 tools.

    There are two ways that S3 can be used with Hadoop's Map/Reduce, either as a replacement for HDFS using the S3 block filesystem (i.e. using it as a reliable distributed filesystem with support for very large files) or as a convenient repository for data input to and output from MapReduce, using either S3 filesystem. In the second case HDFS is still used for the Map/Reduce phase. [...]

    [emphasis mine]

    So the difference mainly comes down to how the 5GB limit is handled (5GB is the largest object that can be uploaded in a single PUT, even though objects can range in size from 1 byte to 5 terabytes; see How much data can I store?): the S3 Block FileSystem (URI scheme: s3) works around the 5GB limit and can store files up to 5TB, but in turn it replaces HDFS and its files are not interoperable with other S3 tools.
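
    To make the two schemes concrete, here is a minimal sketch (not part of the original answer) of reading a file through the S3 Native FileSystem with Hadoop's Java FileSystem API. The bucket name mybucket and the object key are placeholders, and the fs.s3n.* credential keys assume an older Hadoop release that still ships the s3n connector:

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.URI;

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;

        public class S3nReadExample {
            public static void main(String[] args) throws Exception {
                Configuration conf = new Configuration();
                // Credentials for the S3 Native FileSystem (s3n). The block
                // filesystem (s3) reads fs.s3.awsAccessKeyId /
                // fs.s3.awsSecretAccessKey instead.
                conf.set("fs.s3n.awsAccessKeyId", System.getenv("AWS_ACCESS_KEY_ID"));
                conf.set("fs.s3n.awsSecretAccessKey", System.getenv("AWS_SECRET_ACCESS_KEY"));

                // Placeholder bucket and key: s3n stores this as a plain S3 object,
                // so any other S3 client can read the same data.
                Path path = new Path("s3n://mybucket/input/data.txt");
                FileSystem fs = FileSystem.get(URI.create("s3n://mybucket"), conf);

                try (BufferedReader reader =
                         new BufferedReader(new InputStreamReader(fs.open(path)))) {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        System.out.println(line);
                    }
                }
            }
        }

    Had the same bucket been dedicated to the S3 Block FileSystem (s3://), the data would only appear in S3 as opaque block objects rather than as a readable data.txt.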

  • 2020-12-12 13:14

    Here is an explanation: https://notes.mindprince.in/2014/08/01/difference-between-s3-block-and-s3-native-filesystem-on-hadoop.html

    The first S3-backed Hadoop filesystem was introduced in Hadoop 0.10.0 (HADOOP-574). It was called the S3 block filesystem and it was assigned the URI scheme s3://. In this implementation, files are stored as blocks, just like they are in HDFS. The files stored by this filesystem are not interoperable with other S3 tools - what this means is that if you go to the AWS console and try to look for files written by this filesystem, you won't find them - instead you would find files named something like block_-1212312341234512345 etc.

    To overcome these limitations, another S3-backed filesystem was introduced in Hadoop 0.18.0 (HADOOP-930). It was called the S3 native filesystem and it was assigned the URI scheme s3n://. This filesystem lets you access files on S3 that were written with other tools... When this filesystem was introduced, S3 had a filesize limit of 5GB and hence this filesystem could only operate with files less than 5GB. In late 2010, Amazon... raised the file size limit from 5GB to 5TB...

    Using the S3 block file system is no longer recommended. Various Hadoop-as-a-service providers like Qubole and Amazon EMR go as far as mapping both the s3:// and the s3n:// URIs to the S3 native filesystem to ensure this.

    So always use the native file system. There is no longer a 5GB limit. Sometimes you may have to type s3:// instead of s3n://, but just make sure that any files you create are visible in the bucket explorer in the browser.
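
    As a quick sanity check, here is a small sketch (same placeholder bucket and legacy fs.s3n.* credential keys as in the earlier example, so treat the names as assumptions) that lists a directory through the native filesystem; the paths it prints should correspond one-to-one to the object keys you see in the bucket explorer:

        import java.net.URI;

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FileStatus;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;

        public class S3nListExample {
            public static void main(String[] args) throws Exception {
                Configuration conf = new Configuration();
                conf.set("fs.s3n.awsAccessKeyId", System.getenv("AWS_ACCESS_KEY_ID"));
                conf.set("fs.s3n.awsSecretAccessKey", System.getenv("AWS_SECRET_ACCESS_KEY"));

                FileSystem fs = FileSystem.get(URI.create("s3n://mybucket"), conf);

                // Each entry corresponds to a plain S3 object key, so the same names
                // should be visible in the AWS console / bucket explorer.
                for (FileStatus status : fs.listStatus(new Path("s3n://mybucket/output/"))) {
                    System.out.println(status.getPath() + "\t" + status.getLen() + " bytes");
                }
            }
        }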

    Also see http://docs.aws.amazon.com/ElasticMapReduce/latest/ManagementGuide/emr-plan-file-systems.html.

    Previously, Amazon EMR used the S3 Native FileSystem with the URI scheme, s3n. While this still works, we recommend that you use the s3 URI scheme for the best performance, security, and reliability.

    It also says you can use s3bfs:// to access the old block file system, previously known as s3://.
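
    On an EMR cluster the same FileSystem API works with the recommended s3:// scheme. The sketch below assumes EMR's mapping of s3:// to its native-style filesystem and credentials supplied by the cluster's instance profile (both assumptions about the environment, not something set in the code), plus a placeholder bucket and paths:

        import java.net.URI;

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;

        public class EmrS3WriteExample {
            public static void main(String[] args) throws Exception {
                // No fs.s3.* credential keys are set here; on EMR they are
                // typically provided by the instance profile.
                Configuration conf = new Configuration();
                FileSystem fs = FileSystem.get(URI.create("s3://mybucket"), conf);

                // Placeholder local file and destination key.
                fs.copyFromLocalFile(new Path("/tmp/results.csv"),
                                     new Path("s3://mybucket/output/results.csv"));
            }
        }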

  • 2020-12-12 13:17

    I think your main problem was related to having S3 and S3n as two separate connection points for Hadoop. s3n:// means "a regular file, readable from the outside world, at this S3 URL". s3:// refers to an HDFS-style block filesystem mapped onto an S3 bucket sitting on AWS storage. So to read a regular file from an Amazon storage bucket you must use s3n://, which is why switching to it resolved your problem. The information added by @Steffen is also great!!
