I was wondering if anyone knew what exactly an S3 prefix is and how it interacts with Amazon's published S3 rate limits:
Amazon S3 automatically scales to high request rates; for example, your application can achieve at least 3,500 PUT/POST/DELETE and 5,500 GET requests per second per prefix in a bucket.
This seems to be obliquely addressed in an Amazon release announcement:
https://aws.amazon.com/about-aws/whats-new/2018/07/amazon-s3-announces-increased-request-rate-performance/
Performance scales per prefix, so you can use as many prefixes as you need in parallel to achieve the required throughput. There are no limits to the number of prefixes.
This S3 request rate performance increase removes any previous guidance to randomize object prefixes to achieve faster performance. That means you can now use logical or sequential naming patterns in S3 object naming without any performance implications. This improvement is now available in all AWS Regions. For more information, visit the Amazon S3 Developer Guide.
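If it helps to make "use as many prefixes as you need in parallel" concrete, here is a minimal sketch assuming boto3 and a made-up bucket/prefix layout (none of these names come from the announcement), spreading parallel writes across several prefixes so each prefix stays within its own request-rate budget:

    # Minimal sketch: round-robin parallel PUTs over several prefixes so no single
    # prefix has to absorb the whole request rate. Bucket and prefix names are made up.
    from concurrent.futures import ThreadPoolExecutor

    import boto3

    s3 = boto3.client("s3")
    BUCKET = "my-example-bucket"                      # assumption: your bucket name
    PREFIXES = ["shard-0/", "shard-1/", "shard-2/", "shard-3/"]

    def put_object(i: int) -> None:
        # Each prefix scales independently, so spreading keys over PREFIXES raises
        # the aggregate PUT throughput available to this writer.
        key = f"{PREFIXES[i % len(PREFIXES)]}object-{i:06d}.bin"
        s3.put_object(Bucket=BUCKET, Key=key, Body=b"example payload")

    with ThreadPoolExecutor(max_workers=32) as pool:
        list(pool.map(put_object, range(1_000)))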
S3 prefixes used to be determined by the first 6-8 characters;
This changed in mid-2018 - see the announcement: https://aws.amazon.com/about-aws/whats-new/2018/07/amazon-s3-announces-increased-request-rate-performance/
But that is a half-truth. Prefixes (in the old sense) still actually matter.
S3 is not traditional “storage” - each directory/filename is a separate object in a key/value object store, and the data has to be partitioned/sharded to scale to quadrillions of objects. So yes, this new sharding is kind of “automatic”, but not really if you create a new process that writes to it with crazy parallelism across different subdirectories: before S3 learns the new access pattern, you may run into S3 throttling while it reshards/repartitions the data accordingly.
Learning new access patterns takes time. Repartitioning of the data takes time.
Things did improve in mid-2018 (~10x throughput-wise for a new bucket with no statistics), but it's still not what it could be if the data were partitioned properly. To be fair, this may not apply to you if you don't have a ton of data, or if your access pattern is not hugely parallel (e.g. running a Hadoop/Spark cluster on many TBs of data in S3 with hundreds of tasks accessing the same bucket in parallel).
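Purely as an illustration (not official guidance), a writer can back off when it hits that throttling while S3 catches up. The sketch below assumes boto3 and hypothetical bucket/key names and simply retries on SlowDown/503 responses; boto3's built-in retry configuration (botocore Config(retries={"mode": "adaptive"})) can achieve a similar effect.

    # Sketch: exponential backoff on S3 "SlowDown" throttling (HTTP 503) while the
    # bucket adapts to a new, highly parallel access pattern. Names are hypothetical.
    import time

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")

    def put_with_backoff(bucket: str, key: str, body: bytes, max_attempts: int = 8) -> None:
        delay = 0.5
        for attempt in range(max_attempts):
            try:
                s3.put_object(Bucket=bucket, Key=key, Body=body)
                return
            except ClientError as err:
                code = err.response.get("Error", {}).get("Code", "")
                if code not in ("SlowDown", "ServiceUnavailable", "503"):
                    raise
                time.sleep(delay)   # back off and let S3 repartition/absorb the load
                delay *= 2
        raise RuntimeError(f"still throttled after {max_attempts} attempts: {key}")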
TLDR:
"Old prefixes" still do matter. Write data to root of your bucket, and first-level directory there will determine "prefix" (make it random for example)
"New prefixes" do work, but not initially. It takes time to accommodate to load.
PS. Another approach - you can reach out to your AWS TAM (if you have one) and ask them to pre-partition a new S3 bucket if you expect a ton of data to be flooding it soon.
In the case you query S3 using Athena, EMR/Hive or Redshift Spectrum, increasing the number of prefixes could mean adding more partitions (as the partition id is part of the prefix). If you use datetime as (one of) your partition keys, the number of partitions (and prefixes) will automatically grow as new data is added over time, and the total maximum S3 GETs per second will grow as well.
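For instance, a Hive/Athena-style date partition puts the partition value straight into the S3 prefix, so every new day creates a new prefix with its own request-rate budget (the table and file names below are a made-up illustration):

    # Sketch: Hive/Athena-style date partitioning. The partition value is part of the
    # key, so each day becomes its own prefix. Table and file names are hypothetical.
    from datetime import date

    def partitioned_key(table: str, day: date, filename: str) -> str:
        return f"{table}/dt={day.isoformat()}/{filename}"

    print(partitioned_key("events", date(2018, 9, 21), "part-0000.parquet"))
    # -> events/dt=2018-09-21/part-0000.parquet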
The upvoted answer on this was a bit misleading for me. If these are the paths
bucket/folder1/sub1/file
bucket/folder1/sub2/file
bucket/1/file
bucket/2/file
Your prefix for file would actually be
folder1/sub1/
folder1/sub2/
1/
2/
https://docs.aws.amazon.com/AmazonS3/latest/dev/ListingKeysHierarchy.html - please see the docs. I had issues with the leading '/' when trying to list keys with the airflow s3hook.
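For what it's worth, here is a small boto3 sketch of listing keys under one of those prefixes (the bucket name is hypothetical). The prefix must match how keys are actually stored, without a leading '/', otherwise the listing typically comes back empty.

    # Sketch: listing keys under a prefix with boto3. The Prefix must not start with
    # "/" because S3 keys have no implicit leading slash. Bucket name is hypothetical.
    import boto3

    s3 = boto3.client("s3")
    resp = s3.list_objects_v2(
        Bucket="my-example-bucket",
        Prefix="folder1/sub1/",   # "/folder1/sub1/" would match nothing here
        Delimiter="/",
    )
    for obj in resp.get("Contents", []):
        print(obj["Key"])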
You're right, the announcement seems to contradict itself. It's just not written properly, but the information is correct.
For reference, here is a response from AWS support to my clarification request:
Hello Oren,
Thank you for contacting AWS Support.
I understand that you read AWS post on S3 request rate performance being increased and you have additional questions regarding this announcement.
Before this upgrade, S3 supported 100 PUT/LIST/DELETE requests per second and 300 GET requests per second. To achieve higher performance, a random hash / prefix schema had to be implemented. Since last year the request rate limits increased to 3,500 PUT/POST/DELETE and 5,500 GET requests per second. This increase is often enough for applications to mitigate 503 SlowDown errors without having to randomize prefixes.
However, if the new limits are not sufficient, prefixes would need to be used. A prefix has no fixed number of characters. It is any string between a bucket name and an object name, for example:
- bucket/folder1/sub1/file
- bucket/folder1/sub2/file
- bucket/1/file
- bucket/2/file
Prefixes of the object 'file' would be:
- /folder1/sub1/
- /folder1/sub2/
- /1/
- /2/
In this example, if you spread reads across all four prefixes evenly, you can achieve 22,000 requests per second.
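To make that arithmetic concrete (a sketch only, with made-up names): four prefixes at roughly 5,500 GETs per second each is where the ~22,000 figure comes from, provided the reads are actually spread across them.

    # Sketch: spread GETs evenly over the four prefixes above so each prefix serves
    # ~1/4 of the traffic (4 x 5,500 GET/s ~= 22,000 GET/s). Names are hypothetical.
    from concurrent.futures import ThreadPoolExecutor

    import boto3

    s3 = boto3.client("s3")
    BUCKET = "my-example-bucket"
    KEYS = ["folder1/sub1/file", "folder1/sub2/file", "1/file", "2/file"]

    def fetch(i: int) -> bytes:
        key = KEYS[i % len(KEYS)]        # round-robin over the four prefixes
        return s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()

    with ThreadPoolExecutor(max_workers=32) as pool:
        list(pool.map(fetch, range(1_000)))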
In order for AWS to handle billions of requests per second, they need to shard the data so they can optimise throughput. To do this they split the data into partitions based on the first 6 to 8 characters of the object key. Remember that S3 is not a hierarchical filesystem; it is only a key-value store, though the key is often used like a file path for organising data, prefix + filename.
Now this is not an issue if you expect fewer than 100 requests per second, but if you have serious requirements above that then you need to think about naming.
For maximum parallel throughput you should consider how your data is consumed and use the most varying characters at the beginning of your key, or even generate 8 random characters for the first 8 characters of the key.
e.g. assuming the first 6 characters define the partition:
files/user/bob would be bad, as all the objects would be on one partition, files/.
2018-09-21/files/bob would be almost as bad if only today's data is being read from partition 2018-0, but slightly better if objects from past years are read as well.
bob/users/files would be pretty good if different users are likely to be using the data at the same time, from partition bob/us, but not so good if Bob is by far the busiest user.
3B6EA902/files/users/bob, where the first part is a random string, would be best for performance, since keys would be spread pretty evenly, but it is more challenging to reference.
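One way to get that last layout without losing the ability to reference objects is to derive the leading characters from a hash of the rest of the key. This is only a sketch; the hash choice and key layout are my own assumptions, not anything AWS prescribes.

    # Sketch: derive the first 8 characters from a hash of the "natural" key, so the
    # prefix is well spread across partitions yet can be recomputed when reading.
    import hashlib

    def hashed_key(natural_key: str) -> str:
        digest = hashlib.md5(natural_key.encode("utf-8")).hexdigest()[:8].upper()
        return f"{digest}/{natural_key}"

    print(hashed_key("files/users/bob"))
    # -> "<first 8 hex chars of the md5 digest>/files/users/bob"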
Depending on your data, you need to think about who is reading what at any one point in time, and make sure that the keys start with enough variation to partition appropriately.
For your example, let's assume the partition is taken from the first 6 characters of the key:
- for the key Development/Projects1.xls the partition key would be Develo
- for the key Finance/statement1.pdf the partition key would be Financ
- for the key Private/taxdocument.pdf the partition key would be Privat
- for the key s3-dg.pdf the partition key would be s3-dg.
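A tiny sketch of that mapping, just to make the 6-character assumption explicit (the fixed width is an illustration used in this example, not a documented S3 parameter):

    # Sketch: the hypothetical "first 6 characters" partition key from the example above.
    def partition_key(object_key: str, width: int = 6) -> str:
        return object_key[:width]

    for key in ("Development/Projects1.xls", "Finance/statement1.pdf",
                "Private/taxdocument.pdf", "s3-dg.pdf"):
        print(f"{key} -> {partition_key(key)}")
    # Develo, Financ, Privat, s3-dg.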