Question
We have hosted our WordPress site on AWS EC2 with Auto Scaling and EFS. But all of a sudden, PermittedThroughput dropped to near zero bytes and BurstCreditBalance has been shrinking day by day (from about 2 TB down to a few MBs!). The EFS file system only holds around 2 GB of data! We are facing this issue for the second time. I would like to know if anyone has had a similar experience or has any suggestions for this situation. We are planning to move from EFS to NFS or GlusterFS in the coming days.
Answer 1:
Throughput on Amazon EFS scales as a file system grows.
...
The bursting capability (both in terms of length of time and burst rate) of a file system is directly related to its size. Larger file systems can burst at larger rates for longer periods of time. Therefore, if your application needs to burst more (that is, if you find that your file system is running out of burst credits), you should increase the size of your file system.
Note
There’s no provisioning with Amazon EFS, so to make your file system larger you need to add more data to it.
http://docs.aws.amazon.com/efs/latest/ug/performance.html
You mentioned that your filesystem is only storing 2 GiB of data. That's the problem: it's counterintuitive at first glance, but EFS actually gets faster as it gets larger... and the opposite is also true. Small filesystems accumulate burst credits at a rate of only 50 KiB/s per GiB of data stored.
So, for a 2 GiB filesystem, transferring even a very small amount of data daily will deplete your credits:
60 sec/minute ×
60 min/hour ×
24 hr/day ×
0.05 MiB/s per GiB stored ×
2 GiB stored = 8,640 MiB/day
So about 8.4 GiB (8,640 MiB) per day is all the data transfer this filesystem can sustain.
This seems odd until you remember that you're only paying $0.60 per month.
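The same arithmetic generalizes to any file system size. Here is a minimal shell sketch (the 0.05 MiB/s-per-GiB accrual rate comes from the documentation quoted above):
# Sketch: sustainable daily transfer at the baseline credit-accrual rate.
SIZE_GIB=2   # set to your file system's size in GiB
awk -v gib="$SIZE_GIB" 'BEGIN { printf "%d MiB/day\n", 86400 * 0.05 * gib }'
# For SIZE_GIB=2 this prints 8640 MiB/day, matching the calculation above.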
You can boost the performance linearly by simply storing more data. The filesystem size that is used for the calculation is updated once per hour, so if you go this route, within a couple of hours you should see an uptick.
The reason it's worked well until now is that each new filesystem comes with an initial credit balance equivalent to 2.1 TiB. This is primarily intended to allow the filesystem to be fast as you're initially loading data onto it, but in a low total storage environment such as the one you describe, it will last for days or weeks and then suddenly (apparently) you finally see the system settle down to its correct baseline behavior.
Essentially, you are paying for the settings of two interconnected parameters -- total storage capacity and baseline throughput -- neither of which is something you configure. If you want more storage, just store more files... and if you want more throughput, just... store more files.
Answer 2:
A little late to the party here.
TL;DR
To increase your aggregate baseline throughput, generate dummy data to increase the size of your file system. This will allow for better baseline and burst performance of your file system.
There are two considerations:
- Cost per GB varies by region, but is around $0.30 - $0.36 per GB-month (as of 2018)
- As file system size goes up, other metrics like burst aggregate throughput, maximum burst duration, and % of time file system can burst (per day) also go up. OPINION: I like to get file systems around 256+ GB.
Performance Metrics:
- Baseline Aggregate Throughput
- Burst Aggregate Throughput
- Maximum Burst Duration
- % of Time File System Can Burst (per day)
Check out the performance documentation for more on how each of these metrics increases with file system size.
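As a rough illustration of how those metrics scale together, here is a small sketch using the published rates (baseline credits accrue at 50 KiB/s per GiB, and file systems under 1 TiB can burst at up to 100 MiB/s):
# Sketch: estimate baseline throughput and burst duty cycle for a given size.
SIZE_GIB=256   # try your target file system size in GiB
awk -v gib="$SIZE_GIB" 'BEGIN {
    baseline = gib * 0.05    # MiB/s, from 50 KiB/s per GiB stored
    burst = 100              # MiB/s burst rate for file systems under 1 TiB
    printf "Baseline aggregate throughput: %.1f MiB/s\n", baseline
    printf "Can burst about %.1f%% of the time\n", 100 * baseline / burst
}'
# A 256 GiB file system works out to a 12.8 MiB/s baseline, and it can
# burst roughly 12.8% of the day.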
On a non-Windows server, you can use the following script to generate dummy data and increase the size of your file system:
#!/bin/bash
COUNTER=0
while [ "$COUNTER" -lt "$1" ]; do
    # Use dd to write 1,024 blocks of 1 MiB (1 GiB total) from /dev/zero
    # into the file $2/$COUNTER.txt
    dd if=/dev/zero of="$2/$COUNTER.txt" bs=1048576 count=1024
    echo "Added file ${COUNTER}.txt to ${2}/"
    ((COUNTER++))
done
# Save this file as create.sh
# Be sure to make it executable: sudo chmod +x create.sh
If you intend to run this script from an EC2 instance with the EFS file system mounted on it, my recommendation is to use an instance type with high network performance. This will reduce the time needed to generate the files if you are on a time crunch.
Call the script using: ./create.sh 256 "/mnt/efs-directory/dummy"
NOTE: Running the above command generates 256 files of 1 GiB apiece. If you want a smaller or larger amount of data, just change 256 to the size (in GiB) you want your file system to reach.
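Once the script finishes, you can confirm the growth from the instance (assuming the mount point from the example invocation above; per the first answer, EFS recalculates the size used for throughput about once per hour):
# Check the reported size of the mounted file system and the dummy data
df -h /mnt/efs-directory
du -sh /mnt/efs-directory/dummy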
Some other things you can do are:
- If using Elastic Beanstalk, generate a CloudFront Distribution for the load balancer
- Remove unnecessary plugins from WordPress
- Install PHP opcode caching (OPcache), whether you run WordPress under Apache or Nginx (see the sketch below)
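For the OPcache item, a minimal sketch of enabling it is below. The directive names are standard PHP; the ini path and values are illustrative and vary by distribution:
# Sketch: enable PHP OPcache (ini path varies by distro; values illustrative)
sudo tee /etc/php.d/10-opcache.ini > /dev/null <<'EOF'
opcache.enable=1
opcache.memory_consumption=128
opcache.max_accelerated_files=10000
EOF
sudo systemctl restart php-fpm   # or restart Apache/Nginx, per your stack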
There is a dashboard available for EFS within CloudWatch, should you choose to hook it up. It has all the metrics you should need to understand the performance of your file system.
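If you prefer the command line, the same metrics can be pulled with the AWS CLI. A sketch (fs-12345678 is a placeholder file system ID):
# Sketch: fetch the hourly minimum BurstCreditBalance for the past day
aws cloudwatch get-metric-statistics \
  --namespace AWS/EFS \
  --metric-name BurstCreditBalance \
  --dimensions Name=FileSystemId,Value=fs-12345678 \
  --start-time "$(date -u -d '1 day ago' +%Y-%m-%dT%H:%M:%SZ)" \
  --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  --period 3600 \
  --statistics Minimum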
Source: https://stackoverflow.com/questions/41673284/degrading-performance-of-aws-efs