Best file system for serving 1GB files using nginx, under moderate write, read performance-wise?

悲&欢浪女 2021-02-01 10:36

I'm going to build a large file server, and I need the Stack Overflow community's advice on file system choice (Linux).

The file server is going to serve 1-2GB sized static files.

3 Answers
  •  时光说笑
    2021-02-01 11:14

    I achieved 80MB/s of "random read" performance per "real" disk (spindle). Here are my findings.

    So, first decide how much traffic you need to push down to users and how much storage you need per server.

    You may skip the disk setup advice given below since you already have a RAID5 setup.

    Let's take the example of a dedicated server with 1Gbps bandwidth and 3 * 2TB disks. Keep the first disk dedicated to the OS and tmp. From the other 2 disks you can create a software RAID (for me, it worked better than the on-board hardware RAID). Otherwise, you need to divide your files equally across independent disks. The idea is to have both disks share the read/write load equally. Software RAID-0 is the best option.
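    As a sketch of the disk setup described above, the 2-disk software RAID-0 could be built roughly like this (the device names /dev/sdb and /dev/sdc, the filesystem, and the mount point are assumptions, not from the answer):

        # build a 2-disk striped array from the two data disks
        mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
        mkfs.ext4 /dev/md0          # XFS is an equally common choice here
        mkdir -p /raidmount
        mount /dev/md0 /raidmount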

    Nginx Conf: There are two ways to achieve a high level of performance using nginx.

    1. use directio

      aio on;
      directio 512;
      output_buffers 1 8m;

      This option requires a good amount of RAM: around 12-16GB.

    2. userland io

      output_buffers 1 2m;

      Make sure you have set readahead to 4-6MB for the software RAID mount (or each independent disk mount). Note that blockdev --setra counts 512-byte sectors, so 8192 corresponds to 4MB:

          blockdev --setra 8192 /dev/md0

      This setting will make optimal use of the system file cache and requires much less RAM: around 8GB.

    Common Notes:

    • keep "sendfile off;"

    You may also like to use bandwidth throttling to support hundreds of connections over the available bandwidth. Each download connection will use about 4MB of active RAM:

            limit_rate_after 2m;
            limit_rate 100k;
    

    Both of the above solutions will scale easily to 1k+ simultaneous users on a 3-disk server, assuming you have 1Gbps bandwidth and each connection is throttled at ~1Mbps. Additional setup is needed to optimize disk writes without noticeably affecting reads.
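    Putting the read-serving pieces above together, a minimal server block for the directio variant might look like the following. This is a sketch under stated assumptions: the server_name and root path are placeholders, and only the directives discussed in this answer are shown.

        server {
            listen 80;
            server_name files.example.com;   # placeholder
            root /raidmount;                 # placeholder RAID mount

            sendfile off;                    # per the note above
            aio on;
            directio 512;
            output_buffers 1 8m;

            location / {
                limit_rate_after 2m;         # first 2MB at full speed
                limit_rate 100k;             # then throttle the connection
            }
        }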

    Make all uploads go to the main OS disk, on a mount such as /tmpuploads. This ensures no intermittent disturbance while heavy reads are going on. Then move the file from /tmpuploads using the "dd" command with oflag=direct, something like:

    dd if=/tmpuploads/ of=/raidmount/uploads/ oflag=direct bs=8192k
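    Wrapped as a small script, the move step might look like this sketch (the filenames are hypothetical; the answer leaves the exact paths open). oflag=direct bypasses the page cache on the write side, so copying the upload onto the RAID mount does not evict hot read data:

        #!/bin/sh
        set -eu
        SRC=/tmpuploads/upload.bin        # hypothetical staged upload
        DST=/raidmount/uploads/upload.bin # hypothetical final location

        # copy with direct I/O, then remove the staged copy
        dd if="$SRC" of="$DST" oflag=direct bs=8192k
        rm -f "$SRC"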
    
