Allocating a lot of file descriptors

Submitted 2019-12-24 13:20:00

Question


I am interested in bringing a system down (for, say, 15 minutes) by allocating a lot of file descriptors and causing out-of-file-descriptor failures. (Don't worry, I am not trying to hack into anything. This is for testing a service I am writing... to see how it behaves when other programs misbehave.) Any best practices for that? Should I just keep calling fopen() in an infinite loop? And after 15 minutes, I can kill the process? Does anybody have experience with this?

Update: I am running Linux and the program I am writing will have super user privileges.
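The loop described above can be sketched as follows (a sketch, not from the original post; it assumes Linux and deliberately lowers the soft limit first so it exhausts descriptors quickly without stressing the whole machine — the value 64 is an arbitrary choice for the demo):

```python
import errno
import os
import resource

# Lower the soft RLIMIT_NOFILE so the demo hits the ceiling quickly
# (64 is a hypothetical value; without this, the loop runs to the real limit).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (64, hard))

fds = []
try:
    while True:
        # Each os.open() allocates a new descriptor, even for the same file.
        fds.append(os.open("/dev/null", os.O_RDONLY))
except OSError as e:
    assert e.errno == errno.EMFILE  # EMFILE: "Too many open files"
    print(f"exhausted descriptors after opening {len(fds)} files")

# Hold the descriptors for the test window, e.g.:
# import time; time.sleep(15 * 60)

# Then clean up and restore the original limit.
for fd in fds:
    os.close(fd)
resource.setrlimit(resource.RLIMIT_NOFILE, (soft, hard))
```

Killing the process (as the question suggests) also works as cleanup: the kernel closes all of a process's descriptors on exit.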

Thanks, ~yogi


Answer 1:


Did you consider lowering the file descriptor limit with setrlimit(RLIMIT_NOFILE, ...) before running your program?

This can be done simply with the bash ulimit -n builtin, in the same shell where you test your application, e.g.:

 ulimit -n 32

This won't perturb the other services already running. Lowering the limit will make your application (run from the same shell) hit it quickly, which is what you want for testing.
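The same effect as `ulimit -n 32` can be achieved programmatically from within the process under test, using Python's `resource` module (a sketch; the value 32 simply mirrors the shell example above):

```python
import resource

# Equivalent of `ulimit -n 32`: lower the soft limit for this process.
# Like the shell builtin, the change is inherited by any child processes,
# which is how a lowered limit in one shell affects programs run from it.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (32, hard))

new_soft, _ = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft limit lowered from {soft} to {new_soft}")
```

Note that an unprivileged process can lower its soft limit freely (up to the hard limit), but only a privileged process can raise the hard limit afterwards.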

At the whole-system level you might also lower /proc/sys/fs/file-max, e.g. with

echo 1024 > /proc/sys/fs/file-max



Answer 2:


This depends on the OS implementation, but note that on Linux each fopen() call on the same file from the same process does allocate a new file descriptor and a new open file description; a reference counter is only shared when a descriptor is duplicated with dup() or inherited across fork().
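A quick check of that Linux behavior (a sketch using the raw `os.open()` call that `fopen()` wraps):

```python
import os

# On Linux, each open() of the same path returns a distinct descriptor,
# each backed by its own open file description (its own offset and flags).
fd1 = os.open("/dev/null", os.O_RDONLY)
fd2 = os.open("/dev/null", os.O_RDONLY)
assert fd1 != fd2  # two independent descriptors, not a refcount bump

# Only dup() (or fork()) creates a second descriptor that shares the
# same open file description with the original.
fd3 = os.dup(fd1)
assert fd3 != fd1

os.close(fd1)
os.close(fd2)
os.close(fd3)
```

This is why opening the same file in a loop is a perfectly effective way to burn through the descriptor limit.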

I would recommend reading up on stress testing.

Here is some useful software (no OS platform was tagged):

http://www.opensourcetesting.org/performance.php




Answer 3:


I had this happen once in normal use. I believe you run out of file handles (or possibly inodes) in Linux. I don't know a faster way than just opening files. Just be careful; we locked our system up. It was a while ago, so I don't remember what was trying to open a file, but programs generally assume they can get a file handle and don't behave as well as they should when they can't. ~Ben




Answer 4:


My 2 cents:

1. Write a program that creates a lot of file descriptors. You can achieve this by one of the following methods:

   (a) Opening a lot of different files in your code
   (b) Opening a lot of socket descriptors
   (c) Creating a lot of threads

2. Now keep spawning multiple instances of the program created in step 1 (i.e. create multiple processes) using a shell script or something similar.

Note: In Linux, as in most other operating systems, there is a per-process limit on the number of file descriptors (in Linux the default soft limit is typically 1024; you can check it with ulimit -n or ulimit -a). So your process will simply start failing when it hits that limit. I am not convinced that exhausting file descriptors in a single process is enough to bring the whole system down.
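The per-process nature of the limit is the reason step 2 spawns multiple processes: each child gets its own descriptor table and its own RLIMIT_NOFILE. A minimal sketch of that, assuming Linux (one child that opens descriptors up to its own limit while the parent's table stays untouched):

```python
import os
import resource

# Each process has its own descriptor table and its own RLIMIT_NOFILE.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"per-process limit: soft={soft}, hard={hard}")

pid = os.fork()
if pid == 0:
    # Child: exhaust its own descriptor table; the parent is unaffected.
    fds = []
    try:
        while True:
            fds.append(os.open("/dev/null", os.O_RDONLY))
    except OSError:
        pass
    # Exit status is truncated to one byte, so cap the reported count.
    os._exit(min(len(fds), 255))

_, status = os.waitpid(pid, 0)
print(f"child opened at least {os.waitstatus_to_exitcode(status)} descriptors")
```

To pressure the system-wide /proc/sys/fs/file-max limit rather than a per-process one, a shell script would launch many such children concurrently and keep them alive.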




Answer 5:


You can use mkstemp to get file descriptors for temporary files.
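For example, via Python's `tempfile.mkstemp()` wrapper (a small-scale sketch; the count of 10 and the `fd-pressure-` prefix are arbitrary choices for the demo):

```python
import os
import tempfile

# mkstemp() returns an already-open descriptor plus the file's path,
# so each call consumes one descriptor (and one inode) until cleaned up.
fds = []
paths = []
for _ in range(10):
    fd, path = tempfile.mkstemp(prefix="fd-pressure-")
    fds.append(fd)
    paths.append(path)

print(f"holding {len(fds)} temp-file descriptors")

# Clean up: close the descriptors and unlink the files.
for fd, path in zip(fds, paths):
    os.close(fd)
    os.unlink(path)
```

This has the side benefit of not touching any existing file, unlike repeatedly opening a shared path.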



Source: https://stackoverflow.com/questions/10907984/allocating-a-lot-of-file-descriptors
