How to limit the size of subprocess stdout and stderr in Python


Question


I need to run applications submitted by users. My code looks like:

import subprocess

def run_app(app_path):
    inp = open("app.in", "r")
    otp = open("app.out", "w")

    # Run the submitted program with redirected stdin/stdout.
    return subprocess.call(app_path, stdout=otp, stdin=inp)

Now, since I have no control over what users will submit, I want to restrict the size of the output of the application. Other things, like attempts to access unauthorized system resources or to abuse CPU cycles, are restricted by AppArmor rule enforcement. The maximum time allowed to run is handled by the parent process (in Python). A rogue application could still try to flood the server by writing a lot of data to its stdout, knowing that stdout is being saved to a file.

I do not want to use AppArmor's RLIMIT or anything in kernel mode for the stdout/stderr files. It would be great to be able to do it from Python using the standard library.

I am currently thinking about creating a subclass of file that, on each write, checks how much data has already been written to the stream. Or about creating a memory-mapped file with a maximum length set.
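Something like this sketch is what I mean (the class name and the 1 MiB limit are just illustrative):

import io

class CappedWriter:
    # File-like wrapper that refuses to grow past max_bytes.
    def __init__(self, path, max_bytes=1024 * 1024):
        self._f = open(path, "w")
        self._max = max_bytes
        self._written = 0

    def write(self, data):
        if self._written + len(data) > self._max:
            raise IOError("output limit exceeded")
        self._written += len(data)
        return self._f.write(data)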

But I have a feeling there may be a simpler way to restrict the file size that I do not see yet.


Answer 1:


Subclassing file or creating some other pseudo-file Python object will not work at all: the file will be consumed in the subprocess, and therefore it must be an OS-level file, not a Python class object. subprocess won't send your Python object for the other process to use.
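You can see this quickly (assuming a POSIX system with echo on the PATH): subprocess asks the stdout object for its fileno(), and a pure-Python file-like object has none.

import io
import subprocess

buf = io.StringIO()  # pure-Python file-like object, no OS descriptor
try:
    subprocess.call(["echo", "hello"], stdout=buf)
except io.UnsupportedOperation:
    # StringIO.fileno() raises - the child can only inherit a real fd
    print("stdout must be backed by an OS-level file descriptor")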

And while Python has native and simple support for memory-mapping files through the mmap module, memory mapping is not meant for this: you can specify the size of the file that is mirrored to memory, but that does not limit writing to the file at all - excess data will simply be written to disk without being mapped. (And, again, you pass the disk file to the subprocess, not the mmap object.) It would be possible to create a file with a sentinel value at a certain offset, and keep a thread checking whether the sentinel is overwritten, at which point it could kill the subprocess - but I doubt that would be reliable.
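For completeness, a rough sketch of that sentinel idea (the file name, the 1 MiB limit and the polling interval are all made up - and, as said, I would not rely on it):

import subprocess
import threading
import time

LIMIT = 1024 * 1024            # illustrative cap
SENTINEL = b"\x00CANARY\x00"   # arbitrary marker bytes

def run_capped(app_path):
    # Plant the sentinel just past the limit before the child starts.
    with open("app.out", "wb") as f:
        f.seek(LIMIT)
        f.write(SENTINEL)

    otp = open("app.out", "r+b")   # "r+b" so the sentinel is not truncated
    proc = subprocess.Popen(app_path, stdout=otp)

    def watchdog():
        while proc.poll() is None:
            with open("app.out", "rb") as f:
                f.seek(LIMIT)
                if f.read(len(SENTINEL)) != SENTINEL:
                    proc.kill()    # sentinel overwritten: limit exceeded
                    return
            time.sleep(0.1)

    threading.Thread(target=watchdog, daemon=True).start()
    return proc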

Then there are disk-activity monitoring tools, such as inotify: you could use pyinotify to set a handler on your main process that is called whenever the file is touched (the IN_MODIFY event fires on writes). The downside: notification is asynchronous, so if the child process did all of its writing in a single large system call, you would only be notified after the damage was done anyway.
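A sketch of what that could look like with pyinotify (assuming it is installed; the file name and the limit are illustrative):

import os
import pyinotify

LIMIT = 1024 * 1024  # illustrative cap

class SizeWatcher(pyinotify.ProcessEvent):
    def my_init(self, path=None, proc=None):
        self.path, self.proc = path, proc

    def process_IN_MODIFY(self, event):
        # Note the race: one huge write is only detected after the fact.
        if os.path.getsize(self.path) > LIMIT:
            self.proc.kill()

def watch_output(proc):
    wm = pyinotify.WatchManager()
    notifier = pyinotify.Notifier(wm, SizeWatcher(path="app.out", proc=proc))
    wm.add_watch("app.out", pyinotify.IN_MODIFY)
    notifier.loop()  # blocks; run in a thread alongside the child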

So, what I can think of that would work is: create the file in an artificially limited file-system. That way, the OS itself will block writes once the maximum size is exceeded.

Under Linux, you can pre-create a file of the desired size plus some overhead, create a file-system on it, and mount it through the "loop" interface - then just create your stdout and stderr files inside that file-system and call your child process.
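Roughly like this (Linux only, needs root; the image size, mount point and file names are placeholders):

import subprocess

# Create a small backing file, put a file-system on it, loop-mount it.
subprocess.check_call(["truncate", "-s", "8M", "limited.img"])
subprocess.check_call(["mkfs.ext4", "-q", "-F", "limited.img"])
subprocess.check_call(["mount", "-o", "loop", "limited.img", "/mnt/limited"])

# A file under /mnt/limited can never outgrow the 8 MiB image:
# (app_path: the submitted program, as in the question)
with open("app.in") as inp, open("/mnt/limited/app.out", "w") as otp:
    subprocess.call(app_path, stdin=inp, stdout=otp)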

You could pre-create and pre-mount a pool of such file-systems to be used as needed - or even create them dynamically - but that requires creating the host file, building a file-system structure on it (mkfs) and mounting it, all of which could add a lot of overhead.

All in all, maybe you are better off simply using AppArmor's own rlimit settings.
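If what you wanted to avoid was the AppArmor configuration rather than the kernel mechanism itself, note that the same limit can also be set from the standard library's resource module - a sketch, with an arbitrary 1 MiB cap:

import resource
import subprocess

LIMIT = 1024 * 1024

def cap_output_size():
    # Runs in the child between fork() and exec(); the child receives
    # SIGXFSZ as soon as any file it writes grows past LIMIT bytes.
    resource.setrlimit(resource.RLIMIT_FSIZE, (LIMIT, LIMIT))

# app_path: the submitted program, as in the question
with open("app.in") as inp, open("app.out", "w") as otp:
    subprocess.call(app_path, stdin=inp, stdout=otp,
                    preexec_fn=cap_output_size)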



Source: https://stackoverflow.com/questions/42172730/how-to-limit-the-size-of-subprocess-stdout-and-stderr-in-python
