I have been thinking about a scenario where one lets users (anyone, possibly with bad intentions) submit code which is then run on a Linux PC (let's call it the benchmark node). The goal is to build a kind of automated benchmarking environment for single-threaded routines. Say a website posts some code to a proxy; the proxy hands the code to the benchmark node, and the benchmark node has an ethernet connection only to the proxy, not to the internet itself.
If one lets arbitrary users post C/asm code to be run on the benchmark node, what security challenges will one face? The following assumptions are made:
- The program is run as an unprivileged user
- The proxy will have the opportunity to kill the process on the benchmark node (take the scenario of an infinite loop for instance)
- The proxy is able to restart the benchmark node (if it replies...)
So, is it in practice possible for this user-space program to make the OS crash, or to make the machine unavailable to the proxy? With assembly the programmer can do basically whatever he wants (manipulate the stack pointer, for instance), and I wonder how restrictive/robust Linux is in this respect. I also know about the possibility for processes to request shared memory regions with other processes (shm), which might also play a role here.
Any literature or articles about this subject are very welcome.
Sandbox solutions might also be interesting, but it's important that the CPU performs at 100% of its capability during the benchmark (at least on the core the benchmark runs on).
Just a quick list off the top of my head. Essentially, if you do not trust the users at least a little, you are in deep trouble:
- Filesystem manipulation: delete or overwrite files belonging to the user the process is run as
- Snooping all sorts of data found on the system (files, sometimes network traffic of same user)
- Killing the user's other processes
- Consuming memory until the OOM killer starts killing random processes or (if you have swap enabled) until the machine slows down to a crawl; see the sketch after this list
- Generating lots of I/O to slow down the system
- Executing exploits at will (you are close to certain to have some unpatched privilege escalation vulnerability somewhere)
- Exploiting vulnerabilities in any software the user is able to run
- Hosting a DDoS network or child pornography file server on your machine
- Using your machine as a proxy for starting attacks against CIA and FBI servers
- The sky is the limit...
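To make the memory-exhaustion item concrete, here is a minimal sketch of my own (not from the original answer): a loop that allocates and touches memory until the kernel's OOM killer steps in or, with swap enabled, the machine starts thrashing.

    /* Allocate and touch 1 MiB per iteration until the kernel runs out.
       On a default Linux configuration the OOM killer eventually kills
       this process, or, with bad luck, some other process. */
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        for (;;) {
            char *p = malloc(1 << 20);   /* 1 MiB */
            if (!p)
                break;                   /* overcommit usually says yes anyway */
            memset(p, 'x', 1 << 20);     /* touch it so pages are really committed */
        }
        return 0;
    }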
Doesn't sound like a good idea.
So, is it in practice possible that this user space program can make the OS crash, or make the machine unavailable to the proxy?
Yes, such techniques as spawning an excessive number of processes, allocating excessive memory (causing swapfile use), or queuing up a lot of disk I/O will make the machine unresponsive so that your supervisor process won't run in a timely fashion.
If your supervisor code ends up swapped out to the disk, then even if it has high priority, it won't run until the disk becomes available, which can be a very long delay due to seek times.
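One mitigation for that particular failure mode (my own suggestion, not part of the answer above): the supervisor can pin its own pages in RAM with mlockall(2), so its code never has to be paged back in from a saturated disk.

    /* Sketch: keep the supervisor resident so it cannot be swapped out
       while the untrusted code thrashes memory and disk. Requires
       CAP_IPC_LOCK or a sufficiently high RLIMIT_MEMLOCK. */
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void) {
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
            perror("mlockall");
            return 1;
        }
        /* ... supervisor loop: watch the benchmark process, kill it on
           timeout, report results back to the proxy ... */
        return 0;
    }

This does not help with CPU starvation or fork floods, but it removes the seek-time delay described above.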
Linux does have ulimit, which can protect against some of these; see "Limit the memory and cpu available for a user in Linux". Malicious network activity can likewise be blocked. You can also disable swap and chroot the program into a tmpfs mount. But some mischief will still be possible.
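The limits that ulimit sets from the shell can also be applied programmatically with setrlimit(2) in the child process just before exec'ing the submitted program. A minimal sketch; the specific numbers and the "./benchmark" path are arbitrary placeholders:

    /* Apply resource limits, then exec the untrusted benchmark binary. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/resource.h>

    static void limit(int resource, rlim_t value) {
        struct rlimit rl = { value, value };   /* soft limit == hard limit */
        if (setrlimit(resource, &rl) != 0)
            perror("setrlimit");
    }

    int main(void) {
        limit(RLIMIT_AS,    512UL << 20);  /* 512 MiB of address space */
        limit(RLIMIT_NPROC, 1);            /* no fork bombs */
        limit(RLIMIT_CPU,   60);           /* 60 CPU-seconds, then SIGXCPU/SIGKILL */
        limit(RLIMIT_FSIZE, 1UL << 20);    /* files capped at 1 MiB */

        execl("./benchmark", "benchmark", (char *)NULL);
        perror("execl");                   /* only reached if exec fails */
        return 1;
    }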
So, is it in practice possible that this user space program can make the OS crash, or make the machine unavailable to the proxy?
Well, in theory you should have a hard time making the OS crash. However, there are many, many bug reports out there which say it's more possible in practice than we would like.
Without special precautions, on the other hand, it's going to be fairly easy to achieve denial of service. Imagine a user program that did nothing but flood the proxy with packets; that alone might, if not achieve outright denial of service, then make things embarrassingly slow.
With assembly the programmer can do basically whatever he wants (manipulate stack pointer for instance), and I wonder how restrictive/robust Linux is in this respect.
Linux is a lot more robust than that. If all you needed for privilege escalation was to mess with the stack pointer, security as a field would be a total joke. The kernel is intended to be written so that no user-space program, no matter what it does, can cause the kernel to crash. As noted above, that intention is imperfectly realized.
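A quick way to see that robustness in action (my own illustration, assuming GCC or Clang inline assembly on x86-64): deliberately trash the stack pointer and observe that only the offending process dies.

    /* Clobbering the stack pointer crashes this process with SIGSEGV;
       the kernel and every other process carry on unaffected. */
    #include <stdio.h>

    int main(void) {
        puts("about to clobber the stack pointer");
        __asm__ volatile (
            "mov $1, %rsp\n\t"   /* point rsp at an unmapped address */
            "push %rax\n\t"      /* this write faults, raising SIGSEGV */
        );
        return 0;                /* never reached */
    }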
The moral of the story is that you really don't want to be running untrusted code on a computer you care about. The stock answer here would be a checkpointed VM: start a virtual machine, run the untrusted code on the virtual machine, and then after completion or timeout blow the virtual machine away. That way persistent damage is impossible. As far as other abuse goes, your proxy will prevent them from hosting seedy internet services, which is good. Depending on your VM situation there may be good tools for limiting CPU consumption and network usage as well, which will help eliminate other denial-of-service possibilities.
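As one concrete shape such a setup could take (a sketch of my own using the libvirt C API; the domain name "benchvm" and snapshot name "clean" are invented for the example):

    /* Revert a VM to a clean snapshot after each untrusted run.
       Assumes a running libvirt daemon, a domain "benchvm" and an
       existing snapshot "clean". Compile with -lvirt. */
    #include <stdio.h>
    #include <libvirt/libvirt.h>

    int main(void) {
        virConnectPtr conn = virConnectOpen("qemu:///system");
        if (!conn) { fprintf(stderr, "connect failed\n"); return 1; }

        virDomainPtr dom = virDomainLookupByName(conn, "benchvm");
        if (!dom) { virConnectClose(conn); return 1; }

        /* ... hand the code to the guest, wait for completion or timeout ... */

        /* Throw away whatever the untrusted code did inside the guest. */
        virDomainSnapshotPtr snap = virDomainSnapshotLookupByName(dom, "clean", 0);
        if (snap && virDomainRevertToSnapshot(snap, 0) == 0)
            puts("reverted to clean state");

        if (snap) virDomainSnapshotFree(snap);
        virDomainFree(dom);
        virConnectClose(conn);
        return 0;
    }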
You mention needing the CPU to perform at full capacity. Hardware virtualization is quite good, and performance should reasonably reflect what it would be on a real system.
Nothing above is Linux-specific, by the way; it should be true of all credible general-purpose operating systems.
edit: If you are truly insistent on running directly on hardware, then:
- boot from a read-only device (live CD or write-blocked hard drive)
- have no writable media in the system
- add a lights-out server that can forcibly reset the machine at the proxy's request, in case of denial of service; commercial solutions exist for this
That's essentially giving you the features of the VM solution, but on hardware.
If the code is running under a limited account on a correctly configured machine, it should resist many basic types of attack (either accidental or malicious).
The fact that the programmer can use assembly is irrelevant: attacks can be coded in many different languages, compiled or otherwise.
The main problem is unknown security issues or 0-day vulnerabilities. Allowing any unauthorised program to run is a risk, and if someone manages to exploit an issue which allows privilege elevation, you're screwed.
Sandboxes are generally advised because they both restrict what the application can do and (if designed correctly) minimize damage from rogue behaviour.
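On Linux, one kernel-level sandbox worth knowing about here is seccomp, since its strict mode costs nothing while the code is purely computing, which fits the benchmarking constraint. A minimal sketch:

    /* seccomp strict mode: after the prctl call this process may only
       use read(2), write(2), _exit(2) and sigreturn(2); any other
       syscall kills it with SIGKILL. Pure computation runs at full speed. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/prctl.h>
    #include <linux/seccomp.h>

    int main(void) {
        if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0) {
            perror("prctl");
            return 1;
        }
        /* ... run the untrusted routine here, talking only over
           file descriptors that were opened before the prctl ... */
        write(STDOUT_FILENO, "done\n", 5);
        _exit(0);
    }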
A program group can exert memory pressure, causing the machine to become unresponsive (especially once swapping to disk starts to occur). Example code: perl -e '$_.="x"x1000000 while fork' (the loop forks repeatedly while appending a megabyte to a string on every iteration, so process churn and memory use explode together).
Source: https://stackoverflow.com/questions/9506596/what-harm-can-a-c-asm-program-do-to-linux-when-run-by-an-unprivileged-user