Best practices for keeping the web server data protected

不思量自难忘° 2021-02-02 00:37

Let's say I run a medical facility and want a website where my users/patients can look up their private records. What would be the best solution?

5 Answers
  • 2021-02-02 01:19

    Your question is: "What are the best practices for such an architecture?"

    I like this article from Microsoft, Security Best Practices to Protect Internet Facing Web Servers, which has been through 11 revisions. Granted, some of it is Microsoft-platform specific, but a lot of the concepts can be applied to a platform-independent solution.

    1. Identify the network flow, in terms of requests: if you know the regular network flow the server is supposed to receive and send, then you can allow and check (content/request inspection) that traffic, while any other traffic/flow is denied by default (by the firewall). This is a network isolation measure that will reduce the risk of malware spreading (or of a successful intrusion getting deeper into the production network).
    2. Make sure your DMZ has no possibility to directly access your LAN with a "source to any" or "source to many"-like rule (firewall/router rules to be double-checked).
    3. Make sure there is no way to directly request your web server, bypassing the security filtering layers. There should be at least a three-layer filter in front of your web server:
      1. Protocols and sources accepted: firewall (and routers).
      2. Dynamic network traffic inspection: a NIPS (Network Intrusion Prevention System) that will detect/block malicious network requests. You might want to have a look at the MAPP (www.microsoft.com/security/mapp/) to find a Microsoft partner. Please also keep in mind that a NIDS will only aim to detect, not block, the malicious traffic (contrary to a NIPS), but on the other hand it will not create any denial-of-service risk for business flows.
      3. Application-oriented security: a WAF (Web Application Firewall), sitting just in front of the web app/site, which lets you harden the request filtering and tighten it to match the specifics of the web application. ModSecurity for IIS7 (see: http://www.modsecurity.org/) is an example of a tool that can be used for robust audit logging of HTTP(S) transactions and virtual patching of identified vulnerabilities. Along with the bundled OWASP ModSecurity Core Rule Set (CRS), it offers essential protection against application-layer attacks and information leakage.
    4. Make sure that clients can't directly send requests to your server (from a TCP point of view), which could otherwise facilitate attacks. So ensure network isolation, with the DMZ in mind, by deploying a reverse proxy as the front end of the web server. This makes it easier to manage the network flow that can legitimately be sent to the server (and covers other needs such as load balancing). Forefront UAG is one example of such a solution, as is any other product from the MAPP program. Note that some reverse proxies may offer advanced security features.
    5. Follow security best practices for ASP.NET code, to protect against code injection: http://msdn.microsoft.com/en-us/magazine/hh580736.aspx and SQL injection: http://msdn.microsoft.com/en-us/library/ms161953(SQL.105).aspx . From a more global point of view, please refer to the SDL: http://msdn.microsoft.com/en-us/security/aa570401.aspx . Audit the hosted code on a regular basis.
    6. Harden encrypted network communications as much as possible, taking into account the available implementations of SSL/TLS on the Windows systems you are running: http://blogs.msdn.com/b/benjaminperkins/archive/2011/10/07/secure-channel-compatibility-support-with-ssl-and-tls.aspx . By default, the recommendation is TLS 1.1/1.2. Please keep in mind this has to be enabled on both the client and the server side (see the sketch after this list).
    7. Make sure that the machines within the DMZ are not joined to the regular production domain. AD isolation is at the forest level, therefore it is highly recommended not to have the production AD in the DMZ. Either use another forest, or deploy AD Lightweight Directory Services.
    8. Implement white/blacklisting of applications, through AppLocker for example: http://technet.microsoft.com/en-us/library/ee791890(v=ws.10).aspx
    9. Make sure you have the whole relevant (and required?) traceability chain: that is, the ability to correlate the firewall's, reverse proxy's, and web server's logs. Take care not to enable only "errors" logging, for instance in the IIS logs. Last, please consider archiving the logs.
    10. Back up the web server data on a regular basis.
    11. Create images of the systems in a known-good state, on a regular basis (at least at deployment time). This may be helpful in case of a security incident, both to return to production as quickly as possible and to investigate.
    12. Audit your equipment: firewall rules, NIPS rules, WAF rules, reverse-proxy settings, on a regular basis.
    13. Follow security best practices for the application-layer products, the database layer, and the web server layer.
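
    For item 6, here is a minimal sketch (Python, purely for illustration; the certificate and key paths are hypothetical) of enforcing a TLS floor on the server side. Whatever stack actually terminates TLS for you, the idea is the same: refuse handshakes below the version you have decided to support.

        import ssl

        def make_server_context(certfile: str, keyfile: str) -> ssl.SSLContext:
            # Server-side context that rejects SSLv3 / TLS 1.0 / TLS 1.1 handshakes.
            ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
            ctx.minimum_version = ssl.TLSVersion.TLSv1_2
            ctx.load_cert_chain(certfile=certfile, keyfile=keyfile)
            return ctx

    Keep in mind, as item 6 says, that the client side must also support the chosen TLS versions, or handshakes will simply fail.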

    reference: http://social.technet.microsoft.com/wiki/contents/articles/13974.security-best-practices-to-protect-internet-facing-web-servers.aspx

  • 2021-02-02 01:20

    First you need to identify the attacks you want to protect against, and then address each of them individually. Since you mention "most-common attacks", we will start there; here is a quick list for a common three-tiered service (client-web-datastore):

    1. Corrupted Inputs (manual or fuzzed)
    2. SQL Injection
    3. Cross site scripting attacks (XSS)
    4. Guessing: Brute force, dictionary, rainbow tables, etc.
    5. Onsite (employee) leaks
    6. Social engineering
    7. Man-in-the-middle
    8. Cross site forgery attacks (CSRF)
    9. Replay attacks

    Once a leak or breach happens, these are some of the issues that make it easier for the attackers, and thus should also be addressed:

    • Data stored in plain-text
    • Weak passwords/keys
    • Weak encryption or hashes
    • No salting
    • No separation of services (e.g. placing a database on the same physical box as the web server)

    Now we look at common mitigations:

    1-3 (Inputs, SQL Injection, XSS) largely come down to bad inputs. So all inputs from a client need to be sanitized, and (attack-focused) testing needs to be performed to ensure the code handles them correctly.
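
    As a minimal illustration (Python with the standard-library sqlite3 and html modules; the table and column names are made up), parameterized queries keep user input out of the SQL text, and escaping output keeps stored values from becoming script:

        import html
        import sqlite3

        def get_patient_records(conn: sqlite3.Connection, patient_id: str):
            # The ? placeholder binds patient_id as data, so it can never
            # terminate the statement or inject extra SQL.
            cur = conn.execute(
                "SELECT visit_date, summary FROM records WHERE patient_id = ?",
                (patient_id,),
            )
            return cur.fetchall()

        def render_cell(value: str) -> str:
            # Escape before inserting into HTML so a stored value such as
            # "<script>..." is shown as text instead of executed (XSS).
            return "<td>" + html.escape(value) + "</td>"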

    4 (Guessing) Automated tools will be used to try to guess a user's password, or, if the attackers already have the data, to brute-force the key or hash. Mitigations involve choosing the correct algorithm for the encryption or hash, increasing the bit size of the key, enforcing policies on password/key complexity, using salts, limiting the number of attempts per second, etc.
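
    A minimal sketch of the salting/hashing part (Python standard library only; the iteration count is just a plausible value, and real attempt-rate limiting would live in the login handler):

        import hashlib
        import hmac
        import os

        ITERATIONS = 600_000  # deliberately slow, to make brute force expensive

        def hash_password(password: str) -> tuple[bytes, bytes]:
            salt = os.urandom(16)  # unique per user; defeats rainbow tables
            digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
            return salt, digest

        def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
            candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
            return hmac.compare_digest(candidate, expected)  # constant-time compare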

    5 (Leaks) If the data is encrypted onsite, and the admins/employees/janitors do not have the keys to decrypt it, then leaked information is of limited value (especially if #4 is handled correctly). You can also place limitations on who can access the data and how (the NSA just learned a valuable lesson here and is enacting policies to ensure that two people must be present to access private data). Proper journaling and logging of access attempts is also important.

    6 (Social Engineering) Attackers will attempt to call your support desk, impersonate a client, and either request access to privileged information or have the support desk change information (password, personal information, etc.). They will often chain together multiple support calls until they have all the information needed to take control of an account. Support staff need to be trained and limited in what information they will give out, as well as in what data they can edit.

    7 (Man-in-the-middle) This is where an attacker attempts to insert himself into the flow of communication, most commonly through rootkits running on clients' machines or fake access points (Wi-Fi, for example). Wire/protocol-based encryption (such as SSL) is obviously the first level of protection, but variants (such as man-in-the-browser) won't be mitigated by it, as they see the data after the SSL packets have been decrypted. In general, clients cannot be trusted, because the platforms themselves are insecure. Encouraging users to use dedicated/isolated machines is a good practice. Limit the amount of time that keys and decrypted data are stored in memory or other accessible locations.

    8-9 (CSRF and Replay) Similar to man-in-the-middle, these attacks attempt to duplicate (i.e. capture) a user's credentials and/or transactions and reuse them. Authenticating against the client origin, limiting the window in which credentials are valid, and requiring validation of the transaction (via a separate channel such as email, phone, SMS, etc.) all help to reduce these attacks.
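
    One common CSRF mitigation is a per-session anti-forgery token; a minimal sketch (Python, framework-agnostic, with the session treated as a plain dict purely for illustration):

        import hmac
        import secrets

        def issue_csrf_token(session: dict) -> str:
            # Generate an unpredictable token, store it server-side, and embed
            # it in every form; a forged cross-site request cannot know it.
            token = secrets.token_urlsafe(32)
            session["csrf_token"] = token
            return token

        def check_csrf_token(session: dict, submitted: str) -> bool:
            expected = session.get("csrf_token", "")
            return bool(expected) and hmac.compare_digest(expected, submitted)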



    Proper encryption/hashing/salting is probably the first thing that companies screw up. Assuming all your other defenses fail (and like you said, they probably will), this is your last hope. Invest here and ensure that it is done properly. Ensure that individual user records are encrypted with different keys (not one master key). Having the client do the encryption/decryption can solve a lot of security issues, as the server never knows the keys (so nobody can steal them). On the other hand, if the client loses the keys, then they will lose their data as well. So a trade-off has to be made there.
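
    For the "different key per record" idea, here is a minimal envelope-encryption sketch (Python with the third-party cryptography package's Fernet; the surrounding key management is an assumption, not a complete design):

        # pip install cryptography
        from cryptography.fernet import Fernet

        def encrypt_record(plaintext: bytes, user_key: bytes) -> tuple[bytes, bytes]:
            record_key = Fernet.generate_key()                  # fresh key per record
            ciphertext = Fernet(record_key).encrypt(plaintext)
            wrapped_key = Fernet(user_key).encrypt(record_key)  # wrapped with the user's own key
            return ciphertext, wrapped_key

        def decrypt_record(ciphertext: bytes, wrapped_key: bytes, user_key: bytes) -> bytes:
            record_key = Fernet(user_key).decrypt(wrapped_key)
            return Fernet(record_key).decrypt(ciphertext)

    With this layout, compromising one record key exposes only that record, and rotating a user's key only requires re-wrapping their record keys rather than re-encrypting every record.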

    Invest in testing/attacking your solution. The engineer that implements a website/database is often not equipped to think about all the possible attack scenarios.

  • 2021-02-02 01:25

    While josh poley's and Bala Subramanyam's answers are good, I would add that, if the core value of your business is security, you should:

    • Hire the best security hackers out there, pay them very well, and make them proud to work for your company
    • Hire the best programmers out there, pay them very well, and make them proud to work for your company
    • Host your servers in your own buildings, possibly at different longitudes

    Hackers and developers will be your main asset, and they should know that. Indeed, we can list the most common security practices here, but just applying our suggestions won't make your system truly secure, only more fun to hack.

    When security matters, great talents, passion and competence are your only protection.

  • 2021-02-02 01:25

    This is what I'm thinking:

    All records are stored on my home computer (offline), encrypted with my personal key. On this computer there are the patient records and a private and a public key for each user. This computer uploads new data, as is, encrypted, to the webserver.

    The webserver only contains encrypted data.

    I supply the public key to my users, be it by email sent from somewhere else, or even by snail mail.

    The webserver decrypts data on every request. Because the user's password is their public key, decryption on the server can only happen while there's an active session.

    Because there are asymmetric keys in play, I can even insert new encrypted data on the webserver (user input) and later fetch it to my offline computer.
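
    A minimal sketch of the asymmetric part of this idea (Python with the third-party cryptography package; the function and key names are made up, and an RSA key pair is assumed). Each record is encrypted with a fresh symmetric key, and only that key is wrapped with the recipient's public key, so only the holder of the matching private key can recover the data:

        # pip install cryptography
        from cryptography.fernet import Fernet
        from cryptography.hazmat.primitives import hashes, serialization
        from cryptography.hazmat.primitives.asymmetric import padding

        def encrypt_for_recipient(record: bytes, recipient_public_pem: bytes) -> tuple[bytes, bytes]:
            # Assumes recipient_public_pem is an RSA public key in PEM format.
            public_key = serialization.load_pem_public_key(recipient_public_pem)
            data_key = Fernet.generate_key()                  # symmetric key for this record
            ciphertext = Fernet(data_key).encrypt(record)
            wrapped_key = public_key.encrypt(                 # only the private key can unwrap this
                data_key,
                padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                             algorithm=hashes.SHA256(), label=None),
            )
            return ciphertext, wrapped_key

    In the scheme described above, the decryption side would hold the matching private key, whether that is the offline machine pulling user input back in, or the user during an active session.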

    Downside: Requesting a new password requires the offline computer to upload re-encrypted data, and to send a new password somehow.

    Upside: Makes the webserver security concerns less relevant.

    Is this the best solution?

  • 2021-02-02 01:38

    OK, I will just try to build a little on what you already proposed. Firstly, you might want to research the technologies behind the MEGA website; it presumably uses exactly what you'd be interested in. On-the-fly JS-based encryption does, however, still have some weaknesses. That being said, implementing on-the-fly decryption of the records with JS and HTML would not be easy, though not impossible. So yes, I would say you are generally thinking in the right direction.

    Regardless, you would have to consider all the common attack techniques and defenses (website attacks, server attacks, etc.), but this topic is far too broad to be covered fully in a single answer. Needless to say, those are already very well covered in the other answers.

    As for 'architecture', if you are really paranoid you could also put the database on a separate server that listens on an obscure port and accepts incoming connections only from the webserver.
