Compiling an application for use in highly radioactive environments

名媛妹妹 2020-12-02 03:30

We are compiling an embedded C++ application that is deployed in a shielded device in an environment bombarded with ionizing radiation. We are using GCC and cross-compiling

23 answers
  • 2020-12-02 03:51

    If your hardware fails, you can use mechanical storage to recover it. If your code base is small and you have some physical space, then you can use a mechanical data store.

    There would be a surface made of a material that is not affected by radiation, with multiple gears mounted on it. A mechanical reader would run over all the gears and would be able to move up and down: down means 0 and up means 1. From those 0s and 1s you can reconstruct your code base.

  • 2020-12-02 03:53

    How about running many instances of your application? If crashes are due to random memory bit changes, chances are some of your app instances will make it through and produce accurate results. It is probably quite easy (for someone with a statistical background) to calculate how many instances you need, given the bit-flip probability, to achieve as tiny an overall error as you wish. A rough sketch of that calculation is given below.
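
    As an illustration only (the per-instance corruption probability and instance counts below are made-up parameters, not from this answer), this computes the probability that a strict majority of n independent instances remains uncorrupted:

        #include <cmath>
        #include <cstdio>

        // Probability that at least k of n independent instances are correct,
        // given a per-instance corruption probability p (binomial tail).
        double prob_at_least(int n, int k, double p) {
            double total = 0.0;
            for (int i = k; i <= n; ++i) {
                // C(n, i) * (1-p)^i * p^(n-i): exactly i instances survive.
                double comb = std::exp(std::lgamma(n + 1) - std::lgamma(i + 1)
                                       - std::lgamma(n - i + 1));
                total += comb * std::pow(1.0 - p, i) * std::pow(p, n - i);
            }
            return total;
        }

        int main() {
            const double p = 0.01;  // assumed chance that one instance is corrupted
            for (int n = 3; n <= 9; n += 2) {
                const int majority = n / 2 + 1;
                std::printf("n = %d instances -> P(correct majority) = %.10f\n",
                            n, prob_at_least(n, majority, p));
            }
        }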

  • 2020-12-02 03:54

    What could help you is a watchdog. Watchdogs were used extensively in industrial computing in the 1980s. Hardware failures were much more common then - another answer also refers to that period.

    A watchdog is a combined hardware/software feature. The hardware is a simple counter that counts down from a number (say 1023) to zero. TTL or other logic could be used.

    The software is designed in such a way that one routine monitors the correct operation of all essential systems. If this routine completes correctly, i.e. it finds the computer running fine, it sets the counter back to 1023.

    The overall design is such that, under normal circumstances, the software prevents the hardware counter from reaching zero. If the counter does reach zero, the counter hardware performs its one and only task and resets the entire system. The counter then wraps around to its maximum value and continues counting down again.

    This watchdog ensures that the attached computer is restarted in many, many cases of failure. I must admit that I'm not familiar with hardware that is able to perform such a function on today's computers. Interfaces to external hardware are now a lot more complex than they used to be.

    An inherent disadvantage of the watchdog is that the system is not available from the time it fails until the watchdog counter reaches zero + reboot time. While that time is generally much shorter than any external or human intervention, the supported equipment will need to be able to proceed without computer control for that timeframe.
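
    A minimal sketch of the software side, assuming a memory-mapped watchdog reload register (the register address, reload value, and health-check functions below are hypothetical; the real mechanism would come from the part's datasheet or from an OS interface such as Linux's /dev/watchdog):

        #include <cstdint>

        // Hypothetical memory-mapped watchdog reload register (address is made up).
        volatile uint32_t* const WDT_RELOAD = reinterpret_cast<uint32_t*>(0x40001000);
        constexpr uint32_t WDT_RELOAD_VALUE = 1023;  // counter restart value from this answer

        // Application-specific health checks; trivially true placeholders here.
        static bool memory_ok()       { return true; }
        static bool peripherals_ok()  { return true; }
        static bool main_loop_alive() { return true; }

        // Called periodically from the main loop or a timer interrupt.
        // The watchdog is only "kicked" while every essential subsystem looks
        // healthy; otherwise the hardware counter reaches zero and resets the system.
        void watchdog_service() {
            if (memory_ok() && peripherals_ok() && main_loop_alive()) {
                *WDT_RELOAD = WDT_RELOAD_VALUE;  // set the counter back to 1023
            }
        }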

  • 2020-12-02 03:54

    What you are asking about is quite a complex topic, and not easily answerable. The other answers are OK, but they cover just a small part of all the things you need to do.

    As seen in the comments, it is not possible to fix hardware problems 100%, but it is possible to reduce or catch them with high probability using various techniques.

    If I were you, I would create the software to the highest Safety Integrity Level (SIL-4). Get the IEC 61513 document (for the nuclear industry) and follow it.

  • 2020-12-02 03:56

    NASA has a paper on radiation-hardened software. It describes three main tasks:

    1. Regular monitoring of memory for errors then scrubbing out those errors,
    2. robust error recovery mechanisms, and
    3. the ability to reconfigure if something no longer works.

    Note that memory should be scanned frequently enough that multi-bit errors rarely have time to accumulate, since most ECC memory can recover from single-bit errors but not from multi-bit errors.
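
    A minimal sketch of such a scrubber, assuming a RAM region protected by hardware ECC (the region is a placeholder buffer here; on real hardware its bounds would typically come from the linker script):

        #include <cstddef>
        #include <cstdint>

        // Stand-in for an ECC-protected RAM region; real bounds would come from
        // the linker script or the memory controller configuration.
        static volatile uint32_t ecc_region[4096];

        // Touch every word in the region. On typical ECC memory controllers,
        // reading a word with a single-bit error returns corrected data, and
        // writing it back stores the corrected value, so scrubbing often enough
        // keeps single-bit errors from growing into uncorrectable multi-bit ones.
        void scrub_ecc_ram() {
            for (std::size_t i = 0; i < sizeof(ecc_region) / sizeof(ecc_region[0]); ++i) {
                const uint32_t value = ecc_region[i];  // read triggers ECC check/correction
                ecc_region[i] = value;                 // write back the corrected word
            }
        }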

    Robust error recovery includes control flow transfer (typically restarting a process at a point before the error), resource release, and data restoration.

    Their main recommendation for data restoration is to avoid the need for it, by treating intermediate data as temporary, so that restarting before the error also rolls the data back to a reliable state. This sounds similar to the concept of "transactions" in databases.

    They discuss techniques particularly suitable for object-oriented languages such as C++. For example

    1. Software-based ECCs for contiguous memory objects
    2. Programming by Contract: verifying preconditions and postconditions, then checking the object to verify it is still in a valid state.
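
    As a rough illustration of the second point (the class and checks below are my own sketch, not taken from the NASA paper), contract-style checks wrap each operation with precondition and invariant tests so that a corrupted object is detected at the boundary instead of being silently used:

        #include <cstdint>
        #include <cstdlib>

        // Illustrative fixed-size queue whose methods verify a class invariant;
        // on failure we escalate instead of continuing with corrupted state.
        class TelemetryQueue {
        public:
            bool push(uint32_t sample) {
                check_invariant();                      // object still structurally valid?
                if (count_ == kCapacity) return false;  // precondition: not full
                buffer_[(head_ + count_) % kCapacity] = sample;
                ++count_;
                check_invariant();                      // postcondition: still valid
                return true;
            }

        private:
            static constexpr uint32_t kCapacity = 64;
            uint32_t buffer_[kCapacity] = {};
            uint32_t head_ = 0;
            uint32_t count_ = 0;

            void check_invariant() const {
                // Indices must stay in range; a bit flip in head_ or count_ breaks this.
                if (head_ >= kCapacity || count_ > kCapacity) {
                    fail_safe();
                }
            }

            static void fail_safe() {
                std::abort();  // placeholder for the real recovery/reset path
            }
        };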

    And, it just so happens, NASA has used C++ for major projects such as the Mars Rover.

    C++ class abstraction and encapsulation enabled rapid development and testing among multiple projects and developers.

    They avoided certain C++ features that could create problems:

    1. Exceptions
    2. Templates
    3. Iostream (no console)
    4. Multiple inheritance
    5. Operator overloading (other than new and delete)
    6. Dynamic allocation (used a dedicated memory pool and placement new to avoid the possibility of system heap corruption).
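
    A minimal sketch of that last point, assuming a statically allocated pool and a trivial bump allocator (the pool size and the Telemetry type are illustrative, not from the NASA projects):

        #include <cstddef>
        #include <cstdint>
        #include <new>  // placement new

        // Statically allocated, suitably aligned pool: the system heap is never
        // touched, so its metadata cannot be corrupted and allocation cannot fail
        // unpredictably at run time.
        alignas(std::max_align_t) static unsigned char g_pool[4096];
        static std::size_t g_pool_offset = 0;

        // Very small bump allocator over the pool (illustrative; no deallocation).
        void* pool_alloc(std::size_t size, std::size_t align) {
            const std::size_t aligned = (g_pool_offset + align - 1) & ~(align - 1);
            if (aligned + size > sizeof(g_pool)) return nullptr;  // pool exhausted
            g_pool_offset = aligned + size;
            return g_pool + aligned;
        }

        struct Telemetry {
            uint32_t sequence;
            float    temperature;
        };

        Telemetry* make_telemetry(uint32_t seq, float temp) {
            void* mem = pool_alloc(sizeof(Telemetry), alignof(Telemetry));
            if (mem == nullptr) return nullptr;
            // Placement new constructs the object inside the pre-reserved pool.
            return new (mem) Telemetry{seq, temp};
        }
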
  • 2020-12-02 03:59

    Disclaimer: I'm not a radioactivity professional and have not worked on this kind of application. But I have worked on soft errors and redundancy for long-term archival of critical data, which is somewhat related (same problem, different goals).

    The main problem with radioactivity, in my opinion, is that it can flip bits, and thus can/will tamper with any digital memory. These errors are usually called soft errors, bit rot, etc.

    The question is then: how to compute reliably when your memory is unreliable?

    To significantly reduce the rate of soft errors (at the expense of computational overhead since it will mostly be software-based solutions), you can either:

    • rely on the good old redundancy scheme, and more specifically on the more efficient error-correcting codes (same purpose, but cleverer algorithms, so that you can recover more bits with less redundancy). This is sometimes (wrongly) also called checksumming. With this kind of solution, you have to store the full state of your program at any moment in a master variable/class (or a struct), compute an ECC over it, check that the ECC is correct before doing anything, and repair the fields if it is not. This solution does not, however, guarantee that your software can keep working; it simply works correctly when it can, or stops when it cannot, because the ECC tells you when something is wrong, and in that case you can stop your software so that you don't get fake results. (A small sketch follows after this list.)

    • or you can use resilient algorithmic data structures, which guarantee, up to some bound, that your program will still give correct results even in the presence of soft errors. These algorithms can be seen as a mix of common algorithmic structures with ECC schemes natively mixed in, but they are much more resilient than that, because the resiliency scheme is tightly bound to the structure, so you don't need to encode additional procedures to check the ECC, and they are usually a lot faster. These structures provide a way to ensure that your program will work under any conditions, up to the theoretical bound on soft errors. You can also mix these resilient structures with the redundancy/ECC scheme for additional security (or encode your most important data structures as resilient, and the rest, the expendable data that you can recompute from the main data structures, as normal data structures with a bit of ECC or a parity check, which is very fast to compute).
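
    A minimal sketch of the first option (my own illustration: a plain checksum only detects corruption, whereas a real ECC such as a Hamming or Reed-Solomon code, or a redundant copy of the struct, would also let you repair it). The whole program state lives in one master struct, gets sealed after every update, and is verified before every use:

        #include <cstddef>
        #include <cstdint>
        #include <cstdlib>

        // All mutable program state gathered in one master struct, plus a seal.
        struct MasterState {
            uint32_t mode;
            uint32_t counter;
            float    setpoint;
            uint32_t checksum;  // seal over the fields above
        };

        // Adler-32-style checksum over the state bytes, excluding the seal itself.
        static uint32_t compute_check(const MasterState& s) {
            const unsigned char* bytes = reinterpret_cast<const unsigned char*>(&s);
            uint32_t a = 1, b = 0;
            for (std::size_t i = 0; i < offsetof(MasterState, checksum); ++i) {
                a = (a + bytes[i]) % 65521;
                b = (b + a) % 65521;
            }
            return (b << 16) | a;
        }

        void seal(MasterState& s)         { s.checksum = compute_check(s); }
        bool verify(const MasterState& s) { return s.checksum == compute_check(s); }

        void step(MasterState& s) {
            if (!verify(s)) {
                std::abort();  // placeholder: stop rather than produce fake results
            }
            ++s.counter;       // ... do the real work on the verified state ...
            seal(s);           // re-seal after every modification
        }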

    If you are interested in resilient data structures (which is a recent, but exciting, new field in algorithmics and redundancy engineering), I advise you to read the following documents:

    • Resilient algorithms and data structures (intro) by Giuseppe F. Italiano, Università di Roma "Tor Vergata"

    • Christiano, P., Demaine, E. D., & Kishore, S. (2011). Lossless fault-tolerant data structures with additive overhead. In Algorithms and Data Structures (pp. 243-254). Springer Berlin Heidelberg.

    • Ferraro-Petrillo, U., Grandoni, F., & Italiano, G. F. (2013). Data structures resilient to memory faults: an experimental study of dictionaries. Journal of Experimental Algorithmics (JEA), 18, 1-6.

    • Italiano, G. F. (2010). Resilient algorithms and data structures. In Algorithms and Complexity (pp. 13-24). Springer Berlin Heidelberg.

    If you are interested in knowing more about the field of resilient data structures, you can check out the works of Giuseppe F. Italiano (and work your way through the refs) and the Faulty-RAM model (introduced in Finocchi et al. 2005; Finocchi and Italiano 2008).

    Edit: I illustrated prevention of and recovery from soft errors mainly for RAM and data storage, but I didn't talk about computation (CPU) errors. Other answers already pointed to using atomic transactions as in databases, so I will propose another, simpler scheme: redundancy and majority vote.

    The idea is that you simply perform each computation you need x times and store the results in x different variables (with x >= 3). You can then compare your x variables (a small sketch with x == 3 follows this list):

    • if they all agree, then there's no computation error at all.
    • if they disagree, then you can use a majority vote to get the correct value, and since this means the computation was partially corrupted, you can also trigger a system/program state scan to check that the rest is ok.
    • if the majority vote cannot determine a winner (all x values are different), then it's a perfect signal for you to trigger the failsafe procedure (reboot, raise an alert to user, etc.).
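
    A minimal sketch of this triple-computation scheme (compute() is a placeholder for whatever work needs protecting, and x is fixed at 3 here):

        #include <cstdint>
        #include <cstdlib>

        // The computation to protect; any deterministic function of its input.
        static uint32_t compute(uint32_t input) {
            return input * 2654435761u;  // placeholder workload
        }

        // Run the computation three times and vote. With x == 3, one corrupted
        // result can be outvoted; if all three disagree, fall back to a failsafe.
        uint32_t compute_voted(uint32_t input) {
            const uint32_t a = compute(input);
            const uint32_t b = compute(input);
            const uint32_t c = compute(input);

            if (a == b || a == c) return a;  // a agrees with at least one other copy
            if (b == c)           return b;  // a was the corrupted one
            // No majority at all: trigger the failsafe procedure.
            std::abort();                    // placeholder: reboot / alert the user
        }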

    This redundancy scheme is very fast compared to ECC (practically O(1)) and it provides you with a clear signal when you need to fail safe. The majority vote is also (almost) guaranteed never to produce corrupted output and to recover from minor computation errors, because the probability that x corrupted computations give the same wrong output is infinitesimal (since there is a huge number of possible outputs, it's almost impossible to randomly get the same wrong value 3 times, and even less likely if x > 3).

    So with majority vote you are safe from corrupted output, and with redundancy x == 3, you can recover 1 error (with x == 4 it will be 2 errors recoverable, etc. -- the exact equation is nb_error_recoverable == (x-2) where x is the number of calculation repetitions because you need at least 2 agreeing calculations to recover using the majority vote).

    The drawback is that you need to compute x times instead of once, so you have an additional computation cost, but it's linear complexity, so asymptotically you don't lose much for the benefits you gain. A fast way to do a majority vote is to compute the mode of an array, but you can also use a median filter.

    Also, if you want to make extra sure the calculations are conducted correctly, if you can make your own hardware you can construct your device with x CPUs, and wire the system so that calculations are automatically duplicated across the x CPUs with a majority vote done mechanically at the end (using AND/OR gates for example). This is often implemented in airplanes and mission-critical devices (see triple modular redundancy). This way, you would not have any computational overhead (since the additional calculations will be done in parallel), and you have another layer of protection from soft errors (since the calculation duplication and majority vote will be managed directly by the hardware and not by software -- which can more easily get corrupted since a program is simply bits stored in memory...).
