Compiling an application for use in highly radioactive environments

名媛妹妹 2020-12-02 03:30

We are compiling an embedded C++ application that is deployed in a shielded device in an environment bombarded with ionizing radiation. We are using GCC and cross-compiling for ARM.

23 Answers
  • 2020-12-02 04:00

    Perhaps it would help to know what it means for the hardware to be "designed for this environment". How does it correct and/or indicate the presence of SEU errors?

    On one space-exploration-related project, we had a custom MCU which would raise an exception/interrupt on SEU errors, but with some delay, i.e. some cycles might pass / some instructions might be executed after the insn that caused the SEU exception.

    The data cache was particularly vulnerable, so a handler would invalidate the offending cache line and restart the program. The catch: due to the imprecise nature of the exception, the sequence of insns headed by the exception-raising insn might not be restartable.

    We identified the hazardous (non-restartable) sequences (such as lw $3, 0x0($2) followed by an insn that modifies $2 and is not data-dependent on $3), and I made modifications to GCC so that such sequences do not occur (e.g., as a last resort, separating the two insns with a nop).
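
    As a purely illustrative sketch of such a "separated" sequence, here is what the restartable form might look like, written as GNU C inline assembly for a hypothetical MIPS-like target (the function, register choices, and mnemonics are assumptions for the example, not the original project's code):

    /* Illustrative only: the nop keeps the base register intact until
       the load is past the point where a delayed SEU trap could force
       re-execution. */
    static inline int load_and_advance(int **p)
    {
        int value;
        __asm__ volatile (
            "lw    %0, 0(%1)\n\t"   /* load; may raise a delayed SEU exception */
            "nop\n\t"               /* if the trap lands here, %1 is unmodified,
                                       so the lw can safely be re-executed */
            "addiu %1, %1, 4"       /* only now modify the base register */
            : "=&r"(value), "+r"(*p)
            :
            : "memory");
        return value;
    }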

    Just something to consider ...

  • 2020-12-02 04:03

    You want 3+ slave machines with a master outside the radiation environment. All I/O passes through the master, which contains a voting and/or retry mechanism. The slaves must each have a hardware watchdog, and the call to bump them should be surrounded by CRCs or the like to reduce the probability of involuntary bumping. Bumping should be controlled by the master, so a lost connection with the master equals a reboot within a few seconds.
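
    To illustrate the voting part (a minimal sketch, not from the answer itself), the master can apply a bitwise 2-of-3 majority over the three slave replies:

    #include <stdint.h>

    /* Bitwise 2-of-3 majority: each result bit is whatever at least two
       of the three replies agree on, so a single corrupted reply is
       outvoted. Any disagreement can additionally be flagged for retry. */
    static uint32_t vote3(uint32_t a, uint32_t b, uint32_t c)
    {
        return (a & b) | (a & c) | (b & c);
    }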

    One advantage of this solution is that you can use the same API for the master as for the slaves, so redundancy becomes a transparent feature.

    Edit: From the comments, I feel the need to clarify the "CRC idea". The possibility of a slave bumping its own watchdog is close to zero if you surround the bump with CRC or digest checks on random data from the master. That random data is only sent from the master when the slave under scrutiny is aligned with the others. The random data and the CRC/digest are cleared immediately after each bump. The master-slave bump frequency should be more than double the watchdog timeout. The data sent from the master is uniquely generated every time.
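
    A rough sketch of the slave side of that exchange, under the assumptions above (crc32(), hw_watchdog_bump(), and the message plumbing are hypothetical placeholders):

    #include <stdint.h>
    #include <string.h>
    #include <stdbool.h>

    uint32_t crc32(const uint8_t *data, size_t len);    /* placeholder digest */
    void hw_watchdog_bump(uint32_t digest);             /* placeholder driver */

    static uint8_t challenge[16];       /* random data from the master */
    static bool    challenge_valid = false;

    /* Called when the master sends fresh random data, which it only does
       after verifying this slave is aligned with the others. */
    void on_master_challenge(const uint8_t *data)
    {
        memcpy(challenge, data, sizeof challenge);
        challenge_valid = true;
    }

    /* The watchdog is only bumped while fresh, verified data is present;
       the data is cleared immediately afterwards, so a runaway slave
       cannot keep itself alive. */
    void try_bump_watchdog(void)
    {
        if (!challenge_valid)
            return;
        hw_watchdog_bump(crc32(challenge, sizeof challenge));
        memset(challenge, 0, sizeof challenge);
        challenge_valid = false;
    }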

  • 2020-12-02 04:03

    There are a huge number of replies here already, but I'll try to sum up my ideas on this.

    Something crashing or not working correctly could be the result of your own mistakes; then it should be easy to fix once you locate the problem. But there is also the possibility of hardware failures, and those are difficult if not impossible to fix overall.

    I would recommend first trying to catch the problematic situation by logging (stack, registers, function calls), either by writing the logs to a file somewhere or by transmitting them directly ("oh no - I'm crashing").

    Recovery from such an error situation is either a reboot (if the software is still alive and kicking) or a hardware reset (e.g. hardware watchdogs). It's easier to start with the first one.
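
    One common way to make such logs survive the reboot (a sketch, assuming your toolchain supports a RAM section that startup code leaves uninitialized; the section and field names are illustrative):

    #include <stdint.h>

    /* A crash record kept in a RAM section the C runtime does not zero
       at boot (".noinit" is a common GCC section name; adjust to your
       linker script). */
    struct crash_log {
        uint32_t magic;            /* marks the record as valid */
        uint32_t pc, sp, lr;       /* registers captured in the fault handler */
        uint32_t fault_status;
    };

    #define CRASH_MAGIC 0xDEADC0DEu

    __attribute__((section(".noinit")))
    static volatile struct crash_log g_crash_log;

    void fault_handler(uint32_t pc, uint32_t sp, uint32_t lr, uint32_t status)
    {
        g_crash_log.pc = pc;
        g_crash_log.sp = sp;
        g_crash_log.lr = lr;
        g_crash_log.fault_status = status;
        g_crash_log.magic = CRASH_MAGIC;   /* write the marker last */
        /* then force a reset, e.g. by letting the hardware watchdog expire */
    }

    void check_previous_crash(void)        /* call early during boot */
    {
        if (g_crash_log.magic == CRASH_MAGIC) {
            /* transmit the record home, then invalidate it */
            g_crash_log.magic = 0;
        }
    }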

    If the problem is hardware related, then the logging should help you identify in which function call the problem occurs, and that can give you inside knowledge of what is not working and where.

    Also, if the code is relatively complex, it makes sense to "divide and conquer" it, meaning you remove or disable some function calls where you suspect the problem is. Typically, by disabling half of the code and enabling the other half, you get a "works" / "does not work" kind of decision, after which you can focus on the half of the code where the problem is.

    If the problem occurs only after some time, then a stack overflow can be suspected; in that case it's better to monitor the stack pointer registers and check whether they grow constantly.
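
    A common way to check for that without a debugger (a minimal sketch; the stack symbol comes from your linker script, so the names here are assumptions, and a descending stack is assumed):

    #include <stdint.h>
    #include <stddef.h>

    extern uint32_t __stack_limit[];   /* lowest stack address, from the linker */
    #define STACK_FILL 0xA5A5A5A5u

    /* At boot, paint the unused stack with a known pattern, up to the
       current stack frame. */
    void stack_paint(void)
    {
        for (uint32_t *p = __stack_limit; p < (uint32_t *)&p; p++)
            *p = STACK_FILL;
    }

    /* Periodically count how much of the pattern is still intact; if
       the count keeps shrinking, the stack high-water mark keeps
       growing and an overflow is approaching. */
    size_t stack_free_words(void)
    {
        size_t n = 0;
        for (uint32_t *p = __stack_limit; *p == STACK_FILL; p++)
            n++;
        return n;
    }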

    And if you manage to minimize your code all the way down to a "hello world" kind of application and it still fails randomly, then hardware problems are to be expected, and there needs to be a "hardware upgrade": meaning inventing a CPU/RAM/... hardware combination that tolerates the radiation better.

    Probably the most important thing is how you get your logs back if the machine has fully stopped / reset / does not work; probably the first thing the bootstrap should do is report back home if a problematic situation is encountered.

    If it's also possible in your environment to transmit a signal and receive a response, you could try to construct some sort of online remote debugging environment, but then you must have at least the communication medium working and some processor and some RAM in a working state. And by remote debugging I mean either a GDB / gdb-stub kind of approach or your own implementation of whatever you need to get back from your application (e.g. download log files, download the call stack, download RAM, restart).

  • 2020-12-02 04:05

    It may be possible to use C to write programs that behave robustly in such environments, but only if most forms of compiler optimization are disabled. Optimizing compilers are designed to replace many seemingly redundant coding patterns with "more efficient" ones, and they may have no clue that the reason the programmer tests x==42, when the compiler knows there is no way x could hold anything else, is that the programmer wants to prevent the execution of certain code with x holding some other value, even in cases where the only way it could hold that value is if the system received some kind of electrical glitch.
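
    A hypothetical illustration of that point (the names are made up for the example):

    extern void fire_thruster(void);

    static int x = 42;    /* never written anywhere else in the program */

    void maybe_fire(void)
    {
        /* The programmer intends this test to guard against a
           radiation-induced bit flip in x. After constant propagation
           the compiler "knows" x is always 42, so it may emit an
           unconditional call, silently deleting the guard. */
        if (x == 42)
            fire_thruster();
    }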

    Declaring variables as volatile is often helpful, but may not be a panacea. Of particular importance, note that safe coding often requires that dangerous operations have hardware interlocks that require multiple steps to activate, and that code be written using the pattern:

    /* ... code that checks system state ... */
    if (system_state_favors_activation)
    {
      prepare_for_activation();
      /* ... code that checks system state again ... */
      if (system_state_is_valid)
      {
        if (system_state_favors_activation)
          trigger_activation();
      }
      else
        perform_safety_shutdown_and_restart();
    }
    cancel_preparations();
    

    If a compiler translates the code in relatively literal fashion, and if all the checks on system state are repeated after prepare_for_activation(), the system may be robust against almost any plausible single glitch event, even one that arbitrarily corrupts the program counter and stack. If a glitch occurs just after a call to prepare_for_activation(), that implies activation would have been appropriate (since there is no other reason prepare_for_activation() would have been called before the glitch). If the glitch causes code to reach prepare_for_activation() inappropriately, but there are no subsequent glitch events, there is no way for code to subsequently reach trigger_activation() without passing through the validation check or calling cancel_preparations() first. (If the stack glitches, execution might proceed to a spot just before trigger_activation() after the context that called prepare_for_activation() returns, but the call to cancel_preparations() would have occurred between the calls to prepare_for_activation() and trigger_activation(), rendering the latter call harmless.)

    Such code may be safe in traditional C, but not with modern C compilers. Such compilers can be very dangerous in that sort of environment because they aggressively strive to include only code that will be relevant in situations that could come about via some well-defined mechanism and whose resulting consequences would also be well defined. Code whose purpose is to detect and clean up after failures may, in some cases, end up making things worse. If the compiler determines that the attempted recovery would in some cases invoke undefined behavior, it may infer that the conditions that would necessitate such recovery cannot possibly occur, thus eliminating the code that would have checked for them.
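
    A classic illustration of that last point (a sketch; recover() is a placeholder): with a signed len, a compiler may assume len + 16 cannot overflow, conclude the check below can never be true, and delete the recovery branch entirely.

    extern int recover(void);

    int grow(int len)
    {
        /* Intended as a corruption/overflow check. Because signed
           overflow is undefined behavior, the optimizer may reason
           that len + 16 < len is impossible and remove both the test
           and the call to recover(). */
        if (len + 16 < len)
            return recover();
        return len + 16;
    }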

  • 2020-12-02 04:05

    I've really read a lot of great answers!

    Here are my 2 cents: build a statistical model of the memory/register abnormalities by writing software that checks the memory or performs frequent register comparisons. Further, create an emulator, in the style of a virtual machine, where you can experiment with the issue. I guess that if you varied junction size, clock frequency, vendor, casing, etc., you would observe different behavior.
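
    A minimal sketch of the kind of memory checker that could feed such a statistical model (the region, pattern, and reporting are placeholders):

    #include <stdint.h>
    #include <stddef.h>

    #define PATTERN 0x55AA55AAu

    /* Fill a spare RAM region with a known pattern, then rescan it
       periodically; each mismatch is a candidate bit-flip event for
       the failure-rate model. */
    void scrub_init(volatile uint32_t *region, size_t words)
    {
        for (size_t i = 0; i < words; i++)
            region[i] = PATTERN;
    }

    size_t scrub_scan(volatile uint32_t *region, size_t words)
    {
        size_t flips = 0;
        for (size_t i = 0; i < words; i++) {
            if (region[i] != PATTERN) {
                flips++;                 /* log address/bits for the model */
                region[i] = PATTERN;     /* rewrite (scrub) the word */
            }
        }
        return flips;
    }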

    Even our desktop PC memory has a certain failure rate, which, however, doesn't impair day-to-day work.
