Optimizing away a “while(1);” in C++0x


Updated, see below!

I have heard and read that C++0x allows a compiler to print "Hello" for the following snippet:

#include <iostream>

        
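The program in question is along these lines (a sketch; the exact snippet may have differed): an empty while(1); with no side effects, followed by code that prints "Hello". Since C++0x lets the compiler assume the loop terminates, execution may in effect fall through into the code that follows it.

#include <iostream>

int main()
{
    while (1)
        ;                       // empty loop, no side effects
}                               // C++0x: the compiler may assume this loop terminates

void never_called()             // never called, but emitted right after main
{
    std::cout << "Hello" << std::endl;
}
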
8 Answers
  • 2020-11-22 10:51

    In non-trivial cases, it is undecidable for the compiler whether a loop is infinite at all.

    In other cases, the optimiser may move your code into a better complexity class (e.g. it was O(n^2) and you get O(n) or O(1) after optimisation).
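
    For instance, optimisers routinely collapse a simple summation loop into a closed form, turning O(n) work into O(1) (a sketch; what actually happens depends on the compiler):

    unsigned sum_below(unsigned n)
    {
        unsigned s = 0;
        for (unsigned i = 0; i < n; ++i)
            s += i;             // optimisers commonly rewrite the whole loop as n*(n-1)/2
        return s;
    }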

    So including a rule in the C++ standard that disallows removing an infinite loop would make many optimisations impossible, and most people don't want that. I think this largely answers your question.


    Another thing: I never have seen any valid example where you need an infinite loop which does nothing.

    The one example I have heard about was an ugly hack that really should be solved differently: it was about embedded systems where the only way to trigger a reset was to freeze the device so that the watchdog restarts it automatically.
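
    Roughly, that hack looks like the following (a sketch with a made-up register name and address); note that under C++0x the spin loop needs an observable (e.g. volatile) access in its body, or the compiler is entitled to assume it terminates:

    #include <cstdint>

    // Hypothetical memory-mapped status register; the address is invented.
    volatile std::uint32_t* const STATUS_REG =
        reinterpret_cast<volatile std::uint32_t*>(0x40001000);

    void force_reset_via_watchdog()
    {
        // Stop servicing the watchdog and spin until it times out and
        // resets the device. The volatile read is a side effect the
        // compiler must preserve, so the loop cannot be removed.
        for (;;)
            (void)*STATUS_REG;
    }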

    If you know any valid/good example where you need an infinite loop which does nothing, please tell me.

  • 2020-11-22 10:54

    To me, the relevant justification is:

    This is intended to allow compiler transformations, such as removal of empty loops, even when termination cannot be proven.

    Presumably, this is because proving termination mechanically is difficult, and the inability to prove termination hampers compilers which could otherwise make useful transformations, such as moving nondependent operations from before the loop to after or vice versa, performing post-loop operations in one thread while the loop executes in another, and so on. Without these transformations, a loop might block all other threads while they wait for the one thread to finish said loop. (I use "thread" loosely to mean any form of parallel processing, including separate VLIW instruction streams.)

    EDIT: Dumb example:

    while (complicated_condition()) {
        x = complicated_but_externally_invisible_operation(x);
    }
    complex_io_operation();
    cout << "Results:" << endl;
    cout << x << endl;
    

    Here, it would be faster for one thread to do the complex_io_operation while the other is doing all the complex calculations in the loop. But without the clause you have quoted, the compiler has to prove two things before it can make the optimisation: 1) that complex_io_operation() doesn't depend on the results of the loop, and 2) that the loop will terminate. Proving 1) is pretty easy, proving 2) is the halting problem. With the clause, it may assume the loop terminates and get a parallelisation win.
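
    As a rough illustration (not literal compiler output, just the licensed reordering written out by hand, with the functions from the example above declared as placeholders):

    #include <future>
    #include <iostream>

    // Placeholders standing in for the functions used in the loop above.
    bool complicated_condition();
    int  complicated_but_externally_invisible_operation(int);
    void complex_io_operation();

    int x;

    void transformed()
    {
        // Start the I/O early: it does not depend on the loop's result,
        // and the loop may be assumed to terminate.
        auto io = std::async(std::launch::async, complex_io_operation);

        while (complicated_condition())
            x = complicated_but_externally_invisible_operation(x);

        io.wait();                              // join before printing
        std::cout << "Results:" << std::endl;
        std::cout << x << std::endl;
    }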

    I also imagine that the designers considered that the cases where infinite loops occur in production code are very rare and are usually things like event-driven loops which access I/O in some manner. As a result, they have pessimised the rare case (infinite loops) in favour of optimising the more common case (noninfinite, but difficult to mechanically prove noninfinite, loops).

    It does, however, mean that infinite loops used in learning examples will suffer as a result, and will raise gotchas in beginner code. I can't say this is entirely a good thing.

    EDIT: with respect to the insightful article you now link, I would say that "the compiler may assume X about the program" is logically equivalent to "if the program doesn't satisfy X, the behaviour is undefined". We can show this as follows: suppose there exists a program which does not satisfy property X. Where would the behaviour of this program be defined? The Standard only defines behaviour assuming property X is true. Although the Standard does not explicitly declare the behaviour undefined, it has declared it undefined by omission.

    Consider a similar argument: "the compiler may assume a variable x is only assigned to at most once between sequence points" is equivalent to "assigning to x more than once between sequence points is undefined".
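
    A classic illustration of that second equivalence:

    int sequence_point_demo()
    {
        int x = 0;
        x = x++;    // x is modified twice between sequence points:
                    // undefined behaviour under the C++03/C++0x-era rules
        return x;
    }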

  • 2020-11-22 10:55

    I think it's worth pointing out that loops which would be infinite, except for the fact that they interact with other threads via non-volatile, non-synchronised variables, can now yield incorrect behaviour with a new compiler.

    In other words, make your globals volatile, as well as any arguments passed into such a loop via pointer or reference.
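
    For example, a wait loop like the following is only safe if the flag is observable to the compiler; with a plain non-volatile, non-atomic bool, the compiler may hoist the load or assume the loop terminates. A sketch using std::atomic, which in C++0x also gives you the synchronisation that volatile alone does not:

    #include <atomic>

    std::atomic<bool> done{false};      // set to true by another thread when it finishes

    void wait_for_completion()
    {
        while (!done.load(std::memory_order_acquire)) {
            // spin until the other thread stores true with release ordering
        }
    }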

  • 2020-11-22 10:57

    The relevant issue is that the compiler is allowed to reorder code whose side effects do not conflict. The surprising order of execution could occur even if the compiler produced non-terminating machine code for the infinite loop.

    I believe this is the right approach. The language spec defines ways to enforce order of execution. If you want an infinite loop that cannot be reordered around, write this:

    #include <cstdio>

    volatile int dummy_side_effect;

    int main()
    {
        while (1) {
            dummy_side_effect = 0;      // volatile write: a side effect the compiler must keep
        }

        printf("Never prints.\n");
    }
    
  • 2020-11-22 11:09

    I think the issue could perhaps best be stated as: "If a later piece of code does not depend on an earlier piece of code, and the earlier piece of code has no side effects on any other part of the system, the compiler's output may execute the later piece of code before, after, or intermixed with the execution of the former, even if the former contains loops, without regard for when or whether the former code would actually complete." For example, the compiler could rewrite:

    void testfermat(int n)
    {
      int a=1,b=1,c=1;
      while(pow(a,n)+pow(b,n) != pow(c,n))
      {
        if (b > a) a++; else if (c > b) {a=1; b++;} else {a=1; b=1; c++;}
      }
      printf("The result is ");
      printf("%d/%d/%d", a,b,c);
    }
    

    as

    void testfermat(int n)
    {
      int a=1,b=1,c=1;   // shared between the two threads in this sketch
      if (fork_is_first_thread())
      {
        while(pow(a,n)+pow(b,n) != pow(c,n))
        {
          if (b > a) a++; else if (c > b) {a=1; b++;} else {a=1; b=1; c++;}
        }
        signal_other_thread_and_die();
      }
      else // Second thread
      {
        printf("The result is ");
        wait_for_other_thread();
      }
      printf("%d/%d/%d", a,b,c);
    }
    

    Generally not unreasonable, though I might worry that:

      int total=0;
      for (int i=0; num_reps > i; i++)
      {
        update_progress_bar(i);
        total+=do_something_slow_with_no_side_effects(i);
      }
      show_result(total);
    

    would become

      int total=0;
      if (fork_is_first_thread())
      {
        for (int i=0; num_reps > i; i++)
          total+=do_something_slow_with_no_side_effects(i);
        signal_other_thread_and_die();
      }
      else
      {
        for (int i=0; num_reps > i; i++)
          update_progress_bar(i);
        wait_for_other_thread();
      }
      show_result(total);
    

    By having one CPU handle the calculations and another handle the progress bar updates, the rewrite would improve efficiency. Unfortunately, it would make the progress bar updates rather less useful than they should be.

  • 2020-11-22 11:11

    I think the correct interpretation is the one from your edit: empty infinite loops are undefined behavior.

    I wouldn't say it's particularly intuitive behavior, but this interpretation makes more sense than the alternative one, that the compiler is arbitrarily allowed to ignore infinite loops without invoking UB.

    If infinite loops are UB, it just means that non-terminating programs aren't considered meaningful: according to C++0x, they have no semantics.

    That does make a certain amount of sense too. They are a special case, where a number of side effects just no longer occur (for example, nothing is ever returned from main), and a number of compiler optimizations are hampered by having to preserve infinite loops. For example, moving computations across the loop is perfectly valid if the loop has no side effects, because eventually, the computation will be performed in any case. But if the loop never terminates, we can't safely rearrange code across it, because we might just be changing which operations actually get executed before the program hangs. Unless we treat a hanging program as UB, that is.
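
    As a small sketch of that kind of motion (names are hypothetical; collatz_steps has no side effects and its termination is famously hard to prove, and g stands for some pure, expensive function declared elsewhere):

    long g(long);                     // placeholder for a pure, expensive function

    long collatz_steps(long m)
    {
        long steps = 0;
        while (m != 1) {
            m = (m % 2 == 0) ? m / 2 : 3 * m + 1;
            ++steps;
        }
        return steps;
    }

    long f(long m, long n)
    {
        long s = collatz_steps(m);
        // Since the loop may be assumed to terminate, the compiler is free to
        // compute g(n) before or alongside it, even though that changes what
        // would happen if the loop never actually finished.
        return s + g(n);
    }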
