Why is i = v[i++] undefined?

Asked by 生来不讨喜 · 2021-02-01 13:41

From the C++11 standard, §1.9/15, which discusses ordering of evaluation, comes the following code example:

void g(int i, int* v) {
    i = v[i++]; // the behavior is undefined
}
8 Answers
  •  孤街浪徒 · 2021-02-01 14:05

    "I would think that the subexpression i++ would be completely evaluated before the subexpression v[...] is evaluated."

    But why would you think that?

    One historical reason for this code being UB is to allow compiler optimizations to move side effects around anywhere between sequence points. The fewer sequence points, the more opportunities to optimize, but the more confused programmers become. If the code says:

    a = v[i++];
    

    The intention of the standard is that the code emitted can be:

    a = v[i];
    ++i;
    

    which might compile to just two instructions, whereas:

    tmp = i;
    ++i;
    a = v[tmp];
    

    would need more than two.

    The "optimized code" breaks when a is i, but the standard permits the optimization anyway, by saying that behavior of the original code is undefined when a is i.

    The standard easily could say that i++ must be evaluated before the assignment as you suggest. Then the behavior would be fully defined and the optimization would be forbidden. But that's not how C and C++ do business.

    Also beware that many examples raised in these discussions make it easier to spot the UB than it is in general. This leads people to say it's "obvious" that the behavior should be defined and the optimization forbidden. But consider:

    void g(int *i, int* v, int *dst) {
        *dst = v[(*i)++];
    }
    

    The behavior of this function is defined when i != dst, and in that case you'd want all the optimization you can get (which is why C99 introduced restrict, to allow more optimizations than C89 or C++ do). In order to give you the optimization, behavior is undefined when i == dst. The C and C++ standards tread a fine line when it comes to aliasing, between undefined behavior that's not expected by the programmer, and forbidding desirable optimizations that fail in certain cases. The number of questions about it on SO suggests that the questioners would prefer a bit less optimization and a bit more defined behavior, but it's still not simple to draw the line.
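
    To make the aliasing point concrete, here is a hypothetical driver (main and the variable names are mine) around that same function, repeating its definition so the sketch is self-contained:

    #include <cstdio>

    void g(int *i, int* v, int *dst) {
        *dst = v[(*i)++];
    }

    int main() {
        int v[] = {10, 20, 30};

        int idx = 1, out = 0;
        g(&idx, v, &out);        // i != dst: defined; out == 20, idx == 2
        std::printf("%d %d\n", out, idx);

        int both = 1;
        // g(&both, v, &both);   // i == dst: the same object is modified twice
                                 // with no intervening sequence point -> UB
    }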

    Separate from the question of whether the behavior is fully defined is the issue of whether it should be UB, or merely an unspecified order of execution of certain well-defined operations corresponding to the sub-expressions. The reason C goes for UB is all to do with the idea of sequence points, and the fact that the compiler need not actually have a notion of the value of a modified object until the next sequence point. So rather than constrain the optimizer by saying that "the" value changes at some unspecified point, the standard just says (to paraphrase): (1) any code that relies on the value of a modified object prior to the next sequence point has UB; (2) any code that modifies a modified object has UB. Where a "modified object" is any object that would have been modified since the last sequence point in one or more of the legal orders of evaluation of the subexpressions.
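
    For contrast with UB, here is a minimal sketch (the names left, right, and counter are mine) of what "merely unspecified order of well-defined operations" looks like: each side effect sits inside its own function call, and the two calls cannot interleave, so every legal evaluation order gives a defined result.

    #include <cstdio>

    int counter = 0;

    int left()  { return ++counter; }  // the increment is sequenced inside the call
    int right() { return ++counter; }  // likewise

    int main() {
        // In C++11 the order in which left() and right() are evaluated is
        // unspecified, so this prints either "1 2" or "2 1" -- but never UB,
        // because the two calls (and their increments) cannot interleave.
        std::printf("%d %d\n", left(), right());
    }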

    Other languages (e.g. Java) go the whole way and completely define the order of expression side-effects, so there's definitely a case against C's approach. C++ just doesn't accept that case.
