Is if(var == true) faster than if(var != false)?

礼貌的吻别 2020-12-18 18:25

Pretty simple question. I know it would probably be a tiny optimization, but eventually you'll use enough if statements for it to matter.

EDIT: Thank you to those o

10 Answers
  • 2020-12-18 18:45

    Did you know that on x86 processors it's more efficient to do x ^= x, where x is a 32-bit integer, than it is to do x = 0? It's true, and of course it has the same result. Hence, any time you see x = 0 in code, you could replace it with x ^= x and gain efficiency.

    Now, have you ever seen x ^= x in much code?

    The reason you haven't is not just because the efficiency gain is slight, but because this is precisely the sort of change that a compiler (if compiling to native code) or jitter (if compiling IL or similar) will make. Disassemble some x86 code and it's not unusual to see the assembly equivalent of x ^= x, though the code that was compiled to do this almost certainly had x = 0, or perhaps something much more complicated like x = 4 >> 6, or x = 32 - y where analysis of the code shows that y will always contain 32 at this point, and so on.

    For this reason, even though x ^= x is known to be more efficient, the sole effect of using it in the vast, vast majority of cases would be to make the code less readable. (The only exception would be where x ^= y was already part of the algorithm being used and x and y happened to be the same in this particular case; there, x ^= x would make the use of that algorithm clearer, while x = 0 would hide it.)

    In 99.999999% of cases the same is going to apply to your example. In the remaining 0.000001% of cases there would be an efficiency difference, but only because of some strange sort of operator overloads that the compiler can't prove equivalent to each other. Indeed, 0.000001% is overstating the case, and is mentioned only because I'm pretty sure that if I tried hard enough I could write something where one was less efficient than the other. Normally people aren't trying hard to do so.

    If you ever look at your own code in Reflector, you'll probably find a few cases where it looks very different to the code you wrote. The reason for this is that it is reverse-engineering the IL of your code, rather than your code itself, and indeed one thing you will often find is things like if(var == true) or if(var != false) being turned into if(var), or even into if(!var) with the if and else blocks reversed.

    Look deeper and you'll see that even further changes are made, because there is more than one way to skin the same cat. In particular, looking at the way switch statements get converted to IL is interesting; sometimes they get turned into the equivalent of a bunch of if-else if statements, and sometimes they get turned into a lookup into a table of jumps, depending on which seemed more efficient in the case in question.

    Look deeper still and other changes are made when it gets compiled to native code.

    I'm not going to agree with those who talk of "premature optimisation" just because you ask about the performance difference between two different approaches, because knowledge of such differences is a good thing; it's only using that knowledge prematurely that is premature (by definition). But a change that is going to be compiled away is neither premature nor an optimisation, it's just a null change.
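
    For what it's worth, here's a minimal sketch (the names are illustrative, not from the question) that you can compile in Release mode and inspect with ildasm or a decompiler; the '== true' comparison is stripped during compilation, so both methods typically reduce to the same "load flag, branch if false" IL.

    public static class BranchDemo
    {
        // The explicit comparison against true is elided by the compiler ...
        public static int WithExplicitCompare(bool flag)
        {
            if (flag == true)
                return 1;
            return 0;
        }

        // ... so this method typically produces identical IL to the one above.
        public static int WithImplicitTest(bool flag)
        {
            if (flag)
                return 1;
            return 0;
        }
    }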

  • 2020-12-18 18:49

    The other answers are all good, I just wanted to add:

    This is not a meaningful question, because it assumes a 1:1 relation between the notation and the resulting IL or native code.

    There isn't. And that's true even in C++, and even in C. You have to go all the way down to native code to have such a question make sense.

    Edited to add:

    The developers of the first Fortran compiler (ca. 1957) were surprised one day when reviewing its output. It was emitting code that was not obviously correct (though it was); in essence, it was making optimization decisions its own developers had not anticipated, yet that turned out to be right.

    The moral of this story: compilers have been smarter than people for over 50 years. Don't try to outsmart them unless you're prepared to examine their output and/or do extensive performance testing.

  • 2020-12-18 18:56

    Always optimize for ease of understanding. This is a cardinal rule of programming, as far as I am concerned. You should not micro-optimize, or even optimize at all, until you know that you need to do so, and where you need to do so. It's a very rare case that squeezing every ounce of performance out is more important than maintainability, and it's even rarer that you're so awesome that you know where to optimize as you initially write the code.

    Furthermore, things like this get automatically optimized out in any decent language.

    tl;dr don't bother

  • 2020-12-18 18:56

    Knowing which of these two specific cases is faster is a level of detail that is seldom (if ever) required in a high-level language. Perhaps you might need to know it if your compiler is piss-poor at optimization. However, if your compiler is that bad, you would probably be better off overall getting a better one if possible.

    If you are programming in assembly, it is more likely that knowledge of the two cases would be useful. Others have already given the assembly breakdown with respect to branch statements, so I will not bother to duplicate that part of the response. However, one item that has been omitted, in my opinion, is that of the comparison.

    It is conceivable that a processor may change the status flags upon loading 'var'. If so, then if 'var' is 0, then the zero-flag may be set when the variable is loaded into a register. With such a processor, no explicit comparison against FALSE would be required. The equivalent assembly pseudo-code would be ...

    load 'var' into register
    branch if zero or if not zero as appropriate
    

    Using this same processor, if you were to test it against TRUE, the assembly pseudo-code would be ...

    load 'var' into register
    compare that register to TRUE (a specific value)
    branch if equal or if not equal as appropriate
    

    In practice, do any processors behave like this? I don't know--others will be more knowledgeable than I. I do know of some that don't behave in this fashion, but I do not know about all.

    Assuming that some processors do behave as in the scenario described above, what can we learn? IF (and that is a big IF) you are going to worry about this, avoid testing booleans against explicit values ...

    if (var == TRUE)
    if (var != FALSE)
    

    and use one of the following for testing boolean types ...

    if (var)
    if (!var)
    
  • 2020-12-18 18:57

    A rule of thumb that usually works is "If you know they do the same thing, then the compiler knows too".

    If the compiler knows that the two forms yield the same result, then it will pick the fastest one.

    Hence, assume that they are equally fast, until your profiler tells you otherwise.

  • 2020-12-18 19:01

    First off: the only way to answer a performance question is to measure. Try it yourself and you'll find out.

    As for what the compiler does: I remind you that "if" is just a conditional goto. When you have

    if (x)
       Y();
    else
       Z();
    Q();
    

    the compiler generates that as either:

    evaluate x
    branch to LABEL1 if result was false
    call Y
    branch to LABEL2
    LABEL1:
    call Z
    LABEL2:
    call Q
    

    or

    evaluate !x
    branch to LABEL1 if result was true
    

    depending on whether it is easier to generate the code to elicit the "normal" or "inverted" result for whatever "x" happens to be. For example, if you have if (a <= b) it might be easier to generate it as if (!(a > b)). Or vice versa; it depends on the details of the exact code being compiled.

    Regardless, I suspect you have bigger fish to fry. If you care about performance, use a profiler and find the slowest thing and then fix that. It makes no sense whatsoever to be worried about nanosecond optimizations when you probably are wasting entire milliseconds somewhere else in your program.
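
    If you do want to measure it yourself, a rough Stopwatch sketch like the one below is enough to see there is nothing to find here (the loop count and the 'sink' accumulator are illustrative choices to keep the JIT from discarding the work; for anything subtler you'd want a proper benchmarking harness rather than this).

    using System;
    using System.Diagnostics;

    class IfTimingSketch
    {
        static void Main()
        {
            const int iterations = 100_000_000;
            bool flag = DateTime.Now.Ticks % 2 == 0; // value not known at compile time
            long sink = 0;

            var sw = Stopwatch.StartNew();
            for (int i = 0; i < iterations; i++)
                if (flag == true) sink++;
            sw.Stop();
            Console.WriteLine($"flag == true : {sw.ElapsedMilliseconds} ms");

            sw = Stopwatch.StartNew();
            for (int i = 0; i < iterations; i++)
                if (flag != false) sink++;
            sw.Stop();
            Console.WriteLine($"flag != false: {sw.ElapsedMilliseconds} ms");

            Console.WriteLine(sink); // use the result so the loops aren't optimized away
        }
    }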
