Advantage of switch over if-else statement

梦谈多话 2020-11-22 11:08

What's the best practice for using a switch statement vs using an if statement for 30 unsigned enumerations where about 10 have an ex…
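
For context, a minimal sketch of the two shapes being compared (the enum, its values, and handle_special() are made-up placeholders, not taken from the original question):

    enum Err : unsigned { ErrA, ErrB, ErrC, ErrD, ErrE, ErrF };  // imagine ~30 of these

    void handle_special();   // hypothetical shared action for the "special" values

    void with_switch(Err e) {
        switch (e) {
            case ErrB:
            case ErrD:
            case ErrE:            // the special values fall through to one shared action
                handle_special();
                break;
            default:
                break;            // the remaining values: no action
        }
    }

    void with_if(Err e) {
        if (e == ErrB || e == ErrD || e == ErrE)
            handle_special();
    }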

22 answers
  • 2020-11-22 11:50

    I'm not the person to tell you about speed and memory usage, but looking at a switch statement is a hell of a lot easier to understand than a large if statement (especially 2-3 months down the line).

  • 2020-11-22 11:51

    I would pick the if statement for the sake of clarity and convention, although I'm sure that some would disagree. After all, you want to do something if some condition is true! Having a switch with only one action seems a little... unnecessary.

  • 2020-11-22 11:53

    Use switch.

    In the worst case the compiler will generate the same code as an if-else chain, so you don't lose anything. If in doubt, put the most common cases first in the switch statement.

    In the best case the optimizer may find a better way to generate the code. Common things a compiler does are to build a binary decision tree (which saves compares and jumps in the average case) or to build a jump table (which works without compares at all), as sketched below.
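
    To illustrate (a sketch with made-up functions and values, not code from this thread): a switch over dense, contiguous values is the classic jump-table candidate, while sparse values tend to become a compare-and-branch tree.

        // Dense, contiguous case values: typically lowered to a jump table
        // (one indexed indirect jump, no per-case comparison).
        int price(int item) {
            switch (item) {
                case 0: return 10;
                case 1: return 25;
                case 2: return 7;
                case 3: return 42;
                case 4: return 3;
                default: return -1;
            }
        }

        // Sparse case values: more likely lowered to a binary decision tree
        // of compares and branches (about log2(n) compares instead of n).
        int category(int code) {
            switch (code) {
                case 3:     return 1;
                case 100:   return 2;
                case 4096:  return 3;
                case 90000: return 4;
                default:    return 0;
            }
        }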

  • 2020-11-22 11:53

    Code for readability. If you want to know what performs better, use a profiler, as optimizations and compilers vary, and performance issues are rarely where people think they are.

  • 2020-11-22 11:56

    Use switch; it's what it's for and what programmers expect.

    I would put the redundant case labels in though - just to make people feel comfortable. I had to stop and remember when/what the rules are for leaving them out, and you don't want the next programmer working on it to have to do any unnecessary thinking about language details (it might be you in a few months' time!). See the sketch below.
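
    A sketch of that advice (names are made up, not code from this thread): list every enumerator explicitly, so readers never have to recall the default/fall-through rules, and gcc/clang's -Wswitch can flag a newly added enumerator that isn't handled.

        enum Err : unsigned { ErrA, ErrB, ErrC, ErrD };

        void handle_special();   // hypothetical shared action

        void handle(Err e) {
            switch (e) {
                case ErrB:
                case ErrD:           // the values that share the one action
                    handle_special();
                    break;
                case ErrA:
                case ErrC:           // explicitly "no action" instead of relying on default
                    break;
            }                        // no default: -Wswitch warns if an enumerator is missed
        }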

  • 2020-11-22 11:57

    Compilers are really good at optimizing switch. Recent gcc is also good at optimizing a bunch of conditions in an if.

    I made some test cases on godbolt.

    When the case values are grouped close together, gcc, clang, and icc are all smart enough to use a bitmap to check if a value is one of the special ones.
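
    The source from the godbolt link isn't reproduced here, so below is a sketch of the kind of code being compiled. The case values (1, 7, 10, 16, 21, 22, 32) are an assumption, chosen only so that they match the set bits of the 4301325442 (0x100610482) bitmap in the asm below; a plain unsigned parameter stands in for the errtype enum.

        void fire_special_event();

        void errhandler_switch(unsigned errNumber) {
            switch (errNumber) {
                case 1: case 7: case 10:
                case 16: case 21: case 22: case 32:   // the "special" error numbers
                    fire_special_event();
                    break;
            }
        }

        void errhandler_if(unsigned errNumber) {
            if (errNumber == 1 || errNumber == 7 || errNumber == 10 ||
                errNumber == 16 || errNumber == 21 || errNumber == 22 ||
                errNumber == 32)
                fire_special_event();
        }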

    e.g. gcc 5.2 -O3 compiles the switch (and the if to something very similar) to:

    errhandler_switch(errtype):  # gcc 5.2 -O3
        cmpl    $32, %edi
        ja  .L5
        movabsq $4301325442, %rax   # highest set bit is bit 32 (the 33rd bit)
        btq %rdi, %rax
        jc  .L10
    .L5:
        rep ret
    .L10:
        jmp fire_special_event()
    

    Notice that the bitmap is immediate data, so there's no potential data-cache miss accessing it, or a jump table.
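
    In C terms, the transformation is roughly the following (a hand-written equivalent of the asm above, not actual compiler output):

        void fire_special_event();

        // One range check, then test one bit of a 64-bit immediate constant:
        // bits 1, 7, 10, 16, 21, 22 and 32 are set in 0x100610482 (= 4301325442).
        void errhandler_bitmap(unsigned errNumber) {
            const unsigned long long special = 0x100610482ULL;
            if (errNumber <= 32 && ((special >> errNumber) & 1))
                fire_special_event();
        }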

    gcc 4.9.2 -O3 also compiles the switch to a bitmap test, but does the 1U<<errNumber with mov/shift. It compiles the if version to a series of branches.

    errhandler_switch(errtype):  # gcc 4.9.2 -O3
        leal    -1(%rdi), %ecx
        cmpl    $31, %ecx    # cmpl $32, %edi  wouldn't have to wait an extra cycle for lea's output.
                  # However, register read ports are limited on pre-SnB Intel
        ja  .L5
        movl    $1, %eax
        salq    %cl, %rax   # with -march=haswell, it will use BMI's shlx to avoid moving the shift count into ecx
        testl   $2150662721, %eax
        jne .L10
    .L5:
        rep ret
    .L10:
        jmp fire_special_event()
    

    Note how it subtracts 1 from errNumber (with lea to combine that operation with a move). That lets it fit the bitmap into a 32bit immediate, avoiding the 64bit-immediate movabsq which takes more instruction bytes.

    A shorter (in machine code) sequence would be:

        cmpl    $32, %edi
        ja  .L5
        mov     $2150662721, %eax
        dec     %edi   # movabsq and btq is fewer instructions / fewer Intel uops, but this saves several bytes
        bt     %edi, %eax
        jc  fire_special_event
    .L5:
        ret
    

    (The failure to use jc fire_special_event directly is present in all of these compilers' output, and is a missed optimization / compiler bug.)

    rep ret is used at branch targets, and following conditional branches, for the benefit of old AMD K8 and K10 (pre-Bulldozer); see the Stack Overflow question "What does `rep ret` mean?". Without it, branch prediction doesn't work as well on those obsolete CPUs.

    bt (bit test) with a register arg is fast. It combines the work of left-shifting a 1 by errNumber bits and doing a test, but is still 1-cycle latency and only a single Intel uop. It's slow with a memory arg because of its way-too-CISC semantics: with a memory operand for the "bit string", the address of the byte to be tested is computed based on the other arg (divided by 8), and isn't limited to the 1-, 2-, 4-, or 8-byte chunk pointed to by the memory operand.

    From Agner Fog's instruction tables, a variable-count shift instruction is slower than a bt on recent Intel (2 uops instead of 1, and shift doesn't do everything else that's needed).
