Is a bit field any more efficient (computationally) than masking bits and extracting the data by hand?

Asked 2021-02-08 10:16

I have numerous small pieces of data that I want to be able to shove into one larger data type. Let's say that, hypothetically, this is a date and time. The obvious method is
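(The question's code snippet appears to be cut off here. Presumably the "obvious method" is a bit-field struct; as a hedged illustration, the field widths below are an assumption chosen to total 32 bits:)

```cpp
#include <cstdint>

// Hypothetical date/time packed into one 32-bit word via bit fields.
// Widths are assumptions: 12 + 4 + 5 + 5 + 6 = 32 bits.
struct DateTime {
    std::uint32_t year   : 12;  // 0..4095
    std::uint32_t month  : 4;   // 1..12
    std::uint32_t day    : 5;   // 1..31
    std::uint32_t hour   : 5;   // 0..23
    std::uint32_t minute : 6;   // 0..59
};
```

The alternative being asked about is packing the same fields into a single integer with shifts and masks by hand.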

7 Answers
  • 2021-02-08 10:47

    Depends. If you use bit fields, then you let the compiler worry about how to store the data (bit-field layout is almost completely implementation-defined), which means that:

    • It might use more space than necessary, and
    • Accessing each member will be done efficiently.

    The compiler will typically organize the layout of the struct so that the second assumption holds, at the cost of total size of the struct.

    The compiler will probably insert padding between each member, to ease access to each field.

    On the other hand, if you just store everything in a single unsigned long (or an array of chars), then it's up to you to implement efficient access, but you have a guarantee of the layout. It will take up a fixed size, and there will be no padding. And that means that copying the value around may get less expensive. And it'll be more portable (assuming you use a fixed-size int type instead of just unsigned int).
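    A minimal sketch of the manual approach described above, using a fixed-size integer so the layout is guaranteed (the bit positions here are my own convention, not anything from the question):

    ```cpp
    #include <cstdint>

    // Layout (by convention): year[31:20] month[19:16] day[15:11] hour[10:6] minute[5:0]
    constexpr std::uint32_t pack(std::uint32_t year, std::uint32_t month,
                                 std::uint32_t day, std::uint32_t hour,
                                 std::uint32_t minute) {
        return (year << 20) | (month << 16) | (day << 11) | (hour << 6) | minute;
    }

    constexpr std::uint32_t year_of(std::uint32_t dt)   { return dt >> 20; }
    constexpr std::uint32_t month_of(std::uint32_t dt)  { return (dt >> 16) & 0xF; }
    constexpr std::uint32_t day_of(std::uint32_t dt)    { return (dt >> 11) & 0x1F; }
    constexpr std::uint32_t hour_of(std::uint32_t dt)   { return (dt >> 6) & 0x1F; }
    constexpr std::uint32_t minute_of(std::uint32_t dt) { return dt & 0x3F; }
    ```

    The value is exactly 4 bytes on every platform, copies as a plain integer, and the layout never changes between compilers.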

  • 2021-02-08 10:48

    In this example I would do the bit manipulation manually.
    But not because of access speed — rather, because of the ability to compare two dt's.
    In the end the compiler will usually generate better code than you (compilers get better over time and don't make mistakes), but this code is simple enough that you will probably write optimal code by hand anyway (though this is the kind of micro-optimization you should not be worrying about).

    If your dt is an integer formatted as:

    yyyyyyyyyyyy|mmmm|ddddd|hhhhh|mmmmmm
    

    Then you can naturally compare them like this.

    dt t1(getTimeStamp());
    dt t2(getTimeStamp());
    
    if (t1 < t2)
    {    std::cout << "T1 happened before T2\n";
    }
    

    By using a bit field structure the code looks like this:

    dt t1(getTimeStamp());
    dt t2(getTimeStamp());
    
    if (convertToInt(t1) < convertToInt(t2))
    {    std::cout << "T1 happened before T2\n";
    }
    // or
    if ((t1.year < t2.year)
        || ((t1.year == t2.year) && ((t1.month < t2.month)
          || ((t1.month == t2.month) && ((t1.day < t2.day)
            || ((t1.day == t2.day) && (t1.hour  etc.....
    

    Of course, you could get the best of both worlds by using a union that has the bit-field structure on one side and the int as the alternative. Obviously this will depend on exactly how your compiler works, and you will need to test that the fields are getting placed in the correct positions (but this would be a perfect place to learn about TDD).
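    A sketch of that union idea (whether the fields actually line up with the integer view is implementation-defined, which is exactly why it needs a test on each compiler/target; on typical little-endian GCC/Clang targets the first-declared field takes the low bits, so the least significant field goes first):

    ```cpp
    #include <cstdint>

    union dt {
        struct {
            std::uint32_t minute : 6;   // least significant field first
            std::uint32_t hour   : 5;
            std::uint32_t day    : 5;
            std::uint32_t month  : 4;
            std::uint32_t year   : 12;  // most significant field last
        } f;
        std::uint32_t raw;  // single-integer view for cheap comparison.
                            // Note: reading the inactive member is technically
                            // UB in C++ (allowed in C); compilers commonly
                            // support it, but verify on yours.
    };
    ```

    With this layout, `t1.raw < t2.raw` gives the same answer as the field-by-field chronological comparison.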

  • 2021-02-08 10:50

    The compiler generates the same instructions that you would explicitly write to access the bits. So don't expect it to be faster with bitfields.

    In fact, strictly speaking, with bit fields you don't control how they are positioned in the word of data (unless your compiler gives you some additional guarantees — the C99 standard doesn't define any). Doing the masking by hand, you can at least place the two most often accessed fields first and last in the word, because in those two positions it takes one operation instead of two to isolate the field.
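    To illustrate the "one operation instead of two" point (field widths here are arbitrary, just for demonstration):

    ```cpp
    #include <cstdint>

    // Lowest field: already in the low bits, so a single AND isolates it.
    inline std::uint32_t low_field(std::uint32_t w)  { return w & 0x3F; }
    // Highest field: the shift discards everything below it, no mask needed.
    inline std::uint32_t high_field(std::uint32_t w) { return w >> 26; }
    // A middle field needs both a shift and a mask.
    inline std::uint32_t mid_field(std::uint32_t w)  { return (w >> 6) & 0x1F; }
    ```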

  • 2021-02-08 10:50

    Only if your architecture explicitly has a set of instructions for bit-wise manipulation and access.

  • 2021-02-08 10:53

    It is entirely platform and compiler dependent. Some processors, especially microcontrollers, have bit addressing instructions or bit addressable memory, and the compiler can use these directly if you use built-in language constructs. If you use bit-masking to operate on bits on such a processor, the compiler will have to be smarter to spot the potential optimisation.

    On most desktop platforms I would suggest that you are sweating the small stuff, but if you need to know, you should test it by profiling or timing the code, or analyse the generated code. Note that you may get very different results depending on compiler optimisation options, and even different compilers.

  • 2021-02-08 10:54

    The compiler can sometimes combine accesses to bit fields in a non-intuitive manner. I once disassembled the code generated (gcc 3.4.6 for SPARC) when accessing 1-bit entries that were used in conditional expressions. The compiler fused the accesses to the bits and made the comparison with integers. I will try to reproduce the idea (I'm not at work and cannot access the source code that was involved):

    struct bits {
      int b1:1;
      int b2:1;
      int b3:1;
      ...
    } x;
    
    if(x.b1 && x.b2 && !x.b3)
    ...
    if(x.b1 && !x.b2 && x.b3)
    

    was compiled to something equivalent to this (I know the bit order in my example is the opposite, but that is only for the sake of simplifying the example):

    temp = (x & 7);
    if( temp == 6)
    ...
    if( temp == 5)
    

    There's also another point to consider if one wants to use bit fields (they are often more readable than bit-kungfu): if you have some bits to spare, it can be useful to reserve whole bytes for certain fields, thus simplifying access for the processor. An aligned 8-bit field can be loaded with a single byte-move instruction and doesn't need the masking step.
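    A small sketch of that trade-off — the struct and field names are made up for illustration; the point is that the byte-sized member is a plain byte load, while the packed flags share one byte:

    ```cpp
    #include <cstdint>

    struct Packed {
        std::uint8_t  flags;   // several 1-bit flags packed by hand into one byte
        std::uint8_t  channel; // hot field given a whole aligned byte:
                               // reading it is a single byte load, no mask/shift
        std::uint16_t counter;
    };
    ```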
