I've just done a test with bitfields, and the results are surprising me.
#include <iostream>
using namespace std;

class test1 {
public:
    bool test_a:1;
    bool test_b:1;
    bool test_c:1;
};

bool ary_bool4[10];
struct MyStruct {
    bool a1 :1;
    bool a2 :1;
    bool a3 :1;
    bool a4 :1;
    char b1 :2;
    char b2 :2;
    char b3 :2;
    char b4 :6;
    char c1;
};
int main() {
    cout << "char size:\t" << sizeof(char) << endl;
    cout << "short int size:\t" << sizeof(short int) << endl;
    cout << "default int size:\t" << sizeof(int) << endl;
    cout << "long int size:\t" << sizeof(long int) << endl;
    cout << "long long int size:\t" << sizeof(long long int) << endl;
    cout << "ary_bool4 size:\t" << sizeof(ary_bool4) << endl;
    cout << "MyStruct size:\t" << sizeof(MyStruct) << endl;
    // cout << "long long long int size:\t" << sizeof(long long long int) << endl;
    return 0;
}
Output:

char size: 1
short int size: 2
default int size: 4
long int size: 4
long long int size: 8
ary_bool4 size: 10
MyStruct size: 3
From Samuel P. Harbison and Guy L. Steele, "C: A Reference Manual":
The problem:
"Compilers are free to impose constraints on the maximum size of a bit field, and to specify certain addressing boundaries that bit fields cannot cross."
Manipulations that can be done within the standard:
"An unnamed bit field may also be included in a structure to provide padding."
"Specifying a length of 0 for an unnamed bit field has a special meaning: it indicates that no more bit fields should be packed into the area in which the previous bit field was placed... 'Area' here means some implementation-defined storage unit."
Is this what you'd expect, or a compiler bug?
So within C89, C89 with Amendment 1, and C99, it is not a bug. About C++ I don't know, but I think the behavior is similar.
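As an illustration of the zero-width trick quoted above, here is a minimal sketch (the struct names are mine, and both sizes are implementation-defined; on GCC/x86 one would typically see 4 and 8):

#include <iostream>

struct Packed {
    unsigned a : 4;   // a and b normally share one allocation unit
    unsigned b : 4;
};

struct Split {
    unsigned a : 4;
    unsigned   : 0;   // zero-width unnamed bit field: close the current unit
    unsigned b : 4;   // b starts in the next allocation unit
};

int main() {
    std::cout << sizeof(Packed) << ' ' << sizeof(Split) << '\n';
    return 0;
}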
Be careful with bit fields, as much of their behavior is implementation-defined (i.e., left to the compiler):
From C++03, 9.6 Bit-fields (p. 163):
Allocation of bit-fields within a class object is implementation-defined. Alignment of bit-fields is implementation-defined. Bit-fields are packed into some addressable allocation unit. [Note: bit-fields straddle allocation units on some machines and not on others. Bit-fields are assigned right-to-left on some machines, left-to-right on others.]
That is, it is not a bug in the compiler, but rather a lack of a standard definition of how it should behave.
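One hedged way to observe the layout your compiler actually chooses (the result is implementation-defined by design, which is exactly the point) is to set a single bit-field and dump the raw storage:

#include <cstdio>
#include <cstring>

struct Bits {
    unsigned a : 1;
    unsigned b : 1;
};

int main() {
    Bits x{};
    x.a = 1;                    // set only the first bit-field
    unsigned raw = 0;
    std::memcpy(&raw, &x, sizeof x);
    std::printf("%#x\n", raw);  // 0x1 if a occupies the low bit
                                // (right-to-left assignment);
                                // a different value otherwise
    return 0;
}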
Wow, that's surprising. In GCC 4.2.4, the results are 1, 4, and 4, respectively, both in C and C++ modes. Here's the test program I used that works in both C99 and C++.
#ifndef __cplusplus
#include <stdbool.h>
#endif
#include <stdio.h>

struct test1 {
    bool test_a:1;
    bool test_b:1;
    bool test_c:1;
    bool test_d:1;
    bool test_e:1;
    bool test_f:1;
    bool test_g:1;
    bool test_h:1;
};

struct test2 {
    int test_a:1;
    int test_b:1;
    int test_c:1;
    int test_d:1;
    int test_e:1;
    int test_f:1;
    int test_g:1;
    int test_h:1;
};

struct test3 {
    int test_a:1;
    bool test_b:1;
    int test_c:1;
    bool test_d:1;
    int test_e:1;
    bool test_f:1;
    int test_g:1;
    bool test_h:1;
};

int
main()
{
    printf("%zu %zu %zu\n", sizeof (struct test1), sizeof (struct test2),
           sizeof (struct test3));
    return 0;
}
Your compiler has arranged all of the members of test3 on integer-size boundaries. Once a block has been used for a given type (integer bit field or boolean bit field), the compiler does not allocate any further bit fields of a different type until the next boundary.
I doubt it is a bug. It probably has something to do with the underlying architecture of your system.
Edit:
C++ compilers allocate bit-fields in memory as follows: several consecutive bit-field members of the same type are allocated sequentially. As soon as a new type needs to be allocated, it is aligned with the beginning of the next logical memory block. The size of that logical block depends on your processor. Some processors can align to 8-bit boundaries, while others can only align to 16-bit boundaries.
In your test3, each member is of a different type than the one before it, so the memory allocation is 8 * (the minimum logical block size on your system). In your case, the minimum block size is two bytes (16 bits), so the size of test3 is 8 * 2 = 16.
On a system that can allocate 8-bit blocks, I would expect the size to be 8.
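If that allocation rule is what is in play, regrouping the members by type should shrink the struct again. Here is a sketch (the name test3_grouped is mine, and the resulting size is still implementation-defined; GCC on x86 typically prints 4):

#include <iostream>

struct test3_grouped {
    int  test_a:1;   // all the int bit-fields together...
    int  test_c:1;
    int  test_e:1;
    int  test_g:1;
    bool test_b:1;   // ...then all the bool bit-fields
    bool test_d:1;
    bool test_f:1;
    bool test_h:1;
};

int main() {
    std::cout << sizeof(test3_grouped) << '\n';
    return 0;
}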
As a general observation, a signed int of 1 bit doesn't make a lot of sense. Sure, you can probably figure out how to store 0 in it, but then the trouble starts.
One bit must be the sign bit, even in two's complement, but you only have one bit to play with. So, if you allocate that as the sign bit, you have no bits left for the actual value. It's true, as Steve Jessop points out in a comment, that you could probably represent -1 using two's complement, but I still think that an "integer" datatype that can only represent 0 and -1 is a rather weird thing.
To me, this datatype makes no (or, given Steve's comment, little) sense.
Use unsigned int small : 1; to make it unsigned; then you can store the values 0 and 1 unambiguously.
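A minimal sketch of the difference (whether a plain int bit-field is signed is itself implementation-defined; on GCC it is signed, and this typically prints "-1 1"):

#include <iostream>

struct S {
    int      s : 1;   // 1-bit signed bit-field: can only hold 0 and -1
    unsigned u : 1;   // 1-bit unsigned bit-field: holds 0 and 1
};

int main() {
    S x{};
    x.s = 1;          // implementation-defined: commonly wraps to -1
    x.u = 1;          // well-defined: stores 1
    std::cout << x.s << ' ' << x.u << '\n';
    return 0;
}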