Why class size increases when int64_t changes to int32_t

不知归路 2021-02-11 19:31

In my first example I have two bitfields using int64_t. When I compile and get the size of the class, I get 8.

class Test
{
    int64_t first : 40;
    int64_t second : 24;
};

In my second example the only change is the type of second, from int64_t to int32_t, and the size of the class becomes 16.

class Test
{
    int64_t first : 40;
    int32_t second : 24;
};

Why does the class get bigger when one of the bitfields uses a smaller type?

4 Answers
  •  名媛妹妹
    2021-02-11 20:18

    To add to what others have already said:

    If you want to examine it, you can use a compiler option or external program to output the struct layout.

    Consider this file:

    // test.cpp
    #include <cstdint>
    
    class Test_1 {
        int64_t first  : 40;
        int64_t second : 24;
    };
    
    class Test_2 {
        int64_t first  : 40;
        int32_t second : 24;
    };
    
    // Dummy instances to force Clang to output layout.
    Test_1 t1;
    Test_2 t2;
    

    If we pass a layout-dumping option, such as Visual Studio's /d1reportSingleClassLayoutX (where X is all or part of the name of the class or struct to report) or Clang++'s -Xclang -fdump-record-layouts (where -Xclang passes -fdump-record-layouts straight through to the Clang frontend, which is the part that understands it, rather than to the compiler driver), we can dump the memory layouts of Test_1 and Test_2 to standard output. [Unfortunately, I'm not sure how to do this directly with GCC.]

    If we do so, the compiler will output the following layouts:

    • Visual Studio:
    cl /c /d1reportSingleClassLayoutTest test.cpp
    
    // Output:
    test.cpp
    class Test_1    size(8):
        +---
     0. | first (bitstart=0,nbits=40)
     0. | second (bitstart=40,nbits=24)
        +---
    
    
    
    class Test_2    size(16):
        +---
     0. | first (bitstart=0,nbits=40)
     8. | second (bitstart=0,nbits=24)
        | <alignment member> (size=4)
        +---
    
    • Clang:
    clang++ -c -std=c++11 -Xclang -fdump-record-layouts test.cpp
    
    // Output:
    *** Dumping AST Record Layout
       0 | class Test_1
       0 |   int64_t first
       5 |   int64_t second
         | [sizeof=8, dsize=8, align=8
         |  nvsize=8, nvalign=8]
    
    *** Dumping IRgen Record Layout
    Record: CXXRecordDecl 0x344dfa8  line:3:7 referenced class Test_1 definition
    |-CXXRecordDecl 0x344e0c0  col:7 implicit class Test_1
    |-FieldDecl 0x344e1a0  col:10 first 'int64_t':'long'
    | `-IntegerLiteral 0x344e170  'int' 40
    |-FieldDecl 0x344e218  col:10 second 'int64_t':'long'
    | `-IntegerLiteral 0x344e1e8  'int' 24
    |-CXXConstructorDecl 0x3490d88  col:7 implicit used Test_1 'void (void) noexcept' inline
    | `-CompoundStmt 0x34912b0 
    |-CXXConstructorDecl 0x3490ee8  col:7 implicit constexpr Test_1 'void (const class Test_1 &)' inline noexcept-unevaluated 0x3490ee8
    | `-ParmVarDecl 0x3491030  col:7 'const class Test_1 &'
    `-CXXConstructorDecl 0x34910c8  col:7 implicit constexpr Test_1 'void (class Test_1 &&)' inline noexcept-unevaluated 0x34910c8
      `-ParmVarDecl 0x3491210  col:7 'class Test_1 &&'
    
    Layout: <CGRecordLayout
      ...
    ]>
    
    *** Dumping AST Record Layout
       0 | class Test_2
       0 |   int64_t first
       5 |   int32_t second
         | [sizeof=8, dsize=8, align=8
         |  nvsize=8, nvalign=8]
    
    *** Dumping IRgen Record Layout
    Record: CXXRecordDecl 0x344e260  line:8:7 referenced class Test_2 definition
    |-CXXRecordDecl 0x344e370  col:7 implicit class Test_2
    |-FieldDecl 0x3490bd0  col:10 first 'int64_t':'long'
    | `-IntegerLiteral 0x344e400  'int' 40
    |-FieldDecl 0x3490c70  col:10 second 'int32_t':'int'
    | `-IntegerLiteral 0x3490c40  'int' 24
    |-CXXConstructorDecl 0x3491438  col:7 implicit used Test_2 'void (void) noexcept' inline
    | `-CompoundStmt 0x34918f8 
    |-CXXConstructorDecl 0x3491568  col:7 implicit constexpr Test_2 'void (const class Test_2 &)' inline noexcept-unevaluated 0x3491568
    | `-ParmVarDecl 0x34916b0  col:7 'const class Test_2 &'
    `-CXXConstructorDecl 0x3491748  col:7 implicit constexpr Test_2 'void (class Test_2 &&)' inline noexcept-unevaluated 0x3491748
      `-ParmVarDecl 0x3491890  col:7 'class Test_2 &&'
    
    Layout: <CGRecordLayout
      ...
    ]>
    

    Note that the version of Clang I used to generate this output (the one used by Rextester) appears to pack both bitfields into a single 64-bit allocation unit even though their declared types differ, which is why it reports sizeof(Test_2) as 8 rather than 16, and I'm unsure how to disable this behaviour.
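
    If you just want to confirm the sizes without any special compiler flags, a minimal sketch like the one below (my own file, size_check.cpp, not part of the original question) prints them directly. Built with MSVC it should print 8 and 16, matching the layouts above, while Clang (and, as far as I know, GCC) packs both bitfields into the same 64-bit allocation unit and prints 8 for both.

    // size_check.cpp -- quick sizeof check; class definitions copied from test.cpp above.
    #include <cstdint>
    #include <iostream>
    
    class Test_1 {
        int64_t first  : 40;
        int64_t second : 24;  // same declared type: shares first's 64-bit storage unit
    };
    
    class Test_2 {
        int64_t first  : 40;
        int32_t second : 24;  // smaller declared type: MSVC starts a new storage unit here
    };
    
    int main() {
        // Expected: 8 and 16 with MSVC; 8 and 8 with Clang (Itanium C++ ABI layout).
        std::cout << "sizeof(Test_1) = " << sizeof(Test_1) << "\n";
        std::cout << "sizeof(Test_2) = " << sizeof(Test_2) << "\n";
        return 0;
    }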
