Does the size of an int depend on the compiler and/or processor?

借酒劲吻你 2020-11-22 11:58

Would the size of an integer depend upon the compiler, OS and processor?

10 answers
  • 2020-11-22 12:32

    Data type sizes depend on the processor, because the compiler wants to make the next byte easily accessible to the CPU. For example, if the processor is 32-bit, the compiler will usually not choose 2 bytes for int (it will typically choose 4 bytes), because accessing the other 2 bytes of a misaligned 4-byte value would take an additional CPU cycle, which is wasted work. If the compiler chooses a 4-byte int, the CPU can access the full 4 bytes in one shot, which speeds up your application.

    Thanks

  • 2020-11-22 12:33

    The answer to this question depends on how far from practical considerations we are willing to get.

    Ultimately, in theory, everything in C and C++ depends on the compiler, and only on the compiler. The hardware/OS is of no importance at all. The compiler is free to implement a hardware abstraction layer of any thickness and emulate absolutely anything. There's nothing to prevent a C or C++ implementation from making the int type any size and giving it any representation, as long as it is large enough to meet the minimum requirements specified in the language standard. Practical examples of this level of abstraction are readily available, e.g. programming languages based on a "virtual machine" platform, like Java.

    However, C and C++ are intended to be highly efficient languages. In order to achieve maximum efficiency, a C or C++ implementation has to take into account certain considerations derived from the underlying hardware. For that reason it makes a lot of sense to ensure that each basic type is based on some representation directly (or almost directly) supported by the hardware. In that sense, the sizes of the basic types do depend on the hardware.

    In other words, a specific C or C++ implementation for a 64-bit hardware/OS platform is absolutely free to implement int as a 71-bit 1's-complement signed integral type that occupies 128 bits of memory, using the other 57 bits as padding bits that are always required to store the birthdate of the compiler author's girlfriend. This implementation will even have certain practical value: it can be used to perform run-time tests of the portability of C/C++ programs. But that's where the practical usefulness of that implementation would end. Don't expect to see something like that in a "normal" C/C++ compiler.
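
    To make those "minimum requirements" concrete: the only hard guarantee the standard gives is that int can represent at least the range [-32767, 32767], i.e. that it is at least 16 bits wide. Here is a minimal sketch (assuming a C11 compiler, for _Static_assert) that checks exactly those guarantees on whatever implementation compiles it:

        #include <limits.h>

        /* The standard only requires INT_MAX >= 32767 and INT_MIN <= -32767,
           which forces int to be at least 16 bits wide. Everything beyond
           that is implementation-defined. */
        _Static_assert(INT_MAX >= 32767, "int must cover at least [-32767, 32767]");
        _Static_assert(sizeof(int) * CHAR_BIT >= 16, "int must be at least 16 bits");

        int main(void) { return 0; }

    On any conforming implementation these assertions always pass, which is the point: they are the only size properties of int you can portably rely on.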

  • 2020-11-22 12:37

    http://www.agner.org/optimize/calling_conventions.pdf

    "3 Data representation" contains good overview of what compilers do with integral types.

  • 2020-11-22 12:44

    Yes, it would. Or did they mean "which would it depend on: the compiler or the processor"? In that case the answer is basically "both". Normally, int won't be bigger than a processor register (unless the register is smaller than 16 bits), but it could be smaller (e.g. with a 32-bit compiler running on a 64-bit processor). Generally, however, you'll need a 64-bit processor to run code that uses a 64-bit int.

  • 2020-11-22 12:44

    Yes. I found that the size of int in Turbo C was 2 bytes, whereas in the MSVC compiler it was 4 bytes.

    Basically, the size of int tracks the size of the processor's registers.
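
    A quick way to check this yourself is a small program (a sketch assuming a C99-or-later compiler, for the %zu format specifier) that prints what your implementation chose:

        #include <stdio.h>
        #include <limits.h>

        int main(void)
        {
            /* Typical results: 2 bytes under 16-bit compilers such as Turbo C,
               4 bytes under MSVC and most 32- and 64-bit GCC/Clang targets. */
            printf("sizeof(int) = %zu bytes (%zu bits)\n",
                   sizeof(int), sizeof(int) * CHAR_BIT);
            printf("INT_MAX     = %d\n", INT_MAX);
            return 0;
        }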

  • 2020-11-22 12:51

    Based on some recent research I did while studying for firmware interviews:

    The most significant impact of the processor's bit architecture (8-bit, 16-bit, 32-bit, 64-bit) is how it shapes the most efficient way to store information, so that variables can be operated on in the minimum number of cycles.

    The bit size of your processor tells you the natural word length the CPU can handle in one cycle. A 32-bit machine needs 2 cycles to handle a 64-bit double, even when it is properly aligned in memory. Most personal computers were, and many still are, 32-bit, which is the most likely reason for the C compiler's typical affinity for 32-bit integers, with options for larger floating-point numbers and long long ints.

    Clearly you can compute with larger variable sizes, so in that sense the CPU's bit architecture determines how larger and smaller variables must be stored to achieve the best possible processing efficiency. But it is in no way a limiting factor on the byte sizes of ints or chars; those are defined by compilers and dictated by convention or standards.

    I found this site very helpful, http://www.geeksforgeeks.org/archives/9705, for explaining how the CPU's natural word length affects how it will choose to store and handle larger and smaller variable types, especially with regard to bit packing into structs. You have to be very cognizant of how you choose to order your variables, because larger variables need to be aligned in memory so that they span the fewest possible CPU words. Ordering your variables poorly will add a lot of potentially unnecessary buffer/empty space to things like structs, as the sketch below shows.
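
    Here is a small sketch of that padding effect. The sizes in the comments are assumptions for a typical implementation with 1-byte char, 4-byte int, and natural alignment; the standard leaves the exact layout implementation-defined:

        #include <stdio.h>

        struct poorly_ordered {
            char a;  /* 1 byte + 3 padding bytes so b is 4-byte aligned */
            int  b;  /* 4 bytes */
            char c;  /* 1 byte + 3 trailing padding bytes */
        };           /* commonly 12 bytes */

        struct well_ordered {
            int  b;  /* 4 bytes */
            char a;  /* 1 byte */
            char c;  /* 1 byte + 2 trailing padding bytes */
        };           /* commonly 8 bytes */

        int main(void)
        {
            printf("poorly_ordered: %zu bytes\n", sizeof(struct poorly_ordered));
            printf("well_ordered:   %zu bytes\n", sizeof(struct well_ordered));
            return 0;
        }

    Same members, different order: the second struct typically wastes less space because no member ever straddles a word boundary.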
