Why does Apple define UInt32 as long or int depending on platform?


Question


I noticed that UInt32 is defined differently based on the platform in MacTypes.h

#if __LP64__
typedef unsigned int                    UInt32;
typedef signed int                      SInt32;
#else
typedef unsigned long                   UInt32;
typedef signed long                     SInt32;
#endif

If unsigned int is always 32 bits on both 32- and 64-bit machines, why do they bother conditionally checking the platform?
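
For reference, a minimal compile-time check (my addition, not part of MacTypes.h) showing why either branch yields a 32-bit type on Apple's platforms:

// A sketch: "int" is 32 bits under LP64, and "long" is 32 bits under
// the older ILP32 ABI, so UInt32 ends up 32 bits wide either way.
#if __LP64__
static_assert(sizeof(unsigned int) == 4, "int is 32 bits under LP64");
#else
static_assert(sizeof(unsigned long) == 4, "long is 32 bits under ILP32");
#endif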


Answer 1:


My guess is that it is some old code which initially read:

typedef unsigned long                   UInt32;
typedef signed long                     SInt32;

and a developer later added LP64 support by inserting the lines:

#if __LP64__
typedef unsigned int                    UInt32;
typedef signed int                      SInt32;
#else

above the existing typedefs, so as not to impact the previous platforms.

Of course, it does not make much sense to do that.




Answer 2:


The type UInt32 existed before 64-bit support. It has historically been defined as unsigned long. It could have been unsigned int. I don't know why long was chosen over int at that time. The choice would have been largely arbitrary.

Once that choice was made, though, it can't be changed, even though unsigned int would work for both 32- and 64-bit.

The big thing that would break if it were changed would be C++. In C++, the types of arguments are baked into the symbol names in the object files and libraries. long and int are different types, so void foo(long); and void foo(int); are separate functions with separate symbol names. If UInt32 were to change in 32-bit, then you wouldn't be able to link against libraries that were built with the old definition. If the libraries were rebuilt with the new definition, then old compiled code would not be able to load them.
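
A rough sketch of that linkage issue (hypothetical names; the mangled symbols shown follow the Itanium C++ ABI used by Apple's toolchains):

// The parameter type is encoded in the symbol name, so changing a typedef
// from "unsigned long" to "unsigned int" changes which symbol callers link against.
typedef unsigned long UInt32_old;   // historical definition
typedef unsigned int  UInt32_new;   // hypothetical new definition

void process(UInt32_old x) {}       // mangles to something like _Z7processm
void process(UInt32_new x) {}       // mangles to something like _Z7processj

// A library built with the old typedef exports _Z7processm; code compiled
// against the new typedef would look for _Z7processj and fail to link.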




Answer 3:


A long is guaranteed to be at least 32 bits, whereas an int may be only 16 bits (as on a 16-bit processor). This is discussed at http://en.wikipedia.org/wiki/C_data_types, among other places.




Answer 4:


The actual size of the integer types varies by implementation. The standard only requires size relations between the data types and a minimum size for each. Generally, sizeof(int) reflects the "natural"/native word size of the machine: int might be 32 or 64 bits on a 64-bit machine, is typically 32 bits on a 32-bit architecture, and 16 bits on a 16-bit machine. However, the standard guarantees that int is always at least 16 bits.
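
A short sketch of the guarantees the standard does give (my addition; the widths printed will vary by platform):

#include <climits>
#include <cstdio>

// The standard only promises minimum ranges: int is at least 16 bits,
// long is at least 32 bits. The actual widths are implementation-defined.
static_assert(sizeof(int)  * CHAR_BIT >= 16, "int is at least 16 bits");
static_assert(sizeof(long) * CHAR_BIT >= 32, "long is at least 32 bits");

int main() {
    std::printf("int:  %zu bits\n", sizeof(int)  * CHAR_BIT);
    std::printf("long: %zu bits\n", sizeof(long) * CHAR_BIT);
    return 0;
}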




Answer 5:


unsigned int is not always 32 bits wide; it depends on the data model:

  • In LP64 model, long and pointer are 64-bit types, int is a 32-bit type
  • In ILP64 model, int, long and pointer are 64-bit types

https://en.wikipedia.org/wiki/64-bit_computing#64-bit_data_models

In other models, int can have a different number of bits. The only restriction is that it must be at least 16 bits wide.
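
A quick way to see which model a given compiler targets (a sketch; the output differs by platform) is to print the three widths the models are named after:

#include <cstdio>

int main() {
    // ILP32: int, long and pointers are all 4 bytes (classic 32-bit).
    // LP64:  long and pointers are 8 bytes, int stays 4 (64-bit macOS/Linux).
    // LLP64: only long long and pointers are 8 bytes (64-bit Windows).
    // ILP64: int, long and pointers are all 8 bytes (rare).
    std::printf("int: %zu, long: %zu, pointer: %zu bytes\n",
                sizeof(int), sizeof(long), sizeof(void *));
    return 0;
}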



Source: https://stackoverflow.com/questions/23503088/why-does-apple-define-uint32-as-long-or-int-depending-on-platform
