In C#, why is “int” an alias for System.Int32?

春和景丽 · 2021-01-01 09:11

Since C# supports Int8, Int16, Int32 and Int64, why did the designers of the language choose to define int as an alias for Int32?

2 Answers
  •  小蘑菇 (OP)
     2021-01-01 09:37

    Many programmers tend to write code for the platform they use, and that includes assumptions about the size of a type. There are many C programs around that will fail if the size of an int is changed to 16 or 64 bits, because they were written under the assumption that an int is 32 bits. C# avoids that problem by simply defining int that way. If you define int as varying with the platform, you buy back into that same problem. Although you could argue that it's the programmer's fault for making wrong assumptions, it makes the language a bit more robust (IMO). And on desktop platforms a 32-bit int is probably the most common occurrence. Besides, it makes porting native C code to C# a bit easier.
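    You can verify the alias and the fixed size directly; a minimal sketch (assuming a console app, e.g. under `dotnet run`):

    ```csharp
    using System;

    class Program
    {
        static void Main()
        {
            // "int" is literally the same type as System.Int32, not a lookalike.
            Console.WriteLine(typeof(int) == typeof(System.Int32)); // True

            // sizeof(int) is a compile-time constant: 4 bytes on every platform,
            // whether the process runs as 32-bit or 64-bit.
            Console.WriteLine(sizeof(int)); // 4
        }
    }
    ```

    This is exactly the contrast with C, where `sizeof(int)` is implementation-defined and has historically been 2 or 4 depending on the compiler and platform.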

    Edit: I think you write code that makes (implicit) assumptions about the size of a type more often than you realize. Basically anything that involves serialization (.NET Remoting, WCF, serializing data to disk, etc.) will get you into trouble if you allow variable sizes for int, unless the programmer takes care of it by using a specifically sized type like Int32. Then you end up with "we'll always use Int32 anyway, just in case," and you have gained nothing.
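    To illustrate the serialization point, a small sketch (assuming `BinaryWriter`, though the same applies to any binary format): because int is always 32 bits, writing one always produces exactly 4 bytes, so the on-disk layout cannot silently change between platforms.

    ```csharp
    using System;
    using System.IO;
    using System.Text;

    class Program
    {
        static void Main()
        {
            using var ms = new MemoryStream();
            using (var writer = new BinaryWriter(ms, Encoding.UTF8, leaveOpen: true))
            {
                // BinaryWriter.Write(int) emits exactly 4 little-endian bytes,
                // on 32-bit and 64-bit systems alike.
                writer.Write(123456789);
            }
            Console.WriteLine(ms.Length); // 4
        }
    }
    ```

    If int were platform-sized, every such format would need an explicit width annotation to stay portable, which is why C code doing binary I/O typically reaches for `int32_t` instead of plain `int`.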
