Should I use int or Int32

野性不改 2020-11-22 11:52

In C#, int and Int32 are the same thing, but I've read a number of times that int is preferred over Int32 with no reason given. Is there a reason, and should I care?

30 Answers
  • 2020-11-22 12:28

    int and Int32 are the same thing: int is simply an alias for Int32.
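
    As a quick illustration (a minimal sketch; the Program class and the console output are just for demonstration), the compiler treats the two identically:

    using System;

    class Program
    {
        static void Main()
        {
            int a = 42;        // language keyword
            Int32 b = 42;      // BCL struct name
            // Both spellings refer to the same runtime type, System.Int32
            Console.WriteLine(typeof(int) == typeof(Int32));   // True
            Console.WriteLine(a.GetType() == b.GetType());     // True
        }
    }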

  • 2020-11-22 12:28

    int is an alias for System.Int32, as defined in this table: Built-In Types Table (C# Reference)

  • 2020-11-22 12:29

    In my experience it's been a convention thing. I'm not aware of any technical reason to use int over Int32, but it's:

    1. Quicker to type.
    2. More familiar to the typical C# developer.
    3. A different color in the default Visual Studio syntax highlighting.

    I'm especially fond of that last one. :)

  • 2020-11-22 12:29

    It doesn't matter. int is the language keyword and Int32 is the actual system type.

    See also my answer here to a related question.
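
    One practical consequence of the keyword/type distinction (a minimal sketch, assuming no implicit usings are enabled; the class name is made up for illustration): the keyword is always available, while the type name has to be in scope.

    // Without a "using System;" directive the keyword still works,
    // but the type name must be fully qualified.
    class WithoutUsing
    {
        int a = 1;              // always compiles
        System.Int32 b = 2;     // compiles, fully qualified
        // Int32 c = 3;         // error CS0246 unless "using System;" is added
    }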

  • 2020-11-22 12:31

    Byte size for types is not too interesting when you only have to deal with a single language (and for code where you don't have to remind yourself about math overflows). The part that becomes interesting is when you bridge from one language to another, from C# to a COM object, etc., or when you're doing some bit-shifting or masking and you need to remind yourself (and your code-review co-workers) of the size of the data.

    In practice, I usually use Int32 just to remind myself what size it is, because I write managed C++ (to bridge to C#, for example) as well as unmanaged/native C++.

    As you probably know, long in C# is 64 bits, but in native C++ it typically ends up as 32 bits; likewise, char in C# is Unicode (16 bits) while in C++ it is 8 bits. But how do we know this? Because we've looked it up in the manual and it said so.

    With time and experience, you will become more type-conscious when you write code that bridges between C# and other languages (some readers here are thinking "why would you?"), but IMHO I believe it is a better practice because I cannot remember what I coded last week (and I don't have to specify in my API documentation that "this parameter is a 32-bit integer").

    In F# (although I've never used it), they define int, int32, and nativeint. The same question arises: "which one do I use?". As others have mentioned, in most cases it should not matter (it should be transparent). But I for one would choose int32 and uint32 just to remove the ambiguity.

    I guess it would just depend on what applications you are coding, who is using them, what coding practices you and your team follow, etc. to justify when to use Int32.

    Addendum: Incidentally, since I answered this question a few years ago, I've started using both F# and Rust. In F#, it's all about type inference, and when bridging/interop'ing between C# and F# the native types match, so there is no concern; I've rarely had to explicitly define types in F# (it's almost a sin not to use type inference). Rust has completely removed such ambiguities and you have to choose i32 vs u32; all in all, reducing ambiguity helps reduce bugs.
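
    As a sketch of the kind of bit-twiddling where spelling out Int32 can help readers (the type and method names below are made up for illustration, not taken from any real API):

    using System;

    static class PackedColor
    {
        // Explicit Int32 makes the intended width obvious at the call site,
        // e.g. when the value crosses an interop boundary.
        public static Int32 Pack(byte r, byte g, byte b, byte a)
            => (r << 24) | (g << 16) | (b << 8) | a;

        public static byte Red(Int32 packed)
            => (byte)((packed >> 24) & 0xFF);
    }

    class Demo
    {
        static void Main()
        {
            Int32 packed = PackedColor.Pack(0x12, 0x34, 0x56, 0xFF);
            Console.WriteLine(packed.ToString("X8"));    // 123456FF
            Console.WriteLine(PackedColor.Red(packed));  // 18 (0x12)
        }
    }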

  • 2020-11-22 12:31

    According to the Immediate Window in Visual Studio 2012, Int32 is int and Int64 is long. Here is the output:

    sizeof(int)
    4
    sizeof(Int32)
    4
    sizeof(Int64)
    8
    Int32
    int
        base {System.ValueType}: System.ValueType
        MaxValue: 2147483647
        MinValue: -2147483648
    Int64
    long
        base {System.ValueType}: System.ValueType
        MaxValue: 9223372036854775807
        MinValue: -9223372036854775808
    int
    int
        base {System.ValueType}: System.ValueType
        MaxValue: 2147483647
        MinValue: -2147483648
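
    The same check can be made outside the debugger (a minimal sketch; sizeof on these built-in types is allowed in safe code, and the Sizes class name is just for illustration):

    using System;

    class Sizes
    {
        static void Main()
        {
            Console.WriteLine(sizeof(int));     // 4
            Console.WriteLine(sizeof(Int32));   // 4
            Console.WriteLine(sizeof(Int64));   // 8
            Console.WriteLine(Int32.MaxValue);  // 2147483647
            Console.WriteLine(Int64.MaxValue);  // 9223372036854775807
        }
    }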
    