Should I use 'long' instead of 'int' on 64-bit systems in languages with fixed type sizes (like Java, C#)?

Submitted by 耗尽温柔 on 2019-12-18 07:37:48

Question


In 10, or even 5, years there will be no [Edit 2: server or desktop] 32-bit CPUs.

So, are there any advantages to using int (32-bit) over long (64-bit)?
And are there any disadvantages to using int?


Edit:

  1. By 10 or 5 years, I meant in the vast majority of places where those languages are used.

  2. I meant which type to use by default. These days I won't even bother to think about whether I should use short as a loop counter; I just write for (int i .... In the same way, long counters already win.

  3. Registers are already 64-bit, so there is no longer any gain from 32-bit types. And I think there is some loss with 8-bit types (you have to operate on more bits than you're using).


Answer 1:


If you're on a 64-bit processor, and you've compiled your code for 64-bit, then at least some of the time, long is likely to be more efficient because it matches the register size. But whether that will really impact your program much is debatable. Also, if you're using long all over the place, you're generally going to use more memory - both on the stack and on the heap - which could negatively impact performance. There are too many variables to know for sure how well your program will perform using long by default instead of int. There are reasons why it could be faster and reasons why it could be slower. It could be a total wash.

The typical thing to do is to just use int if you don't care about the size of the integer. If you need a 64-bit integer, then you use long. If you're trying to use less memory and int is far more than you need, then you use byte or short.
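
To make that convention concrete, here is a minimal Java sketch of the default described above; the names and values are purely illustrative, not taken from the original answer: int unless the range or memory pressure says otherwise.

    // Illustrative Java sketch: int by default, long only when the value range
    // demands 64 bits, smaller types when memory for bulk data matters.
    public class TypeChoice {
        int itemCount;                                      // default: fits comfortably in 32 bits
        long createdAtMillis = System.currentTimeMillis();  // needs 64 bits (millisecond timestamp)
        short[] audioSamples = new short[44_100];           // bulk data where the smaller type saves memory

        public static void main(String[] args) {
            // Loop counters stay int unless the count can exceed Integer.MAX_VALUE.
            for (int i = 0; i < 3; i++) {
                System.out.println("iteration " + i);
            }
        }
    }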

x86_64 CPUs are going to be designed to be efficient at processing 32-bit programs and so it's not like using int is going to seriously degrade performance. Some things will be faster due to better alignment when you use 64-bit integers on a 64-bit CPU, but other things will be slower due to the increased memory requirements. And there are probably a variety of other factors involved which could definitely affect performance in either direction.

If you really want to know which is going to do better for your particular application in your particular environment, you're going to need to profile it. This is not a case where there is a clear advantage of one over the other.
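
If you do want to profile it yourself, a very rough sketch of the kind of comparison involved might look like the following toy Java harness. This is not a reliable benchmark: JIT warm-up, dead-code elimination, and OS noise all distort toy timings, so a tool like JMH is the better choice for real measurements.

    // Toy comparison of an int loop counter vs. a long loop counter.
    // Treat the numbers with suspicion; this only shows the shape of a measurement.
    public class CounterBench {
        static long sumWithInt(int n) {
            long sum = 0;
            for (int i = 0; i < n; i++) sum += i;
            return sum;
        }

        static long sumWithLong(long n) {
            long sum = 0;
            for (long i = 0; i < n; i++) sum += i;
            return sum;
        }

        public static void main(String[] args) {
            int n = 100_000_000;
            long t0 = System.nanoTime();
            long a = sumWithInt(n);
            long t1 = System.nanoTime();
            long b = sumWithLong(n);
            long t2 = System.nanoTime();
            System.out.printf("int counter:  %d ms (sum=%d)%n", (t1 - t0) / 1_000_000, a);
            System.out.printf("long counter: %d ms (sum=%d)%n", (t2 - t1) / 1_000_000, b);
        }
    }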

Personally, I would advise that you follow the typical route of using int when you don't care about the size of the integer and to use the other types when you do.




Answer 2:


A 32-bit integer is still a completely valid data type, just as we still have 16-bit integers and bytes around. We didn't throw out 16-bit or 8-bit numbers when we moved to 32-bit processors. A 32-bit number is half the size of a 64-bit integer in terms of storage. If I were modeling a database and I knew the value couldn't go higher than what a 32-bit integer can store, I would use a 32-bit integer for storage purposes. I'd do the same thing with a 16-bit number. A 64-bit number takes more space in memory as well, albeit nothing significant given that today's personal laptops can ship with 8 GB of memory.
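
To put rough numbers on the storage point, here is a small Java illustration (the array size is arbitrary); per element, long simply costs twice as much as int.

    // Per-element storage cost of int vs. long in Java (fixed on every platform).
    public class StorageSizes {
        public static void main(String[] args) {
            System.out.println("int:  " + Integer.BYTES + " bytes, max " + Integer.MAX_VALUE);
            System.out.println("long: " + Long.BYTES + " bytes, max " + Long.MAX_VALUE);

            int elements = 10_000_000;
            // Approximate payload sizes, ignoring array headers and alignment.
            System.out.println("int[10M]  ~ " + (long) elements * Integer.BYTES / (1024 * 1024) + " MiB");
            System.out.println("long[10M] ~ " + (long) elements * Long.BYTES / (1024 * 1024) + " MiB");
        }
    }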

There is no disadvantage to int other than that it's a smaller data type. It's like asking, "Where should I store my sugar? In a sugar bowl, or a silo?" Well, that depends entirely on how much sugar you have.

Processor architecture shouldn't have much to do with what size data type you use. Use what fits. When we have 512-bit processors, we'll still have bytes.

EDIT:

To address some comments / edits:

  1. I'm not sure about "There will be no 32-bit desktop CPUs". ARM is currently 32-bit and has, for now, declared little interest in 64-bit. That doesn't fit too well with "desktop" in your description, but I also think that in 5-10 years the landscape of devices we write software for will change drastically as well. Tablets can't be ignored; people will want C# and Java apps to run on them, considering Microsoft has officially ported Windows 8 to ARM.

  2. If you want to start using long, go ahead; there is no reason not to. If we are only looking at the CPU (ignoring storage size) and assuming an x86-64 architecture, then it doesn't make much difference.

  3. Assuming that we are sticking with the x86 architecture, that's true as well. You may end up with a slightly larger stack, depending on whatever framework you are using.




Answer 3:


Sorry for the C++ answer.

If the size of the type matters, use a sized type:

  • uint8_t
  • int32_t
  • int64_t
  • etc

If the size doesn't matter, use an expressive type:

  • size_t
  • ptrdiff_t
  • ssize_t
  • etc

I know that D has sized types and size_t. I'm not sure about Java or C#.
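
For what it's worth, Java's primitive integer types are fixed-size on every platform, so the "sized type" role is played by the primitives themselves; a brief illustration:

    // Java's integer primitives have fixed widths regardless of the CPU:
    public class JavaSizedTypes {
        byte  b;  // always 8 bits
        short s;  // always 16 bits
        int   i;  // always 32 bits
        long  l;  // always 64 bits
        // There is no built-in size_t/ptrdiff_t equivalent; array lengths and indexes are int.
    }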



Source: https://stackoverflow.com/questions/6825023/should-i-use-long-instead-of-int-on-64-bits-in-langs-with-fixed-type-size-l
