int128

Bug with __int128_t in Clang?

左心房为你撑大大i submitted on 2019-12-01 03:57:49
This little program compiles with both GCC and Clang, but gives different results:

    #include <stdio.h>

    int main() {
        __int128_t test = 10;
        while (test > 0) {
            int myTest = (int)test;
            printf("? %d\n", myTest);
            test--;
        }
    }

With GCC this counts from 10 down to 1, the intended behaviour, while with Clang it keeps counting into negative numbers. With Clang, if I replace test-- with test -= 1, it gives the expected behaviour as well. __int128_t is a GCC extension, so the above results only apply to non-standard C; maybe __int128_t is "use at your own risk" in Clang. Is this a bug in Clang, or did I make a mistake somewhere?

Is there hardware support for 128bit integers in modern processors?

六眼飞鱼酱① submitted on 2019-11-29 09:24:04
Do we still need to emulate 128-bit integers in software, or is there hardware support for them in the average desktop processor these days?

Z boson: The x86-64 instruction set can do 64-bit * 64-bit to 128-bit with a single instruction (mul for unsigned, imul for signed, each taking one operand), so I would argue that to some degree the x86 instruction set does include some support for 128-bit integers. If your instruction set has no instruction for 64-bit * 64-bit to 128-bit, then you need several instructions to emulate it. This is why 128-bit * 128-bit to lower 128-bit operations can be done with relatively few instructions.

How to print __int128 in g++?

一个人想着一个人 submitted on 2019-11-28 21:05:42
I am using the GCC built-in type __int128 for a few things in my C++ program; nothing really significant, at least not enough to justify using a BigInt library only for that, yet enough to prevent removing it entirely. My problem comes when I reach the printing parts of my classes; here is a minimal example:

    #include <iostream>

    int main() {
        __int128 t = 1234567890;
        std::cout << t << std::endl;
        return t;
    }

Commenting out the std::cout line makes this code compile nicely with g++, but having it causes the following error message: int128.c: In function ‘int main()’: int128.c:7:13:

Is __int128_t arithmetic emulated by GCC, even with SSE?

廉价感情. submitted on 2019-11-28 12:13:29
I've heard that 128-bit integer data types like the __int128_t provided by GCC are emulated and therefore slow. However, I understand that the various SSE instruction sets (SSE, SSE2, ..., AVX) introduced at least some instructions for 128-bit registers. I don't know very much about SSE or assembly / machine code, so I was wondering if someone could explain whether arithmetic with __int128_t is emulated or not by modern versions of GCC. The reason I'm asking is that I'm wondering whether it makes sense to expect big differences in __int128_t performance between different versions of GCC.

Int128 in .Net?

非 Y 不嫁゛ submitted on 2019-11-27 20:21:08
I need to do some large integer math. Are there any classes or structs out there that represent a 128-bit integer and implement all of the usual operators? BTW, I realize that decimal can be used to represent a 96-bit int.

Larsenal: It's here in System.Numerics. "The BigInteger type is an immutable type that represents an arbitrarily large integer whose value in theory has no upper or lower bounds."

    var i = System.Numerics.BigInteger.Parse("10000000000000000000000000000000");

While BigInteger is the best solution for most applications, if you have performance-critical numerical computations, a dedicated fixed-size 128-bit type may still be worth considering.

How to enable __int128 on Visual Studio?

|▌冷眼眸甩不掉的悲伤 submitted on 2019-11-27 04:49:12
When I type __int128 in a C++ project in Visual Studio, the editor changes the color of __int128 to blue (like a keyword). But when I compile the source, the following error appears:

    error C4235: nonstandard extension used : '__int128' keyword not supported on this architecture

How can I enable __int128 in Visual Studio?

MSDN doesn't list it as being available, and this recent response agrees, so officially, no, there is no type called __int128 and it cannot be enabled. Additionally, never trust the syntax highlighter; it is user-editable, and thus likely to contain either bogus or 'future' types.
