Why isn't there int128_t?

孤独总比滥情好 2020-12-01 15:34

A number of compilers provide 128-bit integer types, but none of the ones I've used provide the typedef int128_t. Why?

As far as I recall, the standard …

1 Answer
  • 2020-12-01 16:22

    I'll refer to the C standard; I think the C++ standard inherits the rules for <stdint.h> / <cstdint> from C.

    I know that gcc implements 128-bit signed and unsigned integers, with the names __int128 and unsigned __int128 (__int128 is an implementation-defined keyword) on some platforms.

    Even for an implementation that provides a standard 128-bit type, the standard does not require int128_t or uint128_t to be defined. Quoting section 7.20.1.1 of the N1570 draft of the C standard:

    These types are optional. However, if an implementation provides integer types with widths of 8, 16, 32, or 64 bits, no padding bits, and (for the signed types) that have a two’s complement representation, it shall define the corresponding typedef names.

    C permits implementations to define extended integer types whose names are implementation-defined keywords. gcc's __int128 and unsigned __int128 are very similar to extended integer types as defined by the standard -- but gcc doesn't treat them that way. Instead, it treats them as a language extension.

    In particular, if __int128 and unsigned __int128 were extended integer types, then gcc would be required to define intmax_t and uintmax_t as those types (or as some types at least 128 bits wide). It does not do so; instead, intmax_t and uintmax_t are only 64 bits.

    This is, in my opinion, unfortunate, but I don't believe it makes gcc non-conforming. No portable program can depend on the existence of __int128, or on any integer type wider than 64 bits.
