Question
I want to declare an integer parameter based on its hexadecimal representation. What are the differences between:
INTEGER(kind=int32), PARAMETER :: a = Z'FFFFFFFF'
INTEGER(kind=int32), PARAMETER :: b = int(Z'FFFFFFFF', kind=int32)
INTEGER(kind=int32), PARAMETER :: c = transfer(Z'FFFFFFFF', 1_int32)
(And yes, I know that this is just -1.)
gfortran gives me an integer overflow error at compile time for a and b above (helpfully telling me that I can suppress it with -fno-range-check), but not for c.
I need to make it Fortran 2003 compliant, as this code might be compiled with different compilers elsewhere.
Answer 1:
The first and third statements are not valid Fortran. A boz literal constant can only appear in a limited number of contexts - the int intrinsic being one of those contexts.
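Those permitted contexts can be illustrated with a short, conforming sketch (Z'7FFFFFFF' is used here instead of Z'FFFFFFFF' so the value stays in range for a 32-bit integer under every revision of the standard):

```fortran
! Sketch of conforming boz usage: the INT intrinsic and a DATA statement
! are among the contexts where a boz literal constant may appear.
program boz_contexts
  use, intrinsic :: iso_fortran_env, only: int32
  implicit none
  integer(int32), parameter :: b = int(Z'7FFFFFFF', kind=int32)
  integer(int32) :: d
  data d / Z'7FFFFFFF' /   ! DATA statement: also a valid context
  print *, b, d
end program boz_contexts
```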
The middle statement sets the value of the named constant to a processor dependent value based on the sequence of bits specified by the boz-literal-constant. The value is processor dependent because the most significant bit in the resulting value is one.
Elaborating, using the Fortran 2008 rules (Fortran 2003 was different, as Vladimir notes):
- The boz literal constant specifies a sequence of 32 one (or on/.TRUE./whatever) bits.
- INTEGER(INT32) specifies an integer with a STORAGE_SIZE of 32 bits, which is presumably greater than or equal to the BIT_SIZE of an object of that type (storage bits and "value bits" may be different due to things like alignment requirements).
- If necessary, the INT intrinsic truncates the bit sequence to the relevant bit size.
- If the left most bit of that truncated sequence is zero, the value of the INT intrinsic is as given by something like
SUM([b(i) * 2**(i-1), i = 1, SIZE(b)])
where b is an array representing the bit sequence, with the rightmost bit stored in b(1).
- If the left most bit of that truncated sequence is one, as in the example, the standard says the result is processor dependent. That's to accommodate the typical practice of using the most significant bit in the internal representation of a value to represent the sign. With the very common two's complement representation of integers, you'll get a value of -1.
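To make the positional formula concrete, here is a small sketch that rebuilds the value bit by bit with BTEST and compares it to what INT produced. It uses Z'000000FF', whose leading bit is zero, so the result (255) is well defined:

```fortran
! Sketch: reconstructing INT's value from the bit sequence when the
! leading bit of the truncated sequence is zero.
program bit_value
  use, intrinsic :: iso_fortran_env, only: int32
  implicit none
  integer(int32), parameter :: v = int(Z'000000FF', kind=int32)
  integer(int32) :: s
  integer :: i
  s = 0
  do i = 0, bit_size(v) - 1
    ! btest(v, i) tests bit i (counting from the right, starting at 0),
    ! which corresponds to b(i+1) in the formula above.
    if (btest(v, i)) s = s + 2_int32**i
  end do
  print *, v, s   ! both print 255
end program bit_value
```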
Under Fortran 2003 the sequence of bits is interpreted as a positive number using the largest integer representation available on the processor. The resulting value will be out of the range of an INTEGER(INT32) object, making the code non-conforming.
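If the goal is simply an all-bits-one constant that stays within Fortran 2003, one portable option is NOT(0), which sets every bit without involving the boz rules at all (a sketch; the kind name i32 is my own, chosen via selected_int_kind because iso_fortran_env's int32 is a Fortran 2008 feature):

```fortran
program all_ones
  implicit none
  ! selected_int_kind is available in Fortran 95/2003.
  integer, parameter :: i32 = selected_int_kind(9)
  ! NOT(0) has every bit set; on two's-complement processors the value is -1.
  integer(i32), parameter :: allones = not(0_i32)
  print *, allones
end program all_ones
```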
Source: https://stackoverflow.com/questions/47192266/hexadecimal-constants