How does one specify the maximum value representable for an unsigned
integer type?
I would like to know how to initialize min
in the loop below
https://groups.google.com/group/golang-nuts/msg/71c307e4d73024ce?pli=1
The germane part:
Since integer types use two's complement arithmetic, you can infer the min/max constant values for int and uint. For example:

const MaxUint = ^uint(0)
const MinUint = 0
const MaxInt = int(MaxUint >> 1)
const MinInt = -MaxInt - 1
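For illustration (my own sketch, not from the linked thread), here is a runnable version of those constants; the values in the comments assume a 64-bit platform:

package main

import "fmt"

// Derived as in the linked thread: ^uint(0) sets every bit, and shifting
// right by one clears the high bit for the signed maximum.
const (
	MaxUint = ^uint(0)
	MinUint = 0
	MaxInt  = int(MaxUint >> 1)
	MinInt  = -MaxInt - 1
)

func main() {
	fmt.Println(MaxUint) // 18446744073709551615 on a 64-bit platform
	fmt.Println(MinUint) // 0
	fmt.Println(MaxInt)  // 9223372036854775807
	fmt.Println(MinInt)  // -9223372036854775808
}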
As per @CarelZA's comment:
uint8 : 0 to 255
uint16 : 0 to 65535
uint32 : 0 to 4294967295
uint64 : 0 to 18446744073709551615
int8 : -128 to 127
int16 : -32768 to 32767
int32 : -2147483648 to 2147483647
int64 : -9223372036854775808 to 9223372036854775807
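As a quick cross-check (my addition, not part of the original answer), these ranges match the sized constants exported by the standard math package:

package main

import (
	"fmt"
	"math"
)

func main() {
	// Each pair should match the table above.
	fmt.Println(math.MinInt8, math.MaxInt8)                 // -128 127
	fmt.Println(math.MinInt16, math.MaxInt16)               // -32768 32767
	fmt.Println(math.MinInt32, math.MaxInt32)               // -2147483648 2147483647
	fmt.Println(int64(math.MinInt64), int64(math.MaxInt64)) // -9223372036854775808 9223372036854775807
	fmt.Println(uint8(math.MaxUint8))                       // 255
	fmt.Println(uint16(math.MaxUint16))                     // 65535
	fmt.Println(uint32(math.MaxUint32))                     // 4294967295
	fmt.Println(uint64(math.MaxUint64))                     // 18446744073709551615
}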
I originally used the code from the discussion thread that @nmichaels used in his answer. I now use a slightly different calculation, and I've included some comments in case anyone else has the same query as @Arijoon:
const (
	MinUint uint = 0 // binary: all zeroes

	// Perform a bitwise NOT to change every bit from 0 to 1
	MaxUint = ^MinUint // binary: all ones

	// Shift the binary number to the right (i.e. divide by two)
	// to change the high bit to 0
	MaxInt = int(MaxUint >> 1) // binary: all ones except high bit

	// Perform another bitwise NOT to change the high bit to 1 and
	// all other bits to 0
	MinInt = ^MaxInt // binary: all zeroes except high bit
)
The last two steps work because of how positive and negative numbers are represented in two's complement arithmetic. The Go language specification section on Numeric types refers the reader to the relevant Wikipedia article. I haven't read that, but I did learn about two's complement from the book Code by Charles Petzold, which is a very accessible intro to the fundamentals of computers and coding.
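To make that concrete, here is a small sketch of my own using int8, where the bit patterns are short enough to print:

package main

import "fmt"

func main() {
	var maxInt8 int8 = 127 // binary: 01111111
	minInt8 := ^maxInt8    // bitwise NOT flips every bit: 10000000 == -128

	fmt.Printf("%d %08b\n", maxInt8, maxInt8)        // 127 01111111
	fmt.Printf("%d %08b\n", minInt8, uint8(minInt8)) // -128 10000000
}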
I put the constants above (minus most of the comments) into a little integer math package.
Quick summary:
import "math/bits"
const (
	MaxUint uint = (1 << bits.UintSize) - 1
	MaxInt  int  = (1 << bits.UintSize) / 2 - 1
	MinInt  int  = (1 << bits.UintSize) / -2
)
Background:
As I presume you know, the uint type is the same size as either uint32 or uint64, depending on the platform you're on. Usually, one would use the unsized version only when there is no risk of coming close to the maximum value, because the version without a size specification can use the "native" type for the platform, which tends to be faster.

Note that it tends to be "faster" because using a non-native type sometimes requires additional math and bounds-checking to be performed by the processor, in order to emulate the larger or smaller integer. With that in mind, be aware that the performance of the processor (or of the compiler's optimised code) is almost always going to be better than that of your own bounds-checking code, so if there is any risk of it coming into play, it may make sense to simply use the fixed-size version and let the optimised emulation handle any fallout.

With that having been said, there are still some situations where it is useful to know what you're working with.
The package "math/bits" contains the size of uint
, in bits. To determine the maximum value, shift 1
by that many bits, minus 1. ie: (1 << bits.UintSize) - 1
Note that when calculating the maximum value of uint
, you'll generally need to put it explicitly into a uint
(or larger) variable, otherwise the compiler may fail, as it will default to attempting to assign that calculation into a signed int
(where, as should be obvious, it would not fit), so:
const MaxUint uint = (1 << bits.UintSize) - 1
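A minimal sketch of that gotcha (my illustration, with the output comment assuming a 64-bit platform):

package main

import (
	"fmt"
	"math/bits"
)

// Explicitly typed as uint, so the full value fits.
const MaxUint uint = (1 << bits.UintSize) - 1

func main() {
	fmt.Println(MaxUint) // 18446744073709551615 on a 64-bit platform

	// Without an explicit type the value defaults to int and overflows:
	// x := (1 << bits.UintSize) - 1 // would not compile: constant overflows int
}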
That's the direct answer to your question, but there are also a couple of related calculations you may be interested in.
According to the spec, uint and int are always the same size:

uint    either 32 or 64 bits
int     same size as uint
So we can also use this constant to determine the maximum value of int, by taking that same answer, dividing by 2 and then subtracting 1, i.e. (1 << bits.UintSize) / 2 - 1. And the minimum value of int, by shifting 1 by that many bits and dividing the result by -2, i.e. (1 << bits.UintSize) / -2. For example, on a 64-bit platform that works out to 2^64/2 - 1 = 9223372036854775807 and 2^64/-2 = -9223372036854775808.
In summary:
MaxUint: (1 << bits.UintSize) - 1
MaxInt: (1 << bits.UintSize) / 2 - 1
MinInt: (1 << bits.UintSize) / -2
Full example (should be the same as below):
package main

import (
	"fmt"
	"math"
	"math/bits"
)

func main() {
	var mi32 int64 = math.MinInt32
	var mi64 int64 = math.MinInt64
	var i32 uint64 = math.MaxInt32
	var ui32 uint64 = math.MaxUint32
	var i64 uint64 = math.MaxInt64
	var ui64 uint64 = math.MaxUint64

	var ui uint64 = (1 << bits.UintSize) - 1
	var i uint64 = (1 << bits.UintSize) / 2 - 1
	var mi int64 = (1 << bits.UintSize) / -2

	fmt.Printf(" MinInt32: %d\n", mi32)
	fmt.Printf(" MaxInt32: %d\n", i32)
	fmt.Printf("MaxUint32: %d\n", ui32)
	fmt.Printf(" MinInt64: %d\n", mi64)
	fmt.Printf(" MaxInt64: %d\n", i64)
	fmt.Printf("MaxUint64: %d\n", ui64)
	fmt.Printf("  MaxUint: %d\n", ui)
	fmt.Printf("   MinInt: %d\n", mi)
	fmt.Printf("   MaxInt: %d\n", i)
}
Use the constants defined in the math package:
const (
	MaxInt8   = 1<<7 - 1
	MinInt8   = -1 << 7
	MaxInt16  = 1<<15 - 1
	MinInt16  = -1 << 15
	MaxInt32  = 1<<31 - 1
	MinInt32  = -1 << 31
	MaxInt64  = 1<<63 - 1
	MinInt64  = -1 << 63
	MaxUint8  = 1<<8 - 1
	MaxUint16 = 1<<16 - 1
	MaxUint32 = 1<<32 - 1
	MaxUint64 = 1<<64 - 1
)
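To tie this back to the question, here is a hedged sketch of initializing min with one of these constants before a loop; the original loop isn't shown, so the slice here is made up:

package main

import (
	"fmt"
	"math"
)

func main() {
	// Hypothetical data; the question's loop wasn't shown.
	values := []int64{3, 9, -4, 10, 5}

	// Start min at the largest possible value so any element replaces it
	// (and max at the smallest, for the symmetric case).
	minVal := int64(math.MaxInt64)
	maxVal := int64(math.MinInt64)
	for _, v := range values {
		if v < minVal {
			minVal = v
		}
		if v > maxVal {
			maxVal = v
		}
	}
	fmt.Println(minVal, maxVal) // -4 10
}

On Go 1.17 and later, math.MaxInt, math.MinInt and math.MaxUint are also available for the platform-sized types.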