I am curious about the memory cost of a map versus a slice, so I wrote a program to compare their sizes. I get the memory size with unsafe.Sizeof(s).
unsafe.Sizeof() and reflect.Type.Size() only return the size of the passed value, without recursively traversing the data structure and adding the sizes of pointed values.
A slice is a relatively simple struct (see reflect.SliceHeader), and since we know it references a backing array, we can easily compute its size "manually", e.g.:
s := make([]int32, 1000)
fmt.Println("Size of []int32:", unsafe.Sizeof(s))
fmt.Println("Size of [1000]int32:", unsafe.Sizeof([1000]int32{}))
fmt.Println("Real size of s:", unsafe.Sizeof(s)+unsafe.Sizeof([1000]int32{}))
Output (try it on the Go Playground):

Size of []int32: 12
Size of [1000]int32: 4000
Real size of s: 4012

(The output above comes from a 32-bit environment, which is why the slice header is 12 bytes: three 4-byte words for the pointer, length and capacity. On a 64-bit architecture it would be 24 bytes.)
Maps are much more complex data structures; I won't go into the details here, but check out this question and answer: Golang: computing the memory footprint (or byte length) of a map
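One quick illustration of why Sizeof() alone won't help with maps: a map value is just a reference to the runtime's hash structure, so unsafe.Sizeof() reports the same small header size no matter how many entries the map holds:

m := make(map[int32]int32, 1000)
fmt.Println("Size of map[int32]int32:", unsafe.Sizeof(m)) // pointer size: 8 bytes on 64-bit, 4 on 32-bit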
If you want "real" numbers, you may take advantage of Go's testing tool, which can also perform memory benchmarking. Pass the -benchmem argument, and inside the benchmark function allocate only the memory whose size you want to measure:
func BenchmarkSlice100(b *testing.B) {
	for i := 0; i < b.N; i++ {
		getSlice(100)
	}
}

func BenchmarkSlice1000(b *testing.B) {
	for i := 0; i < b.N; i++ {
		getSlice(1000)
	}
}

func BenchmarkSlice10000(b *testing.B) {
	for i := 0; i < b.N; i++ {
		getSlice(10000)
	}
}

func BenchmarkMap100(b *testing.B) {
	for i := 0; i < b.N; i++ {
		getMap(100)
	}
}

func BenchmarkMap1000(b *testing.B) {
	for i := 0; i < b.N; i++ {
		getMap(1000)
	}
}

func BenchmarkMap10000(b *testing.B) {
	for i := 0; i < b.N; i++ {
		getMap(10000)
	}
}
(Remove the timing and printing calls from getSlice() and getMap(), of course.)
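The getSlice() and getMap() helpers come from the question and aren't reproduced here. A minimal sketch of what they might look like (an assumption on my part; the exact B/op and allocs/op figures below depend on how these helpers actually allocate, so this sketch may produce different numbers):

// getSlice allocates a slice of n ints in a single allocation.
func getSlice(n int) []int {
	s := make([]int, n)
	for i := 0; i < n; i++ {
		s[i] = i
	}
	return s
}

// getMap fills a map with n int-int pairs; without a capacity
// hint the map grows (and re-allocates buckets) as it fills.
func getMap(n int) map[int]int {
	m := make(map[int]int)
	for i := 0; i < n; i++ {
		m[i] = i
	}
	return m
}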
Running with
go test -bench . -benchmem
Output is:
BenchmarkSlice100-4      3000000        471 ns/op     1792 B/op     1 allocs/op
BenchmarkSlice1000-4      300000       3944 ns/op    16384 B/op     1 allocs/op
BenchmarkSlice10000-4      50000      39293 ns/op   163840 B/op     1 allocs/op
BenchmarkMap100-4         200000      11651 ns/op     2843 B/op     9 allocs/op
BenchmarkMap1000-4         10000     111040 ns/op    41823 B/op    12 allocs/op
BenchmarkMap10000-4         1000    1152011 ns/op   315450 B/op   135 allocs/op
The B/op values tell you how many bytes were allocated per operation; allocs/op tells you how many (distinct) memory allocations occurred per operation.
On my 64-bit architecture (where the size of int is 8 bytes) it tells us that the size of a slice holding 2000 elements is roughly 16 KB (in line with 2000 * 8 bytes). A map with 1000 int-int pairs required roughly 42 KB to be allocated. Note the overhead: the raw data of 1000 int-int pairs is only 16,000 bytes, so the map's buckets and metadata cost more than the payload itself.
This incurs some marshaling overhead, but I've found it's the simplest way to get the size of a value at runtime in Go. For my needs the marshaling overhead wasn't a big issue, so I went this route.
import (
	"bytes"
	"encoding/gob"
)

// getRealSizeOf gob-encodes v and returns the encoded length in bytes.
func getRealSizeOf(v interface{}) (int, error) {
	b := new(bytes.Buffer)
	if err := gob.NewEncoder(b).Encode(v); err != nil {
		return 0, err
	}
	return b.Len(), nil
}
This is the correct way, using unsafe.Sizeof(s). It's just that the result will remain the same for a given type (integer, string, etc.), regardless of the exact value.
Sizeof takes an expression x of any type and returns the size in bytes of a hypothetical variable v as if v was declared via var v = x. The size does not include any memory possibly referenced by x. For instance, if x is a slice, Sizeof returns the size of the slice descriptor, not the size of the memory referenced by the slice.
Reference: https://pkg.go.dev/unsafe#Sizeof
Update: You can use marshalling and then compare the byte sizes of the values' representations. It's only a matter of converting the data to a byte string.
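A minimal sketch of that idea, reusing the gob approach from the answer above (the encodedLen helper name is my own, hypothetical choice):

package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
)

// encodedLen returns the length of v's gob encoding; a rough,
// marshal-based proxy for comparing the memory cost of two values.
func encodedLen(v interface{}) int {
	var buf bytes.Buffer
	if err := gob.NewEncoder(&buf).Encode(v); err != nil {
		return 0 // encoding failed; treat as unknown size
	}
	return buf.Len()
}

func main() {
	s := make([]int32, 1000)
	m := make(map[int32]int32, 1000)
	for i := int32(0); i < 1000; i++ {
		s[i] = i
		m[i] = i
	}
	fmt.Println("slice bytes:", encodedLen(s))
	fmt.Println("map bytes:  ", encodedLen(m))
}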