I have been using numpy for quite a while, but I stumbled upon one thing that I didn't understand fully:

a = np.ones(20)
b = np.zeros(10)
print(id(a[0]), id(b[0]))
The short answer is that you should forget about relying on id to try to gain deep insight into the workings of Python. Its output is affected by CPython implementation details, peephole optimizations, and memory reuse. More often than not, id is a red herring. This is especially true with numpy.
In your specific case, only a and b exist as Python objects. When you take an element, a[0], you instantiate a new Python object: a scalar of type numpy.float64 (or possibly numpy.float32, depending on your system). These scalars are new Python objects and are thus given a new id, unless the interpreter realizes that you're trying to use the same object twice (this is probably what's happening in your middle example, although I do find it surprising that two numpy.float64 objects with different values are given the same id; the weird magic goes away if you assign a[0] and b[0] to proper names first, so this is probably due to some optimization). It can also happen that memory addresses get reused by the interpreter, giving you ids that have appeared before: id is only guaranteed to be unique among objects that are alive at the same time, and a temporary like a[0] can already be garbage-collected by the time b[0] is created.
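For instance, the lifetime effect is easy to demonstrate (the first result here is an accident of CPython's memory reuse, so it may come out differently on your system):

>>> import numpy as np
>>> a = np.ones(20)
>>> b = np.zeros(10)
>>> id(a[0]) == id(b[0])  # the a[0] temporary dies before b[0] is created
True
>>> x, y = a[0], b[0]     # keep both scalars alive at the same time
>>> id(x) == id(y)        # now the ids are necessarily different
False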
Just to see how pointless id is with numpy: even trivial views are new Python objects with new ids, even though for all intents and purposes they are as good as the original:
>>> arr = np.arange(3)
>>> id(arr)
140649669302992
>>> id(arr[...])
140649669667056
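The new id notwithstanding, the view is backed by the very same buffer as the original array; its base attribute makes this visible (continuing with the arr from above):

>>> arr[...].base is arr
True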
And here's an example of id reuse in an interactive shell:
>>> id(np.arange(3))
139775926634896
>>> id(np.arange(3))
139775926672480
>>> id(np.arange(3))
139775926634896
Surely there's no such thing as int interning for numpy arrays, so the repeated value above is only due to the interpreter reusing ids: each np.arange(3) is a temporary that is deallocated right after id returns, so its memory address becomes available for a later array. The fact that id returns a memory address at all is again just a CPython implementation detail. Forget about id.
The only identity-like tools you might want to use with numpy are numpy.may_share_memory and numpy.shares_memory, which check whether two arrays overlap in memory rather than whether they are the same Python object.
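A short illustration of the difference between the two (shares_memory performs an exact check, while may_share_memory only compares memory bounds and so can report True even when no element is actually shared):

>>> x = np.arange(10)
>>> np.shares_memory(x, x[::2])           # the view really overlaps x
True
>>> np.shares_memory(x[::2], x[1::2])     # even and odd elements never overlap
False
>>> np.may_share_memory(x[::2], x[1::2])  # bounds-only check errs on the side of True
True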