Is the following numpy behavior intentional or is it a bug?
from numpy import *
a = arange(5)
a = a + 2.3
print 'a = ', a
# Output: a =  [ 2.3  3.3  4.3  5.3  6.3]

a = arange(5)
a += 2.3
print 'a = ', a
# Output: a =  [2 3 4 5 6]
That's intentional.
The += operator preserves the type of the array. In other words, an array of integers remains an array of integers. This enables NumPy to perform the += operation using existing array storage. On the other hand, a = a + b creates a brand new array for the sum and rebinds a to point to this new array; this increases the amount of storage used for the operation.
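To make the difference concrete, here is a minimal sketch (not from the original answer) that checks the dtype and whether the original array object is reused; it uses an integer operand for the in-place add so it behaves the same across NumPy versions:

import numpy as np

a = np.arange(5)     # integer array
b = a                # second name for the same array object
a += 2               # in-place: reuses the existing storage
print(a.dtype)       # int64 (int32 on some platforms) -- type preserved
print(a is b)        # True -- still the same array

a = np.arange(5)
b = a
a = a + 2.3          # builds a brand new float array and rebinds a
print(a.dtype)       # float64
print(a is b)        # False -- b still refers to the original integer array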
To quote the documentation:
Warning: In place operations will perform the calculation using the precision decided by the data type of the two operands, but will silently downcast the result (if necessary) so it can fit back into the array. Therefore, for mixed precision calculations, A {op}= B can be different than A = A {op} B. For example, suppose a = ones((3,3)). Then, a += 3j is different than a = a + 3j: while they both perform the same computation, a += 3j casts the result to fit back in a, whereas a = a + 3j re-binds the name a to the result.
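As an illustrative sketch of that downcast (not part of the quoted documentation): depending on your NumPy version, a bare a += 2.3 on an integer array either truncates silently (older releases, as in the question) or raises a casting error, so the sketch below requests the unsafe cast explicitly through np.add:

import numpy as np

a = np.arange(5)                          # integer array: [0 1 2 3 4]
np.add(a, 2.3, out=a, casting='unsafe')   # in-place add, allowing the downcast
print(a)                                  # [2 3 4 5 6] -- float result truncated to ints
print(a.dtype)                            # still an integer dtype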
Finally, if you're wondering why a was an integer array in the first place, consider the following:
In [3]: np.arange(5).dtype
Out[3]: dtype('int64')
In [4]: np.arange(5.0).dtype
Out[4]: dtype('float64')
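If a floating-point result is what you wanted, the simplest fix is to make a a float array up front; a small sketch (not from the original answer):

import numpy as np

a = np.arange(5, dtype=float)   # [0. 1. 2. 3. 4.]
a += 2.3                        # in-place add now keeps full precision
print(a)                        # [2.3 3.3 4.3 5.3 6.3]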
@aix is completely right. I just wanted to point out that this is not unique to numpy. For example:
>>> a = []
>>> b = a
>>> a += [1]
>>> print a
[1]
>>> print b
[1]
>>> a = a + [2]
>>> print a
[1, 2]
>>> print b
[1]
As you can see, += modifies the list in place and + creates a new list. This holds for numpy as well: + creates a new array, so the result can be of any data type, while += modifies the array in place, and it's neither practical nor, imo, desirable for numpy to change the data type of an array when its contents are modified.
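For completeness, the same aliasing experiment can be repeated with NumPy arrays, as a sketch (np.shares_memory needs NumPy 1.11+; on older versions comparing a is b gives the same information):

import numpy as np

a = np.arange(3)
b = a                              # b is another name for the same array
a += 1                             # in-place: b sees the change
print(b)                           # [1 2 3]
print(np.shares_memory(a, b))      # True

a = a + 1                          # a is rebound to a new array; b is untouched
print(b)                           # still [1 2 3]
print(np.shares_memory(a, b))      # False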