I am running the exact same code on both Windows and Mac, with Python 3.5 64-bit.
On Windows, it looks like this:
>>> import numpy as np
>>> preds = np.zeros((1, 3), dtype=int)
>>> preds[0] = p   # p is a list of large integers
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
OverflowError: Python int too large to convert to C long
Could anyone help explain why, or give a solution for the code on Windows? Thanks so much!
In Python 2, a Python "int" was equivalent to a C long. In Python 3, an "int" is an arbitrary-precision type, but numpy still treats the "int" dtype as the C type "long" when creating arrays.
The size of a C long is platform-dependent: on Windows it is always 32-bit, while on Unix-like systems it is normally 32-bit on 32-bit systems and 64-bit on 64-bit systems.
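You can see this from the interpreter; a minimal check, with the outputs below being what a 64-bit Windows build would print (a 64-bit Linux or Mac build prints dtype('int64') and 8 instead):

>>> import numpy as np
>>> np.array([1, 2, 3]).dtype    # what a plain int list becomes
dtype('int32')
>>> np.dtype(int).itemsize       # bytes per element when dtype=int
4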
Choose a data type whose size is not platform-dependent. You can find the list at https://docs.scipy.org/doc/numpy/reference/arrays.scalars.html#arrays-scalars-built-in; the most sensible choice would probably be np.int64.
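For example, a minimal sketch of that fix, reusing the preds array from the question:

>>> import numpy as np
>>> preds = np.zeros((1, 3), dtype=np.int64)   # 64-bit on every platform
>>> preds[0] = [2**40, 2**50, 2**60]           # far beyond a 32-bit C long
>>> preds[0, 2]
1152921504606846976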
You'll get that error once your numbers are greater than sys.maxsize:
>>> import sys
>>> import numpy as np
>>> preds = np.zeros((1, 3), dtype=int)   # dtype=int maps to a C long
>>> p = [sys.maxsize]
>>> preds[0] = p            # fits on a 32-bit build, where sys.maxsize is the C long max
>>> p = [sys.maxsize + 1]
>>> preds[0] = p            # one past the limit
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
OverflowError: Python int too large to convert to C long
You can confirm the limit by checking:

>>> sys.maxsize
2147483647

(That output is from a 32-bit build, where sys.maxsize and the largest C long coincide. On a 64-bit Windows build sys.maxsize is 2**63 - 1 while a C long is still 32-bit, so the bound that actually triggers the error there is the C long maximum, 2147483647.)
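A way to query that bound directly, whatever the build (a small check; in numpy of this era np.int_ is the scalar type corresponding to a C long):

>>> import numpy as np
>>> np.iinfo(np.int_).max   # 2147483647 on Windows, 2**63 - 1 on 64-bit Unix
2147483647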
To take numbers of larger magnitude, don't pass an int dtype, which uses a bounded C integer behind the scenes. Use the default float:
>>> preds = np.zeros((1, 3))
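The same assignment then succeeds; the output below assumes a 64-bit build, and note one caveat the answer doesn't mention: float64 has a 53-bit mantissa, so integers above 2**53 are stored only approximately:

>>> preds = np.zeros((1, 3))        # default dtype is float64
>>> preds[0] = [sys.maxsize + 1]    # no OverflowError this time
>>> preds[0, 0]
9.223372036854776e+18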
You can use dtype=np.int64 instead of dtype=int.
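For instance, a quick check that the element type then no longer varies by platform:

>>> import numpy as np
>>> np.zeros((1, 3), dtype=np.int64).dtype
dtype('int64')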