I'm writing a Cython wrapper for a C function. I have a .pxd file with the following signature:
double contr_hrr(int lena, double xa, double ya, double za, doubl
I don't think you can avoid doing the conversion yourself:
cimport cython
from libc.stdlib cimport malloc, free
...
cdef double *anorms
cdef unsigned int i
cdef double result
anorms = <double *> malloc(len(anorms2) * cython.sizeof(double))
if anorms is NULL:
    raise MemoryError()
for i in range(len(anorms2)):
    anorms[i] = anorms2[i]
result = contr_hrr(len(acoefs), a.origin[0], a.origin[1], a.origin[2], anorms)
free(anorms)  # don't leak the temporary buffer
return result
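One thing to watch: if an element conversion raises (e.g. an entry of anorms2 is not float-convertible), the buffer leaks before free() is reached. A minimal sketch of an exception-safe variant, using a hypothetical py_contr_hrr signature pieced together from the question's code:

def py_contr_hrr(anorms2, acoefs, a):  # hypothetical signature, inferred from the question
    cdef double *anorms
    cdef unsigned int i
    anorms = <double *> malloc(len(anorms2) * cython.sizeof(double))
    if anorms is NULL:
        raise MemoryError()
    try:
        for i in range(len(anorms2)):
            anorms[i] = anorms2[i]  # may raise if the item is not a number
        return contr_hrr(len(acoefs), a.origin[0], a.origin[1], a.origin[2], anorms)
    finally:
        free(anorms)  # runs on both the normal and the error path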
If you were wrapping C++ instead, this would be different, because the following coercions are available:
Python type        =>  C++ type     =>  Python type
bytes                  std::string      bytes
iterable               std::vector      list
iterable               std::list        list
iterable               std::set         set
iterable (len 2)       std::pair        tuple (len 2)
If you could switch to C++, you would have a direct translation from List[float] to vector<double>:
from libcpp.vector cimport vector

def py_contr_hrr(vector[double] anorms2, ...):
    ...
    # the C function expects a double *, so pass a pointer to the
    # vector's contiguous buffer rather than the vector itself
    return contr_hrr(len(acoefs), a.origin[0], a.origin[1], a.origin[2], &anorms2[0])
And call it directly from the Python side:
anorms2 = [12.0, 0.5, ...]
py_contr_hrr(anorms2, ...)
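Note that using libcpp.vector requires the extension module to be compiled as C++, either with a "# distutils: language = c++" comment at the top of the .pyx file or in the build script. A minimal sketch of the latter, assuming the wrapper lives in a file called wrapper.pyx (the file and module names here are placeholders):

# setup.py -- minimal sketch; "wrapper" is a placeholder module name
from setuptools import setup, Extension
from Cython.Build import cythonize

setup(
    ext_modules=cythonize(
        Extension("wrapper", ["wrapper.pyx"], language="c++")
    )
)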
Source: http://docs.cython.org/src/userguide/wrapping_CPlusPlus.html#standard-library
But I don't know whether switching to C++ is an option you can consider; that depends on your project's constraints, of course.
EDIT: I didn't know about Nikita's approach (which is an elegant one, by the way), and I don't know either which of the two is better performance-wise on big arrays.
You can use Cython's built-in support for the array module. First, cimport array:
from cpython cimport array
Create an array object from your list. The array class constructor will do all the heavy lifting, allocating memory and iterating over your list (which could actually be any iterable).
cdef array.array anorms2_arr = array.array('d', anorms2)
Pass its data to your function:
return contr_hrr(..., anorms2_arr.data.as_doubles)
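Putting the two steps together, the whole wrapper might look like this (a sketch only; the py_contr_hrr signature and the acoefs/a.origin arguments are carried over from the question's code and are assumptions here):

from cpython cimport array
import array

def py_contr_hrr(anorms2, acoefs, a):  # hypothetical signature, based on the question
    # copies the list into a contiguous C buffer of doubles;
    # the memory is owned (and eventually freed) by the array object
    cdef array.array anorms2_arr = array.array('d', anorms2)
    return contr_hrr(len(acoefs), a.origin[0], a.origin[1], a.origin[2],
                     anorms2_arr.data.as_doubles)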
array is a standard Python module. Cython adds some special support on top, like the buffer interface and direct access to the underlying memory block via arr.data.as_xxx. Unfortunately, this support is only documented here.
You can also find some details about array usage in this recent thread.
What I finally did was make sure that the anorms arrays were maintained as array objects in the Python part of the code, and then follow Nikita's recipe to convert them on the fly to C doubles via the .data.as_doubles attribute. Done this way, it appears to have very little overhead compared to doing everything natively in C.
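For the record, that amounts to something like the following (a sketch; it assumes the argument arriving from the Python side is already an array.array('d', ...)):

from cpython cimport array

def py_contr_hrr(array.array anorms2, acoefs, a):
    # no copy here: as_doubles is just a typed pointer into the
    # buffer the array object already owns
    return contr_hrr(len(acoefs), a.origin[0], a.origin[1], a.origin[2],
                     anorms2.data.as_doubles)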
Haven't yet experimented with the numpy approach, for a variety of mundane reasons.