I'm making an application that analyses one or more series of data using several different algorithms (agents). I came to the idea that each of these agents could be implemented as a separate external program that my main application invokes.
I will typically be making hundreds of thousands, if not millions, of iterations in which I invoke these external "agents". Will the overhead of doing so hurt performance?
The performance drop will be noticeable, perhaps painful. If you can put the data into arrays and process it in batches using NumPy, it should be much faster.
NumPy makes it super easy to do any kind of arithmetic across a million elements at once. For example, squaring every element of an array looks like this:
>>> import numpy
>>> x = numpy.array([1, 2, 3, 4, 5, 6, 7])
>>> x**2
array([1, 4, 9, 16, 25, 36, 49])
Super easy, and the tight inner loop here is actually implemented in C.
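To get a feel for the difference on your own machine, here is a rough sketch (the function names and array size are just for illustration) that times a pure-Python loop against the equivalent single NumPy expression:

```python
import timeit

import numpy

x = numpy.arange(1_000_000)

def square_loop(data):
    # Pure-Python loop: one interpreter round-trip per element.
    return [v * v for v in data]

def square_vectorized(data):
    # One NumPy call: the element-wise loop runs in compiled C code.
    return data ** 2

loop_time = timeit.timeit(lambda: square_loop(x), number=1)
vec_time = timeit.timeit(lambda: square_vectorized(x), number=1)
print(f"loop: {loop_time:.4f}s  vectorized: {vec_time:.4f}s")
```

Both functions compute the same values; the exact speedup depends on the machine, but the vectorized version is typically orders of magnitude faster for arrays this size.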
Of course NumPy can also do more advanced number-crunching.
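For instance, if your series fit into a single 2-D array, a whole batch can be processed in one shot. A sketch (the shapes and the random stand-in data are made up for illustration) that normalizes every series to zero mean and unit variance using axis-wise reductions and broadcasting:

```python
import numpy

# 1000 series of 500 samples each, stacked into one 2-D array;
# random data stands in for the real series.
rng = numpy.random.default_rng(0)
series = rng.normal(loc=5.0, scale=2.0, size=(1000, 500))

# Reduce along axis=1 (per row); keepdims=True keeps the result
# as a column so broadcasting applies it back to each element.
means = series.mean(axis=1, keepdims=True)
stds = series.std(axis=1, keepdims=True)
normalized = (series - means) / stds
```

Every "agent" that can be expressed as array arithmetic like this runs its inner loops in C, no matter how many series you feed it.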