I have some Python code that has many classes. I used cProfile to find that the total time to run the program is 68 seconds. I found that the following function in
Depending on how often you add new elements to `self.people` or change `person.utility`, you could consider keeping `self.people` sorted by the `utility` field. Then you could use a `bisect` function to find the lowest index `i_pivot` where the condition `self.people[i_pivot].utility >= price` is met. This would have a lower complexity (O(log N)) than your exhaustive loop (O(N)).
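As a minimal sketch of that idea (the `Person` class and the sample values below are only placeholders for whatever your real code uses), finding the pivot could look like this:

```python
from bisect import bisect_left

# Stand-in for your own class; only the utility attribute matters here.
class Person:
    def __init__(self, utility):
        self.utility = utility

# Keep self.people sorted by utility once, instead of scanning it each call.
people = sorted((Person(u) for u in (3.0, 1.5, 7.2, 4.8)),
                key=lambda p: p.utility)
price = 4.0

# On Python 3.10+ bisect accepts key=; on older versions, keep a parallel
# list of utilities next to self.people and bisect that instead.
i_pivot = bisect_left(people, price, key=lambda p: p.utility)
# people[i_pivot:] are exactly the persons with utility >= price.
```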
With this information, you could then update your `people` list if needed:
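For example, continuing the sketch above (what the per-person update really is depends on your code; the `is_customer` flag here is an assumed attribute name), and using `insort` to keep the list sorted when new people arrive:

```python
from bisect import insort

# Hypothetical update: only people at or past the pivot change state.
for person in people[i_pivot:]:
    person.is_customer = True
for person in people[:i_pivot]:
    person.is_customer = False

# Insert a newcomer at the right position so the list stays sorted
# (key= needs Python 3.10+; otherwise bisect a parallel list of utilities).
insort(people, Person(5.1), key=lambda p: p.utility)
```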
Do you really need to update the `utility` field each time? In the sorted case, you could easily deduce this value while iterating: for example, considering your list sorted in increasing order, `utility = (index >= i_pivot)`.
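Concretely (still with the same `people` and `i_pivot` as above; whether a boolean like this matches what your code stores is an assumption), the value can be computed on the fly instead of being written to every object:

```python
# Derive the state from the position in the sorted list.
for index, person in enumerate(people):
    is_buyer = index >= i_pivot  # True exactly when person.utility >= price
```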
Same question for the `customers` and `nonCustomers` lists: why do you need them? They could be replaced by slices of the original sorted list, for example `customers = self.people[0:i_pivot]`.
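Continuing the sketch, the two bookkeeping lists collapse to slices. Which side of the pivot counts as a customer depends on how your original loop classified people, so treat the names below as assumptions:

```python
# Slices of the already-sorted list replace the separately maintained lists.
customers = people[0:i_pivot]      # the part below the pivot, as in the example above
non_customers = people[i_pivot:]   # the complementary slice
```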
All this would allow you to reduce the complexity of your algorithm and rely more on built-in (fast) Python functions, which could speed up your implementation.