I'm working on a Python library that interfaces with a web service API. Like many web services I've encountered, this one asks clients to limit the rate of requests. I would like to build rate limiting into the library itself.
This works out better with a queue and a dispatcher.
You split your processing into two sides: source and dispatch. These can be separate threads (or separate processes if that's easier).
The Source side creates and enqueues requests at whatever rate makes them happy.
The Dispatch side does the following:

1. Get the request start time, s.
2. Dequeue a request and process it through the remote service.
3. Get the current time, t, and sleep for rate - (t - s) seconds (skipping the sleep if that value is negative, i.e. the request already took longer than the rate interval).
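The dispatch loop above can be sketched as follows. This is a minimal illustration, not a definitive implementation: `RATE` and the `process` stand-in for the remote-service call are assumptions, and a `None` sentinel is used to shut the worker down.

```python
import queue
import threading
import time

RATE = 0.1  # hypothetical minimum interval between requests, in seconds

def process(request):
    # stand-in for the actual call to the remote service
    print("sent", request)

def dispatch(q):
    while True:
        s = time.monotonic()                  # 1. request start time, s
        request = q.get()                     # 2. dequeue a request...
        if request is None:                   # sentinel: shut down cleanly
            break
        process(request)                      # ...and process it remotely
        t = time.monotonic()                  # 3. current time, t
        time.sleep(max(0.0, RATE - (t - s)))  # wait out the rest of the interval

q = queue.Queue()
worker = threading.Thread(target=dispatch, args=(q,))
worker.start()

# Source side: enqueue at whatever rate makes it happy
for i in range(3):
    q.put({"path": "/item", "id": i})
q.put(None)
worker.join()
```

Using `time.monotonic()` rather than `time.time()` keeps the sleep calculation immune to wall-clock adjustments.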
If you want to run the Source side connected directly to the remote service, you can do that, and bypass rate limiting. This is good for internal testing with a mock version of the remote service.
The hard part about this is creating some representation for each request that you can enqueue. Since the Python Queue will handle almost anything, you don't have to do much.
If you're using multiprocessing, your request objects must be picklable, since they're serialized to pass through the pipe between processes.