I mean, what do I get from using `async for`? Here is the code I write with `async for`; `AIter(10)` could be replaced with `get_range()`. But it is hard for me to understand what I gain by using `async for` here instead of a simple `for`.
The underlying misunderstanding is expecting `async for` to automatically parallelize the iteration. It doesn't do that; it simply allows sequential iteration over an async source. For example, you can use `async for` to iterate over lines coming from a TCP stream, messages from a websocket, or database records from an async DB driver.
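As a minimal sketch of what that looks like (the `ticker` async generator below is made up for illustration; it stands in for any async source):

```python
import asyncio

async def ticker(n):
    """Hypothetical async source: yields n values, awaiting between them."""
    for i in range(n):
        await asyncio.sleep(0.1)  # stand-in for waiting on a socket, DB, etc.
        yield i

async def main():
    # async for awaits each item; other tasks may run during the waits,
    # but the items still arrive strictly one after another.
    async for item in ticker(3):
        print(item)

asyncio.run(main())
```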
You could do none of the above with an ordinary `for`, at least not without blocking the event loop, because `for` calls `__next__` as a blocking function and doesn't await its result. You cannot compensate by manually awaiting each element, because `for` expects `__next__` to signal the end of iteration by raising an exception; if `__next__` is a coroutine, the exception won't be visible before awaiting it. This is why `async for` was introduced, not just in Python, but also in other languages with async/await and a generalized `for`.
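To make that concrete, here is roughly what an `async for` loop does under the hood, written out by hand (a simplified sketch, again using a made-up `ticker` async generator):

```python
import asyncio

async def ticker(n):
    """Hypothetical async source, as in the sketch above."""
    for i in range(n):
        await asyncio.sleep(0.1)
        yield i

async def main():
    # Hand-written equivalent of: async for item in ticker(3): print(item)
    it = ticker(3).__aiter__()
    while True:
        try:
            item = await it.__anext__()   # the await is what a plain for cannot do
        except StopAsyncIteration:        # only visible once the result is awaited
            break
        print(item)

asyncio.run(main())
```

An ordinary `for` would call `__next__` (which doesn't exist here) and could never await the result, which is exactly the gap `async for` fills.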
If you want to run the iterations in parallel, you need to start them as parallel coroutines and use `asyncio.as_completed` or equivalent to retrieve their results as they come:
```python
import asyncio

async def x(i):
    print(f"start {i}")
    await asyncio.sleep(1)
    print(f"end {i}")
    return i

async def main():
    # run x(0)..x(9) in parallel and process results as they arrive
    # (as_completed yields them in completion order, not submission order)
    for f in asyncio.as_completed([x(i) for i in range(10)]):
        result = await f
        # ... do something with the result ...

asyncio.run(main())
```
If you don't care about reacting to results immediately as they arrive, but you need them all, you can make it even simpler by using `asyncio.gather`:
```python
# run x(0)..x(9) in parallel and collect the results once all have finished
# (run inside a coroutine; reuses x() from the previous example)
results = await asyncio.gather(*[x(i) for i in range(10)])
```