Whenever I read about async-await, the use case example is always one where there's a UI that you don't want to freeze.
Most often you don't gain in direct performance (the task you are performing happening faster and/or in less memory) so much as in scalability: using fewer threads to perform the same number of simultaneous tasks means the number of simultaneous tasks you can handle is higher.
For the most part, therefore, you won't find a given operation improving in performance, but you may find that performance under heavy use improves.
If an operation involves parallel tasks that are truly asynchronous (multiple async I/O operations, say), then that scalability can benefit the single operation too. Because the degree of blocking in threads is reduced, this holds even if you have only one core, because the machine divides its time only between those tasks that are not currently waiting.
This differs from parallel CPU-bound operations, which (whether done using tasks or otherwise) will generally only scale up to the number of cores available. (Hyper-threaded cores behave like two or more cores in some regards and not in others.)
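As a rough sketch of that single-operation case (the class name and URLs below are made up for illustration), one operation that issues several truly async I/O requests concurrently and awaits them together could look like this:

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public static class PriceAggregator
{
    private static readonly HttpClient Client = new HttpClient();

    // One logical operation made of several truly async I/O calls.
    // All three requests are in flight at the same time, yet no thread
    // is blocked while we wait - even on a single-core machine.
    public static async Task<string[]> FetchAllAsync()
    {
        Task<string> a = Client.GetStringAsync("https://example.com/prices/a"); // hypothetical endpoints
        Task<string> b = Client.GetStringAsync("https://example.com/prices/b");
        Task<string> c = Client.GetStringAsync("https://example.com/prices/c");

        // Total time is roughly the slowest request, not the sum of the three.
        return await Task.WhenAll(a, b, c);
    }
}
```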
In this interview, Eric Lippert compared async-await with a cook making breakfast. It helped me a lot to understand the benefits of async-await; search somewhere in the middle for 'async-await'.
Suppose a cook has to make breakfast. He has to toast some bread and boil some eggs, and maybe make some tea as well.
Method 1: Synchronous. Performed by one thread. You start toasting the bread. Wait until the bread is toasted. Remove the bread. Start boiling water, wait until the water boils and insert your egg. Wait until the egg is ready and remove the egg. Start boiling water for the tea. Wait until the water is boiled and make the tea.
You'll see all the waits. While the thread is waiting, it could be doing other things.
Method 2: Async-await, still one thread. You start toasting the bread. While the bread is being toasted, you start boiling water for the eggs and also for the tea. Then you start waiting. When any of the three tasks is finished, you do the second part of that task, depending on which task finished first. So if the water for the eggs boils first, you cook the eggs, and then again wait for any of the tasks to finish.
In this description only one person (you) is doing all the work. Only one thread is involved. The nice thing is that, because there is only one thread doing the work, the code looks quite synchronous to the reader, and there is not much need to make your variables thread-safe.
It's easy to see that this way your breakfast will be ready in a shorter time (and your bread will still be warm!). In computer life these things happen when your thread has to wait for another process to finish, like writing a file to disk, or getting information from a database or from the internet. Those are typically the kinds of functions where you'll see an async version: Write and WriteAsync, Read and ReadAsync.
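A minimal sketch of "Method 2" in C# might look like the following (the breakfast helpers are made-up stand-ins that just simulate waiting; Task.WhenAny plays the role of "wait for whichever task finishes first"):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public static class Breakfast
{
    // Stand-ins for real async work (e.g. I/O); Task.Delay just simulates the waiting.
    static Task ToastBreadAsync()   => Task.Delay(3000);
    static Task BoilEggsAsync()     => Task.Delay(7000);
    static Task BoilTeaWaterAsync() => Task.Delay(4000);

    public static async Task MakeBreakfastAsync()
    {
        // Start all three jobs up front; nothing is awaited yet, so they overlap.
        var pending = new List<Task>
        {
            ToastBreadAsync(),
            BoilEggsAsync(),
            BoilTeaWaterAsync()
        };

        // One "cook" (one thread at a time) handles whichever job finishes first.
        while (pending.Count > 0)
        {
            Task finished = await Task.WhenAny(pending);
            pending.Remove(finished);
            await finished; // do the "second part" of that job here
        }

        Console.WriteLine("Breakfast is ready.");
    }
}
```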
Addition: after some remarks from other users elsewhere, and some testing, I found that in fact it can be any thread that continues your work after the await. This other thread has the same 'context' and can therefore act as if it were the original thread.
Method 3: Hire cooks to toast the bread and boil the eggs while you make the tea: real asynchrony, several threads. This is the most expensive option, because it involves creating separate threads. In the example of making breakfast it will probably not speed up the process very much, because for relatively large parts of the process you are not doing anything anyway. But if, for instance, you also need to slice tomatoes, it might be handy to let a cook (a separate thread) do that while you do the other work using async-await. Of course, one of the awaits you do is awaiting the cook to finish his slicing.
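As a sketch of "Method 3": the hired cook corresponds to offloading CPU-bound work to the thread pool with Task.Run and awaiting it alongside the other tasks (SliceTomatoes and BoilTeaWaterAsync below are made-up stand-ins):

```csharp
using System.Threading.Tasks;

public static class BreakfastWithHelp
{
    public static async Task MakeBreakfastAsync()
    {
        // The hired cook: CPU-bound work running on a thread-pool thread.
        Task slicing = Task.Run(() => SliceTomatoes());

        // Meanwhile, you keep doing the async (waiting-style) work yourself.
        Task tea = BoilTeaWaterAsync();

        // One of the awaits is simply waiting for the cook to finish.
        await Task.WhenAll(slicing, tea);
    }

    static void SliceTomatoes() { /* CPU-bound work */ }
    static Task BoilTeaWaterAsync() => Task.Delay(4000); // simulated wait
}
```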
Another article that explains a lot is Async and Await, written by the ever so helpful Stephen Cleary.
The async and await keywords don't cause additional threads to be created. Async methods don't require multithreading because an async method doesn't run on its own thread. The method runs on the current synchronization context and uses time on the thread only when the method is active. You can use Task.Run to move CPU-bound work to a background thread, but a background thread doesn't help with a process that's just waiting for results to become available.
The async-await functionality of .NET is not different from that of other frameworks. It does not give a performance advantage for local computations; it just allows continuous switching between tasks on a single thread instead of letting one task block the thread. If you want a performance gain for local computations, use the Task Parallel Library.
Visit https://msdn.microsoft.com/en-us/library/dd460717(v=vs.110).aspx
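For example, a minimal sketch of using the Task Parallel Library for a local, CPU-bound computation (the sum-of-squares work is just a placeholder):

```csharp
using System.Threading.Tasks;

public static class TplDemo
{
    public static double SumOfSquares(double[] values)
    {
        object gate = new object();
        double total = 0;

        // Parallel.For spreads the CPU-bound loop across the available cores.
        Parallel.For(0, values.Length,
            () => 0.0,                                            // thread-local subtotal
            (i, state, subtotal) => subtotal + values[i] * values[i],
            subtotal => { lock (gate) { total += subtotal; } });  // combine subtotals

        return total;
    }
}
```

This is where real speedups for local computations come from: more cores working at once, not async/await.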
In short, and in the very general case: no, it usually will not. But it requires a few more words, because "performance" can be understood in many ways.
Async/await 'saves time' only when the 'job' is I/O-bound. Applying it to jobs that are CPU-bound will introduce some performance hit. That's because if you have a computation that takes, say, 10 seconds on your CPU(s), then adding async/await - that is, task creation, scheduling and synchronization - will simply add X extra time on top of the 10 seconds that you still need to burn on your CPU(s) to get the job done. Something close to the idea of Amdahl's law. Not exactly it, but quite close.
However, there are a few 'but's.
First of all, the performance hit from introducing async/await is often not that large (especially if you are careful not to overdo it).
Second, since async/await lets you write I/O-interleaved code much more easily, you may notice new opportunities to remove waiting on I/O in places where you'd be too lazy ( :) ) to do it otherwise, or where it would make the code too hard to follow without the async/await syntax. For example, splitting the code around network requests is a rather obvious thing to do, but you may notice that you can also upgrade some file I/O in the few places where you write CSV files or read configuration files, etc. Still, note that the gain here is not thanks to async/await itself; it comes from rewriting the code that handles file I/O. You could do that without async/await too.
Third, since some I/O operations become easier, you may notice that offloading CPU-intensive work to another service or machine is much easier too, which can improve your perceived performance (shorter 'wall-clock' time), but overall resource consumption will rise: another machine added, time spent on network operations, etc.
Fourth: UI. You really don't want to freeze it. It's very easy to wrap both I/O-bound and CPU-bound jobs in Tasks, async/await on them and keep the UI responsive; that's why you see it mentioned everywhere. However, while I/O-bound operations ideally should be asynchronous down to the very leaves to remove as much idle waiting as possible on lengthy I/O, CPU-bound jobs do not need to be split or made async any further than one level down. Having a huge monolithic calculation job wrapped in just one task is enough to keep the UI unblocked. Of course, if you have many processors/cores, it's still worth parallelizing whatever is possible inside, but in contrast to I/O, split too much and you will be busy switching tasks instead of chewing through the calculations.
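For example, a sketch of that single level of wrapping, inside a hypothetical WinForms/WPF code-behind class (CalculateButton_Click, ResultLabel and HeavyCalculation are made-up names):

```csharp
// Fragment of a hypothetical form/window class (assumes using System;
// and using System.Threading.Tasks;). One Task.Run around the whole
// CPU-bound job is enough to keep the UI thread free.
private async void CalculateButton_Click(object sender, EventArgs e)
{
    ResultLabel.Text = "Working...";

    // The monolithic calculation runs on a thread-pool thread;
    // the UI thread is released at this await and resumes when the work is done.
    int result = await Task.Run(() => HeavyCalculation());

    ResultLabel.Text = result.ToString();
}

private int HeavyCalculation()
{
    // Placeholder CPU-bound work; parallelize internally only if it is really worth it.
    int sum = 0;
    for (int i = 0; i < 500_000_000; i++) sum += i % 7;
    return sum;
}
```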
Summarizing: if you have time-consuming I/O, async operations can save a lot of time, and it's hard to overdo asynchronizing I/O operations. If you have CPU-bound operations, then adding anything will consume more CPU time and more memory in total, but the wall-clock time can improve thanks to splitting the job into smaller parts that can perhaps run on more cores at the same time. It's not hard to overdo that, so you need to be a little careful.
Whenever I read about async-await, the use case example is always one where there's a UI that you don't want to freeze.
That's the most common use case for async. The other one is server-side applications, where async can increase the scalability of web servers.
Are there any examples of how one could use async-await to eke out performance benefits in an algorithm?
No.
You can use the Task Parallel Library if you want to do parallel processing. Parallel processing is the use of multiple threads, dividing up parts of an algorithm among multiple cores in a system. Parallel processing is one form of concurrency (doing multiple things at the same time).
Asynchronous code is completely different. The point of async code is to not use the current thread while the operation is in progress. Async code is generally I/O-bound or based on events (like a timer). Asynchronous code is another form of concurrency.
I have an async intro on my blog, as well as a post on how async doesn't use threads.
Note that the tasks used by the Task Parallel Library can be scheduled onto threads and will execute code. The tasks used by the Task-Based Asynchronous Pattern do not have code and do not "execute". Although both types of task are represented by the same type (Task), they are created and used completely differently; I describe these Delegate Tasks and Promise Tasks in more detail on my blog.
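A minimal sketch of that difference, using the terminology above: a Delegate Task wraps code to run (Task.Run), while a Promise Task has no code and merely completes when something external signals it (TaskCompletionSource); the timer below is just an illustration:

```csharp
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

public static class TaskKinds
{
    // Delegate Task: has code, runs on a thread-pool thread.
    public static Task<int> DelegateTask()
        => Task.Run(() => Enumerable.Range(1, 1000).Sum());

    // Promise Task: has no code to "execute"; it simply completes
    // when some external event (here, a timer) signals it.
    public static Task<int> PromiseTask()
    {
        var tcs = new TaskCompletionSource<int>();
        var timer = new Timer(_ => tcs.TrySetResult(42), null, 1000, Timeout.Infinite);

        // Keep the timer alive until it fires, then clean it up.
        tcs.Task.ContinueWith(_ => timer.Dispose());
        return tcs.Task;
    }
}
```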
The method runs on the current synchronization context and uses time on the thread only when the method is active. You can use Task.Run to move CPU-bound work to a background thread, but a background thread doesn't help with a process that's just waiting for results to become available.
When you have one CPU and multiple threads in your application, your CPU switches between threads to simulate parallel processing. With async/await your async operation doesn't need thread time, so you give the other threads of your application more time to do their job. For instance, your (non-UI) application can still make HTTP calls, and all you need to do is wait for the response. This is one of the cases where the benefit of using async/await is big.
When you call an async DoJobAsync(), don't forget to use .ConfigureAwait(false) to get better performance in non-UI apps that don't need to merge back to the UI thread's context.
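A rough sketch of that advice in non-UI/library-style code (DoJobAsync and the class name are made up for the example; HttpClient.GetStringAsync is the real API used for the waiting part):

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public static class JobRunner
{
    private static readonly HttpClient Client = new HttpClient();

    // Library-style code that never needs to resume on a UI context,
    // so the await opts out of capturing it.
    public static async Task<int> DoJobAsync(string url)
    {
        string body = await Client.GetStringAsync(url).ConfigureAwait(false);
        return body.Length;
    }
}
```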
I won't even mention the nice syntax that helps a lot to keep your code clean.
Source: MSDN.