Question
Scala Execution Context and Dispatchers - Listing and comparison: Why?
There are a lot of questions around what the best ExecutionContext is to execute Futures on in Scala, and how to configure the dispatcher.
Still, I was never able to find a longer list with pros, cons, and configuration examples.
The best I could find was in the Akka documentation: http://doc.akka.io/docs/akka/snapshot/scala/dispatchers.html and the Play documentation: https://www.playframework.com/documentation/2.5.x/ThreadPools.
I would like to ask which configurations, besides scala.concurrent.ExecutionContext.Implicits.global and the Akka defaults, you use in your daily dev lives, when you use them, and what their pros and cons are.
Here are some of the ones I already have:
A first, unfinished overview:
Standard: scala.concurrent.ExecutionContext.Implicits.global
- use when unsure
- easy to use
- shared for everything
- may use up all your CPU
- more info: http://www.scala-lang.org/api/2.11.5/index.html#scala.concurrent.ExecutionContext (usage sketch below)
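A minimal usage sketch for the global context, assuming nothing beyond the standard library:

    import scala.concurrent.Future
    import scala.concurrent.ExecutionContext.Implicits.global

    // The Future picks up the implicit global context, a fork-join pool
    // sized to the number of available CPU cores.
    val sum: Future[Int] = Future {
      (1 to 100).sum
    }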
Testing: ExecutionContext.fromExecutor(new ForkJoinPool(1))
- use for testing
- no parallelism (sketch below)
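A minimal sketch of such a single-threaded test context (the value name testEc is only for illustration):

    import java.util.concurrent.ForkJoinPool
    import scala.concurrent.{ExecutionContext, Future}

    // One thread means futures run one after another, which makes
    // ordering in tests easier to reason about.
    implicit val testEc: ExecutionContext =
      ExecutionContext.fromExecutor(new ForkJoinPool(1))

    val f: Future[String] = Future { "runs on the single test thread" }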
Play's default EC: play.api.libs.concurrent.Execution.Implicits._
- use instead of scala.concurrent.ExecutionContext.Implicits.global when using Play
- Play default
- shared
- more info: https://www.playframework.com/documentation/2.5.x/ThreadPools (import sketch below)
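A sketch of how that import is typically used in a Play 2.5 controller or service; check the linked docs for your Play version:

    import play.api.libs.concurrent.Execution.Implicits.defaultContext
    import scala.concurrent.Future

    // Runs on Play's default thread pool (Akka's default dispatcher
    // under the hood) instead of the Scala global context.
    def slowCount(): Future[Int] = Future {
      (1 to 1000000).sum
    }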
Akka's default execution context
- based on configuration
- more info: http://doc.akka.io/docs/akka/snapshot/scala/dispatchers.html (configuration sketch below)
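A sketch of tuning Akka's default dispatcher and using it as an ExecutionContext; the parallelism numbers are placeholders, and in a real project the config would normally live in application.conf rather than a parseString call:

    import akka.actor.ActorSystem
    import com.typesafe.config.ConfigFactory
    import scala.concurrent.{ExecutionContext, Future}

    // Placeholder settings; normally kept in application.conf.
    val tuning = ConfigFactory.parseString(
      """
      akka.actor.default-dispatcher {
        executor = "fork-join-executor"
        fork-join-executor {
          parallelism-min = 2
          parallelism-factor = 2.0
          parallelism-max = 16
        }
      }
      """)

    val system = ActorSystem("example", tuning.withFallback(ConfigFactory.load()))

    // The system's dispatcher doubles as an ExecutionContext for Futures.
    implicit val ec: ExecutionContext = system.dispatcher
    val answer: Future[Int] = Future { 40 + 2 }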
Bulkheading
- ExecutionContext.fromExecutor(new ForkJoinPool(n)) backed by a separate dispatcher (sketch below). Thanks to Sergiy Prydatchenko.
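A sketch of bulkheading with a dedicated pool; the pool size of 10 is an assumed value, not a recommendation:

    import java.util.concurrent.ForkJoinPool
    import scala.concurrent.{ExecutionContext, Future}

    // A separate pool for the blocking feature so it cannot starve
    // the default context.
    val blockingEc: ExecutionContext =
      ExecutionContext.fromExecutor(new ForkJoinPool(10))

    def blockingCall(): Future[Int] = Future {
      Thread.sleep(100) // stand-in for a blocking API call
      42
    }(blockingEc)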
Answer 1:
Ideally, with only non-blocking code, you would just use the framework's execution context: Play Framework's or Akka's.
But sometimes you have to use blocking APIs. In one Play Framework and JDBC project, we followed their recommendation [1], set the execution context to have 100 threads, and just used the default everywhere. That system was very fast for its usage and needs.
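A hedged sketch of that kind of setup, based on the "highly synchronous" profile in the Play docs linked below; the fixed pool size of 100 mirrors the answer, and in a Play app this would normally go into application.conf rather than a parseString call:

    import akka.actor.ActorSystem
    import com.typesafe.config.ConfigFactory

    // Large fixed pool on the default dispatcher so blocking JDBC calls
    // do not starve it; the values are illustrative.
    val highlySynchronous = ConfigFactory.parseString(
      """
      akka.actor.default-dispatcher {
        executor = "thread-pool-executor"
        throughput = 1
        thread-pool-executor {
          fixed-pool-size = 100
        }
      }
      """)

    val system = ActorSystem("playLike", highlySynchronous.withFallback(ConfigFactory.load()))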
In a different Akka project, where we had a mix of blocking and non-blocking code, we had separate dispatchers configured for the different features, like "blocking-dispatcher", "important-feature-dispatcher", and "default-dispatcher". This performed fine, but was more complex than having one dispatcher: we had to know/guess/monitor how much each needed. We load-tested it and found that with 1 thread it was too slow; with 5 threads it was better, but after 10 threads it didn't get any faster. So we left it at 10 threads. Eventually we refactored away our blocking code and moved everything to the default.
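A sketch of that "one dispatcher per feature" layout; the dispatcher names come from the answer, but the executor types and pool sizes here are illustrative:

    import akka.actor.ActorSystem
    import com.typesafe.config.ConfigFactory
    import scala.concurrent.{ExecutionContext, Future}

    val dispatchers = ConfigFactory.parseString(
      """
      blocking-dispatcher {
        type = Dispatcher
        executor = "thread-pool-executor"
        thread-pool-executor { fixed-pool-size = 10 }
        throughput = 1
      }
      important-feature-dispatcher {
        type = Dispatcher
        executor = "fork-join-executor"
        fork-join-executor { parallelism-max = 4 }
      }
      """)

    val system = ActorSystem("mixed", dispatchers.withFallback(ConfigFactory.load()))

    // Each configured dispatcher can be looked up by name and used
    // directly as an ExecutionContext.
    val blockingEc: ExecutionContext = system.dispatchers.lookup("blocking-dispatcher")
    val featureEc: ExecutionContext  = system.dispatchers.lookup("important-feature-dispatcher")

    def loadRow(): Future[Int] = Future { Thread.sleep(50); 1 }(blockingEc)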
But each use case is different; you need to profile and monitor your system to know what's right for you. If you have all non-blocking code it's easy: it should be 1 thread per CPU core.
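A small sketch of that "one thread per core" sizing for purely non-blocking code:

    import java.util.concurrent.Executors
    import scala.concurrent.ExecutionContext

    // Size the pool to the number of available cores.
    val cores = Runtime.getRuntime.availableProcessors()
    val nonBlockingEc: ExecutionContext =
      ExecutionContext.fromExecutor(Executors.newFixedThreadPool(cores))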
[1] https://www.playframework.com/documentation/2.5.x/ThreadPools#Highly-synchronous
Source: https://stackoverflow.com/questions/34117252/execution-context-and-dispatcher-best-practices-useful-configurations-and-doc