I'm a little confused. From the documentation:

"Play default thread pool - This is the default thread pool in which all application code in Play Framework is executed."
There are a few ways to handle blocking calls. I can't say which is best, as it would most certainly depend on the specific use case and require a ton of benchmarking.
By default, Play handles requests using a thread pool with one thread per CPU core. So if you're running your Play app on a quad-core CPU, for example, it will only be able to handle 4 concurrent requests if they're making blocking calls to the database. So yes, all other incoming requests will have to wait until one of those threads has been freed up.
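To make that concrete, here's a minimal sketch of the kind of blocking action being described (the controller name, table, and play-jdbc usage are assumptions for illustration, not code from the question):

import play.api.mvc._
import play.api.db.DB
import play.api.Play.current

object ReportsController extends Controller {

  // Blocking action: the JDBC query runs on the request's own thread from the
  // default pool, and that thread stays occupied until the query returns.
  def report = Action {
    val total = DB.withConnection { conn =>
      val rs = conn.createStatement().executeQuery("SELECT count(*) FROM events")
      rs.next()
      rs.getLong(1)
    }
    Ok(s"events: $total")
  }
}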
The simplest solution is to increase the number of threads Play uses to process requests in the default thread pool (in application.conf). Setting parallelism-min and parallelism-max to the same value pins the pool at exactly that many threads:
play {
  akka {
    akka.loggers = ["akka.event.slf4j.Slf4jLogger"]
    loglevel = WARNING
    actor {
      default-dispatcher = {
        fork-join-executor {
          parallelism-min = 300
          parallelism-max = 300
        }
      }
    }
  }
}
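Note that this is also the pool behind Play's implicit default ExecutionContext, so any Futures your app already builds with it get the extra headroom as well. A minimal sketch, inside whatever class needs it (the stub method is made up):

import play.api.libs.concurrent.Execution.Implicits.defaultContext
import scala.concurrent.Future

// Stand-in for a real blocking query (hypothetical helper, not from the question).
def someBlockingCallToDb(): Long = ???

// A Future built with Play's implicit default context runs on the default
// dispatcher configured above, which now has up to 300 threads to draw on.
val count: Future[Long] = Future {
  someBlockingCallToDb()
}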
The next option is the one you mention in your question: offloading blocking database calls to a separate ExecutionContext. You can configure a separate thread pool in application.conf like so:
database-io {
  fork-join-executor {
    parallelism-factor = 10.0
  }
}
This will create 10 threads per CPU core in a pool called database-io, which can be accessed within Play like so:
val dbExecutor: ExecutionContext = Akka.system.dispatchers.lookup("database-io")
val something = Future(someBlockingCallToDb())(dbExecutor)
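Inside a controller, the whole thing might look roughly like this (the controller, action, and someBlockingCallToDb helper are hypothetical stand-ins; the map back onto the default context just builds the Result):

import play.api.mvc._
import play.api.libs.concurrent.Akka
import play.api.libs.concurrent.Execution.Implicits.defaultContext
import play.api.Play.current
import scala.concurrent.{ExecutionContext, Future}

object UserController extends Controller {

  // The dedicated pool configured above as "database-io".
  val dbExecutor: ExecutionContext = Akka.system.dispatchers.lookup("database-io")

  // Stand-in for a real blocking JDBC/Anorm query.
  private def someBlockingCallToDb(id: Long): String = ???

  // The blocking work runs on database-io; the request's default-pool thread is
  // released as soon as the Future is created, and the cheap map runs on the
  // default context.
  def user(id: Long) = Action.async {
    Future(someBlockingCallToDb(id))(dbExecutor)
      .map(name => Ok(name))
  }
}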
This allows the default thread pool to keep handling incoming requests while the Future completes on the database-io pool. A third option would be to use an Actor to handle the database calls, but that's more complicated and beyond the scope of this question.
The bottom line is, yes, use a larger thread pool or a separate ExecutionContext for blocking calls, as you never want to block in the default thread pool if you can help it. This is all outlined in the Play documentation on Thread Pools (latest version).