I'm reading the "RESTful Java with JAX-RS 2.0" book. I'm completely confused by asynchronous JAX-RS, so I'm asking all my questions in one go. The book writes the asynchronous server like this:
The throughput of the service improves if different thread pools manage request I/O and request processing. Freeing up the request-I/O thread managed by the container allows it to receive the next request, prepare it for processing, and feed it into the request-processing thread pool when a request-processing thread has been released.
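Concretely, that handoff usually looks something like this in JAX-RS 2.0 (a minimal sketch; the pool size, paths, and method names are my own assumptions, not the book's code):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.container.AsyncResponse;
import javax.ws.rs.container.Suspended;

@Path("/orders")
public class OrderResource {

    // request-processing pool, separate from the container's request-I/O threads
    private final ExecutorService processingPool = Executors.newFixedThreadPool(10);

    @GET
    public void listOrders(@Suspended final AsyncResponse asyncResponse) {
        // the container's request-I/O thread returns as soon as the work is submitted
        processingPool.submit(() -> {
            String result = doExpensiveLookup();   // stand-in for the real work
            asyncResponse.resume(result);          // completes the suspended request
        });
    }

    private String doExpensiveLookup() {
        return "orders...";
    }
}
```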
I share your view expressed in question 1. Let me just add one detail: the web server thread doesn't die; it typically comes from a pool and frees itself up for another web request. But that doesn't really change much in terms of the efficiency of async processing. In those examples, async processing is merely used to pass the processing from one thread pool to another. I don't see any point at all in that.
But there is one use case where I think async makes sense, e.g. when you want to register multiple clients to wait for an event and send a response to all of them once the event occurs. It is described in this article: http://java.dzone.com/articles/whats-new-jax-rs-20
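Sketched very roughly (the names here are made up and error handling is omitted), the idea is:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.container.AsyncResponse;
import javax.ws.rs.container.Suspended;

@Path("/events")
public class EventResource {

    // all clients currently parked and waiting for the next event
    // (static because JAX-RS resources are per-request by default)
    private static final Queue<AsyncResponse> waiters = new ConcurrentLinkedQueue<>();

    @GET
    public void waitForEvent(@Suspended final AsyncResponse asyncResponse) {
        // no thread is blocked while the client waits; we just remember the response
        waiters.add(asyncResponse);
    }

    @POST
    public String fireEvent(String message) {
        // resume every parked client with the same payload
        AsyncResponse waiter;
        while ((waiter = waiters.poll()) != null) {
            waiter.resume(message);
        }
        return "delivered";
    }
}
```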
Executive Summary: You're over-thinking this.
In examples 1 and 2, the web server thread (the one that handles the request) dies and we create another background thread. The whole idea behind an asynchronous server is to reduce idle threads, yet these examples don't reduce idle threads at all: one thread dies and another one is born.
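To make it concrete, here is roughly the pattern I mean (my own sketch, not the book's exact listing):

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.container.AsyncResponse;
import javax.ws.rs.container.Suspended;

@Path("/report")
public class ReportResource {

    @GET
    public void generate(@Suspended final AsyncResponse asyncResponse) {
        // the container's request thread is released here...
        new Thread(() -> {
            // ...but only because a brand-new, unmanaged thread now does the work
            asyncResponse.resume(buildReport());
        }).start();
    }

    private String buildReport() {
        return "report";   // stand-in for the real long-running work
    }
}
```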
Neither is particularly great, to be honest. In a production service, you wouldn't hold the executor in a private field like that; instead you would have it as a separately configured object (e.g., its own Spring bean). On the other hand, such a sophisticated example would be rather harder to understand without a lot more context; applications that consist of systems of beans/managed resources have to be built that way from the ground up. It's also not very important to be that careful about this for small-scale work, and that covers a lot of web applications.
The gripping hand is that recovery from a server restart is actually not something to worry about too much in the first place. If the server restarts, you'll probably lose all the connections anyway, and if those AsyncResponse objects aren't Serializable in some way (there's no guarantee that they are or aren't), you can't store them in a database to enable recovery. Best not to worry about it too much, as there's not much you can do! (Clients are also going to time out after a while if they don't get any response back; you can't hold them indefinitely.)
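If you do park AsyncResponse objects like that, it's worth putting an explicit timeout on them. A minimal sketch (the 30-second figure and the path are just example values I've picked):

```java
import java.util.concurrent.TimeUnit;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.container.AsyncResponse;
import javax.ws.rs.container.Suspended;
import javax.ws.rs.core.Response;

@Path("/wait")
public class WaitingResource {

    @GET
    public void park(@Suspended final AsyncResponse asyncResponse) {
        // don't hold the client forever; fail it cleanly if nothing happens in time
        asyncResponse.setTimeout(30, TimeUnit.SECONDS);
        asyncResponse.setTimeoutHandler(r ->
                r.resume(Response.status(Response.Status.SERVICE_UNAVAILABLE)
                         .entity("timed out waiting for event")
                         .build()));
        // ...then register asyncResponse with whatever will eventually resume it
    }
}
```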
I thought creating unmanaged threads inside a container was a bad idea. We should only use managed threads via the concurrency utilities in Java EE 7.
It's an example! Supply the executor from outside however you want for your fancy production system.
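In a Java EE 7 container, one way to supply it is to inject a managed executor rather than creating threads yourself. A rough sketch (the JNDI name below is the spec's default one; adjust it for your server):

```java
import javax.annotation.Resource;
import javax.enterprise.concurrent.ManagedExecutorService;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.container.AsyncResponse;
import javax.ws.rs.container.Suspended;

@Path("/managed")
public class ManagedResource {

    // container-managed thread pool from the Java EE 7 concurrency utilities
    @Resource(lookup = "java:comp/DefaultManagedExecutorService")
    private ManagedExecutorService executor;

    @GET
    public void get(@Suspended final AsyncResponse asyncResponse) {
        executor.submit(() -> asyncResponse.resume(doWork()));
    }

    private String doWork() {
        return "done";   // stand-in for the real work
    }
}
```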
Again, one of the ideas behind async servers is to scale. Example 3 does not scale, does it?
It's just enqueueing an object on a list, which isn't a slow operation at all, especially compared with the cost of all the networking and deserializing/serializing going on. What it doesn't show is the other part of the application that takes things off that list, performs the processing, and yields the result back; that part could be poorly implemented and cause problems, or it could be done carefully so that the system works well.
If you can do it better in your code, by all means do so. (Just be aware that you can't store the work items in the database, or at least you can't know for sure that you can, even if it happens to be possible. I doubt it, though; there's likely information about the TCP network connection in there, and that's never easy to store and restore fully.)
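For what it's worth, the "take things off that list and yield the result back" side might look roughly like this (entirely my own sketch with made-up names; real processing and error handling are omitted):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.container.AsyncResponse;
import javax.ws.rs.container.Suspended;

@Path("/jobs")
public class JobResource {

    // a work item pairs the payload with the response it must eventually complete
    static final class Job {
        final String payload;
        final AsyncResponse response;
        Job(String payload, AsyncResponse response) {
            this.payload = payload;
            this.response = response;
        }
    }

    private static final BlockingQueue<Job> queue = new LinkedBlockingQueue<>();

    static {
        // a single background worker drains the queue and resumes each response
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    Job job = queue.take();
                    job.response.resume("processed: " + job.payload);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    @POST
    public void submit(String payload, @Suspended final AsyncResponse asyncResponse) {
        // enqueueing is cheap; the request thread returns straight away
        queue.add(new Job(payload, asyncResponse));
    }
}
```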