How to create a non-blocking @RestController web service in Spring?


There are a few solutions, such as working with asynchronous request processing. In that case, the servlet thread becomes free again as soon as the CompletableFuture, DeferredResult, Callable, ... is returned (and not necessarily completed).


For example, let's say we configure Tomcat like this:

server.tomcat.max-threads=5 # Default = 200

And we have the following controller:

@GetMapping("/bar")
public CompletableFuture<String> getSlowBar() {
    return CompletableFuture.supplyAsync(() -> {
        silentSleep(10000L);
        return "Bar";
    });
}

@GetMapping("/baz")
public String getSlowBaz() {
    logger.info("Baz");
    silentSleep(10000L);
    return "Baz";
}
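
The silentSleep(...) helper used above is not shown in the original snippet; a minimal version could look like this (the method name comes from the snippet, the implementation is an assumption):

private static void silentSleep(long millis) {
    try {
        Thread.sleep(millis);
    } catch (InterruptedException e) {
        // Restore the interrupt flag instead of swallowing it
        Thread.currentThread().interrupt();
    }
}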

If we fired 100 requests at once, you would have to wait at least 200 seconds before all the getSlowBaz() calls are handled, since only 5 can be handled at a given time (100 requests / 5 threads = 20 batches of 10 seconds each). With the asynchronous requests, on the other hand, you would have to wait only about 10 seconds, because all 100 requests are likely handled at once, and each thread is freed immediately for others to use.

Is there a difference between CompletableFuture, Callable and DeferredResult? There isn't any difference result-wise; they all behave similarly.

The way you have to handle threading is a bit different though:

  • With Callable, you rely on Spring executing the Callable using a TaskExecutor
  • With DeferredResult, you have to do the thread handling yourself, for example by executing the logic on ForkJoinPool.commonPool() (see the sketch after this list).
  • With CompletableFuture, you can either rely on the default thread pool (ForkJoinPool.commonPool()) or you can specify your own thread pool.
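
For comparison, here is a rough sketch of what the same slow endpoint could look like with the other two return types and with an explicit thread pool. The endpoint paths, method names and the customExecutor bean are illustrative, not part of the original answer; the methods would live in the same controller and need the usual imports (java.util.concurrent.* and Spring's DeferredResult).

// Callable: you return the work itself and Spring runs it on its configured TaskExecutor
@GetMapping("/bar-callable")
public Callable<String> getSlowBarCallable() {
    return () -> {
        silentSleep(10000L);
        return "Bar";
    };
}

// DeferredResult: you schedule the work yourself, here on ForkJoinPool.commonPool()
@GetMapping("/bar-deferred")
public DeferredResult<String> getSlowBarDeferred() {
    DeferredResult<String> result = new DeferredResult<>();
    ForkJoinPool.commonPool().submit(() -> {
        silentSleep(10000L);
        result.setResult("Bar");
    });
    return result;
}

// CompletableFuture with an explicit executor instead of ForkJoinPool.commonPool()
@GetMapping("/bar-custom-pool")
public CompletableFuture<String> getSlowBarCustomPool() {
    return CompletableFuture.supplyAsync(() -> {
        silentSleep(10000L);
        return "Bar";
    }, customExecutor); // customExecutor: an Executor you define, e.g. Executors.newFixedThreadPool(10)
}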

Other than that, CompletableFuture and Callable are part of the Java specification, while DeferredResult is a part of the Spring framework.


Be aware though: even though the threads are released, the connections to the clients are still kept open. This means that with both approaches, the maximum number of requests that can be handled at once is limited by the connection limit (10000 by default), which can be configured with:

server.tomcat.max-connections=100 # Default = 10000

In my opinion, async may be better for the server, but it does not work well for this particular API: the clients also hold their connections open, so eventually it will eat up max-connections. Instead, you can send the request to a message queue (e.g. Kafka) and return a success response to the client right away; a consumer then picks the request up from the queue and passes it to the slow service.
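
If you go the message-queue route, a minimal sketch using spring-kafka could look like the following. The topic name, endpoint path, group id and the listener class are assumptions, not part of the original answer, and spring-kafka with a configured KafkaTemplate is assumed to be on the classpath.

@RestController
public class EnqueueController {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public EnqueueController(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // Return immediately; the slow work happens later, off the HTTP connection
    @PostMapping("/bar")
    public ResponseEntity<String> enqueueBar(@RequestBody String payload) {
        kafkaTemplate.send("slow-bar-requests", payload); // hypothetical topic name
        return ResponseEntity.accepted().body("queued");
    }
}

// Elsewhere, a consumer picks the message up and calls the slow service
@Component
class SlowBarWorker {

    @KafkaListener(topics = "slow-bar-requests", groupId = "slow-bar-worker")
    public void handle(String payload) {
        // call the slow service here; the client already received its response
    }
}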
