Concept of vert.x concerning a webserver?

Question


I don't quite get how Vert.x is applied to a web server.

The concept I know for a web server is the thread-based one:

  1. You start your web server, which then keeps running.
  2. For every client that connects, you get a socket, which is passed to its own handler thread.
  3. The handler thread then processes the tasks for this specific socket.

So it is clearly defined which thread does the work for which socket. However, every socket needs its own thread, which becomes expensive once there are many sockets.
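
As a rough illustration of that model, a minimal thread-per-connection sketch in Kotlin (not part of the original question, just for comparison) could look like this:

import java.net.ServerSocket
import kotlin.concurrent.thread

fun main() {
    val server = ServerSocket(1234)                // the server starts once and keeps running
    while (true) {
        val socket = server.accept()               // one socket per connected client
        thread {                                   // one dedicated thread per socket
            socket.getInputStream().bufferedReader().readLine()
            socket.getOutputStream().write("HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nOK".toByteArray())
            socket.close()
        }
    }
}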

Then there is the event-based concept that Vert.x supplies. As far as I understand, it works roughly like this:

  1. The Vertx instance deploys Verticles.
  2. Verticles run on background threads, but not every Verticle has its own thread. For example, 1000 verticles could be deployed in a Vertx instance while the instance manages only 8 threads (number of cores * 2).
  3. Then there are the event loops. I'm not sure how they relate to verticles. I've read that every verticle has 2 event loops, but I don't really know how that works.

As a web server example:

import io.vertx.core.AbstractVerticle
import io.vertx.core.http.HttpServer
import io.vertx.core.http.HttpServerOptions
import io.vertx.ext.web.Router

class WebServer : AbstractVerticle() {
    lateinit var server: HttpServer

    override fun start() {
        // create the HTTP server for this verticle instance
        server = vertx.createHttpServer(HttpServerOptions().setPort(1234).setHost("localhost"))
        val router = Router.router(vertx)
        router.route("/test").handler { routingContext ->
            val response = routingContext.response()
            response.end("Hello from my first HttpServer")
        }
        server.requestHandler(router).listen()
    }
}

This WebServer can be deployed multiple times in a Vertx instance, and it seems that each WebServer instance gets its own thread. However, when I connect 100 clients and reply with a simple response, the clients appear to be handled sequentially: when I put a Thread.sleep of one second in the server handler, the clients receive their responses one after another, one per second. I would expect all handlers to start their 1-second sleep and then reply to all clients at almost the same time.
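
For reference, the sleep experiment described above corresponds to a handler roughly like this (a sketch; the exact variant is not shown in the original post):

router.route("/test").handler { routingContext ->
    Thread.sleep(1000)  // blocks the event-loop thread, so requests queue up behind each other
    routingContext.response().end("Hello from my first HttpServer")
}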

This is the code to start 100 clients:

import io.vertx.core.AbstractVerticle
import io.vertx.core.Vertx
import io.vertx.core.buffer.Buffer
import io.vertx.ext.web.client.HttpResponse
import io.vertx.ext.web.client.WebClient
import io.vertx.ext.web.client.WebClientOptions

fun main() {
    Vertx.vertx().deployVerticle(object : AbstractVerticle() {
        override fun start() {
            for (i in 0 until 100)      // note: 0 .. 100 would actually start 101 clients
                MyWebClient(vertx)
        }
    })
}

class MyWebClient(val vertx: Vertx) {
    init {
        println("Client starting ...")
        val webClient = WebClient.create(vertx, WebClientOptions().setDefaultPort(1234).setDefaultHost("localhost"))
        webClient.get("/test").send { ar ->
            if (ar.succeeded()) {
                val response: HttpResponse<Buffer> = ar.result()
                println("Received response with status code ${response.statusCode()} + ${response.body()}")
            } else {
                println("Something went wrong " + ar.cause().message)
            }
        }
    }
}

Does anybody have an explanation for this?


Answer 1:


There are some major issues there.

When you do this:

class WebServer: AbstractVerticle() {
    lateinit var server: HttpServer

    override fun start() {
        server = vertx.createHttpServer(HttpServerOptions().setPort(1234).setHost("localhost"))
       ...
    }
}

Then something like this:

vertx.deployVerticle(WebServer::class.java.name, DeploymentOptions().setInstances(4))

You'll get 4 verticles, but only a single one of them will actually listen on the port. So you're not getting any more concurrency.

Second, when you use Thread.sleep in your Vert.x code, you're blocking the event loop thread.
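
If the goal is just to delay a response for testing, a non-blocking alternative (a sketch, not part of the original answer) is a Vert.x timer inside the route handler:

router.route("/test").handler { routingContext ->
    vertx.setTimer(1000) {                                 // fires after 1 second without blocking the event loop
        routingContext.response().end("Hello after 1 second")
    }
}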

Third, your client test is flawed. Creating a WebClient is quite expensive, so by creating them one after the other you're actually issuing requests very slowly. If you really want to load-test your web application, use something like https://github.com/wg/wrk
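
For example, a command along the lines of wrk -t4 -c100 -d10s http://localhost:1234/test keeps 100 connections open across 4 threads for 10 seconds (flags shown for illustration; see the wrk README for details).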




Answer 2:


The issue with your code is that, by default, Vert.x uses at most one thread per verticle (and if there are more verticles than available threads, a single thread has to handle multiple verticles).

Therefore, if you perform 100 requests against a single instance of a single verticle, the requests are processed by a single thread.

To solve your issue, you should deploy multiple instances of your verticle, e.g.

vertx.deployVerticle(MainVerticle::class.java.name, DeploymentOptions().setInstances(4))

When you do that, 4 responses will be received at nearly the same time, because 4 instances of the verticle are running and thus 4 event-loop threads are utilized.
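
A simple way to see this (an illustrative sketch, not part of the original answer) is to log the handling thread inside the route handler; with setInstances(4) you should observe up to four different event-loop thread names across requests:

router.route("/test").handler { routingContext ->
    // typically prints names like "vert.x-eventloop-thread-0" ... "vert.x-eventloop-thread-3"
    println("Handled on ${Thread.currentThread().name}")
    routingContext.response().end("Hello from my first HttpServer")
}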

In previous versions of Vert.x, you could also simply configure multi-threading for a worker verticle if you didn't want to set a specific number of instances:

vertx.deployVerticle(MainVerticle::class.java.name, DeploymentOptions().setWorker(true).setMultiThreaded(true))

However, this feature has been deprecated and replaced with custom worker pools.
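
A hedged sketch of that replacement, assuming Vert.x 3.x or 4.x (the pool name and size are illustrative values, not taken from the answer):

vertx.deployVerticle(MainVerticle::class.java.name, DeploymentOptions()
    .setWorker(true)                        // run the verticle on worker threads instead of an event loop
    .setWorkerPoolName("my-worker-pool")    // illustrative: a dedicated, named worker pool
    .setWorkerPoolSize(10))                 // illustrative: up to 10 threads in that pool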

For more information on this topic, I encourage you to take a look at the Vert.x Core Kotlin documentation.



Source: https://stackoverflow.com/questions/56358648/concept-of-vert-x-concerning-a-webserver
