Dart: handle incoming HTTP requests in parallel

情深已故 2020-12-31 01:06

I am trying to write an HTTP server in Dart that can handle multiple requests in parallel. I have been unsuccessful at achieving the "parallel" part thus far.

Her

3 Answers
  • 2020-12-31 01:43

    Even with the current HttpServer limitations it is possible to utilise multiple cores by running multiple server processes behind a reverse proxy server like Apache or Nginx. From within Dart you can also fork child processes to split out compute intensive tasks.
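
    As a rough sketch of the child-process idea (worker.dart and its command-line interface are made up for illustration), you can shell out with Process.run and keep the heavy work outside the server isolate:

    import 'dart:io';

    Future<void> main() async {
      // Run a separate Dart script as a child process and wait for it to finish.
      // 'worker.dart' stands in for whatever compute-intensive job should not
      // block the server's event loop.
      final result = await Process.run('dart', ['worker.dart', '42']);

      if (result.exitCode != 0) {
        stderr.write(result.stderr);
        return;
      }
      print('worker output: ${result.stdout}');
    }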

    A good place to start would be to read about scaling node.js, since it also uses a single-thread-per-process architecture.

    Edit: This answer is now out of date; it is now possible to share requests between isolates, allowing a Dart process to use multiple cores.

    See the docs for the shared argument of ServerSocket.bind:

    "The optional argument shared specify whether additional binds to the same address, port and v6Only combination is possible from the same Dart process. If shared is true and additional binds are performed, then the incoming connections will be distributed between that set of ServerSockets. One way of using this is to have number of isolates between which incoming connections are distributed."

  • 2020-12-31 01:46

    You need to:

    1. Set shared: true in HttpServer.bind
    2. Spawn some Isolates to handle the incoming requests in parallel.

    Here's a minimal Dart server that distributes incoming requests across 6 isolates:

    import 'dart:io';
    import 'dart:isolate';
    
    void main() async {
      // Spawn 5 extra isolates; together with the main isolate below,
      // that makes 6 servers bound to the same port.
      for (var i = 1; i < 6; i++) {
        Isolate.spawn(_startServer, []);
      }
    
      // Bind one server in current Isolate
      _startServer();
    
      print('Serving at http://localhost:8080/');
      await ProcessSignal.sigterm.watch().first;
    }
    
    Future<void> _startServer([List<Object?>? args]) async {
      final server = await HttpServer.bind(
        InternetAddress.loopbackIPv4,
        8080,
        shared: true, // The magic sauce: lets every isolate bind to this same port.
      );
    
      await for (final request in server) {
        _handleRequest(request);
      }
    }
    
    void _handleRequest(HttpRequest request) async {
      // Fake delay
      await Future.delayed(const Duration(seconds: 2));
    
      request.response.write('Hello, world!');
      await request.response.close();
    }
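
    To sanity-check it, fire a handful of requests at once (a throwaway client, assuming the server above is running on localhost:8080); with the 2-second fake delay they should all come back after roughly 2 seconds instead of completing one after another:

    import 'dart:io';

    Future<void> main() async {
      final client = HttpClient();
      final stopwatch = Stopwatch()..start();

      // Issue 6 requests concurrently and wait for all of them to complete.
      await Future.wait(List.generate(6, (_) async {
        final request = await client.getUrl(Uri.parse('http://localhost:8080/'));
        final response = await request.close();
        await response.drain<void>();
      }));

      print('6 requests finished in ${stopwatch.elapsed}');
      client.close();
    }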
    
  • 2020-12-31 01:59

    I wrote a library called dart-isoserver to do this a while back. It's severely bit-rotted now, but you can see the approach.

    https://code.google.com/p/dart-isoserver/

    What I did was proxy HttpRequest and HttpResponse via isolate ports, since you cannot send them directly. It worked, though there were a few caveats:

    1. I/O on the request and response went through the main isolate, so that part was not parallel. Other work done in the worker isolate didn't block the main isolate, though. What really should happen is that a socket connection should be transferable between isolates.
    2. Exceptions in the isolate would bring down the whole server. spawnFunction() now has an uncaught exception handler parameter, so this is somewhat fixable, but spawnUri() doesn't. dart-isoserver used spawnUri() to implement hot-loading, so that would have to be removed.
    3. Isolates are a little slow to start up, and you probably don't want one per connection for the thousands-of-concurrent-connections use cases that nginx and node.js target. An isolate pool with a work queue (see the sketch after this list) would probably perform better, though it would eliminate the nice property that you could use blocking I/O in a worker.
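
    A rough sketch of that pool idea (not dart-isoserver itself; the IsolatePool name and the toy string-processing job are made up for illustration):

    import 'dart:isolate';

    class IsolatePool {
      final List<SendPort> _workers = [];
      int _next = 0;

      // Spawn a fixed number of worker isolates and collect their job ports.
      static Future<IsolatePool> start(int size) async {
        final pool = IsolatePool();
        for (var i = 0; i < size; i++) {
          final handshake = ReceivePort();
          await Isolate.spawn(_workerMain, handshake.sendPort);
          pool._workers.add(await handshake.first as SendPort);
        }
        return pool;
      }

      // Hand a job to the next worker (round robin) and wait for its reply.
      Future<Object?> run(Object? job) {
        final reply = ReceivePort();
        _workers[_next].send([job, reply.sendPort]);
        _next = (_next + 1) % _workers.length;
        return reply.first;
      }

      static void _workerMain(SendPort handshake) {
        final jobs = ReceivePort();
        handshake.send(jobs.sendPort);
        jobs.listen((message) {
          final job = (message as List)[0];
          final replyTo = message[1] as SendPort;
          // Placeholder for the real compute-intensive work.
          replyTo.send('processed: $job');
        });
      }
    }

    Future<void> main() async {
      final pool = await IsolatePool.start(4);
      final results = await Future.wait(List.generate(8, (i) => pool.run(i)));
      print(results);
    }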

    A note about your first code example. That definitely won't run in parallel, as you noticed, because Dart is single-threaded. No Dart code in the same isolate ever runs concurrently.
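
    For example (a contrived illustration, not your code): two timers scheduled for the same moment still run strictly one after the other within a single isolate:

    import 'dart:async';

    void main() {
      // Both timers are due immediately, but the second callback cannot start
      // until the first one's busy-wait finishes: one isolate, one thread.
      Timer(Duration.zero, () {
        final sw = Stopwatch()..start();
        while (sw.elapsedMilliseconds < 1000) {} // simulate CPU-bound work
        print('first callback finished after ${sw.elapsedMilliseconds} ms');
      });
      Timer(Duration.zero, () {
        print('second callback only runs after the first one returns');
      });
    }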
