How can I create a queue with multiple workers?

生来不讨喜 2021-01-31 06:19

I want to create a queue where clients can put in requests, then server worker threads can pull them out as they have resources available.

I'm exploring how I could do this.

2 Answers
  • 2021-01-31 06:59

    This question is pretty old but in case someone makes it here anyway...

    Since mid-2015, Firebase has offered something called Firebase Queue, a fault-tolerant, multi-worker job pipeline built on Firebase.

    Q: Is this a good design that will integrate well into the upcoming security plans?

    A: Your design suggestion fits perfectly with Firebase Queue.

    Q: How do I get all the servers to listen to the queue, but only one to pick up each request?

    A: Well, that is pretty much what Firebase Queue does for you!

    References:

    • Introducing Firebase Queue (blog entry)
    • Firebase Queue (official GitHub repo)
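
    The worker pattern Firebase Queue provides looks roughly like the sketch below. This is a self-contained, in-memory stand-in for illustration only: the `InMemoryQueue` class here is invented, whereas the real firebase-queue library is constructed with a Firebase ref, e.g. `new Queue(ref, (data, progress, resolve, reject) => { ... })`.

    ```javascript
    // Minimal in-memory stand-in for the firebase-queue worker pattern.
    // Clients push task payloads; a single worker callback handles each one.
    class InMemoryQueue {
      constructor(processTask) {
        this.processTask = processTask; // worker callback, like firebase-queue's processing function
        this.results = [];
      }
      push(data) {
        this.processTask(data, (result) => this.results.push(result));
      }
    }

    const queue = new InMemoryQueue((data, resolve) => {
      // Pretend to do some work, then resolve with a result.
      resolve(`processed:${data.id}`);
    });

    queue.push({ id: 1 });
    queue.push({ id: 2 });
    console.log(queue.results); // → [ 'processed:1', 'processed:2' ]
    ```

    With the real library, the processing function additionally receives `progress` and `reject` callbacks, and the queue handles claiming tasks so that each one is processed by only one worker.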
  • 2021-01-31 07:02

    Wow, great question. This is a usage pattern that we've discussed internally so we'd love to hear about your experience implementing it (support@firebase.com). Here are some thoughts on your questions:

    Authentication

    If your primary goal is actually authentication, just wait for our security features. :-) In particular, we're intending to support auth backed by your own backend server, backed by a Firebase user store, or backed by 3rd-party providers (Facebook, Twitter, etc.).

    Load-balanced Work Queue

    Regardless of auth, there's still an interesting use case for using Firebase as the backbone of a workload-balancing system like you describe. For that, there are a couple of approaches you could take:

    1. As you describe, have a single work queue that all of your servers watch and remove items from. You can accomplish this using transaction() to remove the items. transaction() deals with conflicts so that only one server's transaction will succeed. If one server beats a second server to a work item, the second server can abort its transaction and try again on the next item in the queue. This approach is nice because it scales automatically as you add and remove servers, but there's overhead for each transaction attempt since it has to make a round-trip to the Firebase servers to make sure nobody else has grabbed the item from the queue already. If the time it takes to process a work item is much greater than the time for a round-trip to the Firebase servers, this overhead probably isn't a big deal. But if you have lots of servers (i.e., more contention) and/or lots of small work items, the overhead may be a killer.
    2. Push the load-balancing to the client by having them choose randomly among a number of work queues. (e.g. have /queue/0, /queue/1, /queue/2, /queue/3, and have the client randomly choose one). Then each server can monitor one work queue and own all of the processing. In general, this will have the least overhead, but it doesn't scale as seamlessly when you add/remove servers (you'll probably need to keep a separate list of work queues that servers update when they come online, and then have clients monitor the list so they know how many queues there are to choose from, etc.).
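
    The claiming logic in approach #1 can be sketched as a compare-and-set. The snippet below is a self-contained, in-memory simulation (the `workQueue` store, `tryClaim` helper, and server names are invented for illustration; with the real SDK you'd call ref.transaction() against the queue location in Firebase):

    ```javascript
    // In-memory simulation of approach #1: two servers race to claim each
    // queue item with a transaction-style compare-and-set; only one commits.
    const workQueue = [
      { id: 'a', owner: null },
      { id: 'b', owner: null },
    ];

    // Stand-in for transaction(): atomically claim the item if it is unowned,
    // otherwise abort and report failure.
    function tryClaim(item, server) {
      if (item.owner !== null) return false; // another server got here first
      item.owner = server;
      return true;
    }

    // Both servers attempt every item; exactly one transaction wins per item.
    const outcomes = workQueue.map((item) => ({
      id: item.id,
      aWon: tryClaim(item, 'server-A'),
      bWon: tryClaim(item, 'server-B'),
    }));

    console.log(outcomes);
    // → [ { id: 'a', aWon: true, bWon: false },
    //     { id: 'b', aWon: true, bWon: false } ]
    ```

    In the real system the losing server wouldn't stop: it would abort and retry on the next unclaimed item, which is where the per-attempt round-trip overhead discussed above comes from.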

    Personally, I'd lean toward option #2 if you want optimal performance. But #1 might be easier for prototyping and be fine at least initially.

    In general, your design is definitely on the right track. If you experiment with implementation and run into problems or have suggestions for our API, let us know (support@firebase.com :-)!
