Sending HTTP response after consuming a Kafka topic

野的像风 · 2021-01-07 06:59

I’m currently writing a web application that has a bunch of microservices. I’m currently exploring how to properly communicate between all these services, and I’ve decided to use Kafka as the message broker between them. The part I can’t figure out is how to send an HTTP response back to the client once the relevant Kafka topic has been consumed.

2 Answers
  • 2021-01-07 07:26

    Let's say you're going to create an order. This is how it should work:

    1. Traditionally we used to have an auto-increment field or a sequence in the RDBMS table to create an order id. However, this means the order id is not generated until we save the order in the DB. Now, since we're writing to Kafka first rather than straight to the DB, the DB cannot generate the order id for us. Hence you need a scalable id-generation utility like Twitter Snowflake, or something with a similar architecture, so that you can generate an order id even before writing the order to Kafka

    2. Once you have the order id, write a single event message to a Kafka topic atomically (all-or-nothing). Once this has succeeded, you can send back a success response to the client. Do not write to multiple topics at this stage, as you'd lose atomicity by writing to multiple topics. You can always have multiple consumer groups that forward the event to other topics. One consumer group should write the data to some persistent DB for querying (a producer sketch covering steps 1 and 2 follows this list)

    3. You now need to address read-your-own-writes, i.e. immediately after receiving the success response the user will want to see the order, but your DB is probably not yet updated with the order data. To achieve this, write the order data to a distributed cache like Redis or Memcached immediately after writing it to Kafka and before returning the success response (see the cache-write sketch after this list). When the user reads the order, the cached data is returned

    4. Now you need to keep the cache updated with the latest order status. You can always do that with a Kafka consumer that reads the order status from a Kafka topic

    5. You don't need to keep all orders in cache memory; you can evict data based on LRU. If, while reading an order, the data is not in the cache, it will be read from the DB and written to the cache for future requests (a consumer-plus-cache-aside sketch covering steps 4 and 5 follows this list)

    6. Finally, if you want to ensure that the ordered item is reserved for the order so that no one else can take it, like booking a flight seat or the last copy of a book, you need a consensus algorithm. You can use Apache ZooKeeper for that and create a distributed lock on the item (a lock sketch follows at the end of this list)
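
    A minimal sketch of steps 1 and 2, assuming the official Java kafka-clients library. The topic name "orders", the node id, and the simplified Snowflake-style generator are my own illustrative assumptions, not something prescribed above:

    ```java
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import java.util.Properties;

    public class OrderProducer {
        // Very simplified Snowflake-style id: timestamp bits + node id + per-millisecond
        // sequence. Real implementations also subtract a custom epoch and handle clock
        // drift and sequence overflow.
        static final long NODE_ID = 1L;
        static long lastTimestamp = -1L;
        static long sequence = 0L;

        static synchronized long nextOrderId() {
            long ts = System.currentTimeMillis();
            if (ts == lastTimestamp) {
                sequence = (sequence + 1) & 0xFFF;   // 12-bit sequence within one millisecond
            } else {
                sequence = 0;
                lastTimestamp = ts;
            }
            return (ts << 22) | (NODE_ID << 12) | sequence;
        }

        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                      "org.apache.kafka.common.serialization.LongSerializer");
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                      "org.apache.kafka.common.serialization.StringSerializer");
            props.put(ProducerConfig.ACKS_CONFIG, "all");              // wait for all in-sync replicas
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true); // no duplicates on retry

            try (KafkaProducer<Long, String> producer = new KafkaProducer<>(props)) {
                long orderId = nextOrderId();   // the id exists before anything is persisted
                String orderJson = "{\"orderId\":" + orderId + ",\"item\":\"book\",\"qty\":1}";

                // One event, one topic: the send is acknowledged all-or-nothing per record.
                producer.send(new ProducerRecord<>("orders", orderId, orderJson)).get();

                // Only after the broker acknowledges the write do we answer the client.
                System.out.println("201 Created, orderId=" + orderId);
            }
        }
    }
    ```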
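
    A sketch of step 3, assuming the Jedis client; the key scheme "order:<id>" and the one-hour TTL are assumptions. This runs right after the Kafka send succeeds and before the HTTP response is returned, so the user can immediately read their own order from the cache:

    ```java
    import redis.clients.jedis.Jedis;

    public class OrderCacheWriter {
        // Call this after producer.send(...).get() succeeds and before responding to the client.
        public static void cacheOrder(long orderId, String orderJson) {
            try (Jedis jedis = new Jedis("localhost", 6379)) {
                // Keep the entry for a bounded time; the persistent DB becomes the source of
                // truth once the consumer group has written the order there.
                jedis.setex("order:" + orderId, 3600, orderJson);
            }
        }
    }
    ```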
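
    A sketch of steps 4 and 5, assuming kafka-clients plus Jedis; the "order-status" topic, group id and key scheme are assumptions. One consumer keeps the cache in sync with status events, and reads fall back to the DB on a cache miss (for the LRU eviction mentioned in step 5, Redis itself can be configured with maxmemory-policy allkeys-lru):

    ```java
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import redis.clients.jedis.Jedis;
    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;

    public class OrderStatusCacheUpdater {

        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "order-cache-updater");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                      "org.apache.kafka.common.serialization.LongDeserializer");
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                      "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<Long, String> consumer = new KafkaConsumer<>(props);
                 Jedis jedis = new Jedis("localhost", 6379)) {
                consumer.subscribe(Collections.singletonList("order-status"));
                while (true) {
                    ConsumerRecords<Long, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<Long, String> record : records) {
                        // Overwrite the cached order with the latest status event (step 4).
                        jedis.setex("order:" + record.key(), 3600, record.value());
                    }
                }
            }
        }

        // Cache-aside read (step 5): Redis first, DB on a miss, then repopulate the cache.
        static String getOrder(Jedis jedis, long orderId) {
            String cached = jedis.get("order:" + orderId);
            if (cached != null) {
                return cached;
            }
            String fromDb = loadOrderFromDb(orderId);   // hypothetical lookup in the persistent DB
            if (fromDb != null) {
                jedis.setex("order:" + orderId, 3600, fromDb);
            }
            return fromDb;
        }

        static String loadOrderFromDb(long orderId) {
            return null;   // placeholder for the real query against the persistent store
        }
    }
    ```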
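
    A sketch of step 6, assuming Apache Curator on top of ZooKeeper; the connect string and lock path are assumptions. The inter-process mutex guarantees that only one service instance reserves a given item at a time:

    ```java
    import org.apache.curator.framework.CuratorFramework;
    import org.apache.curator.framework.CuratorFrameworkFactory;
    import org.apache.curator.framework.recipes.locks.InterProcessMutex;
    import org.apache.curator.retry.ExponentialBackoffRetry;
    import java.util.concurrent.TimeUnit;

    public class ItemReservation {

        public static boolean reserve(String itemId) throws Exception {
            CuratorFramework client = CuratorFrameworkFactory.newClient(
                    "localhost:2181", new ExponentialBackoffRetry(1000, 3));
            client.start();
            try {
                InterProcessMutex lock = new InterProcessMutex(client, "/locks/items/" + itemId);
                // Wait up to 5 seconds for the lock; give up if another instance holds the item.
                if (!lock.acquire(5, TimeUnit.SECONDS)) {
                    return false;
                }
                try {
                    // Critical section: check availability and mark the item as reserved,
                    // e.g. decrement stock in the DB or publish a "reserved" event.
                    return true;
                } finally {
                    lock.release();
                }
            } finally {
                client.close();
            }
        }
    }
    ```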

  • 2021-01-07 07:31

    Do you have an option to create more endpoints in the gateway?

    I would have the POST endpoint dedicated just to producing the message to the Kafka topic, which the other microservice will consume. The object returned from that endpoint would contain some sort of reference or id that can be used to look up the status of the message.

    Then create another GET endpoint in the gateway where you can retrieve the status of the message using the reference you got when you created it (a minimal sketch of both endpoints follows).
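
    A minimal sketch of this approach, assuming Spring Boot with spring-kafka; the topic names, endpoint paths and the in-memory status map are assumptions (in practice the status would live in a shared store such as Redis or a DB):

    ```java
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.springframework.kafka.annotation.KafkaListener;
    import org.springframework.kafka.core.KafkaTemplate;
    import org.springframework.web.bind.annotation.*;

    import java.util.Map;
    import java.util.UUID;
    import java.util.concurrent.ConcurrentHashMap;

    @RestController
    @RequestMapping("/orders")
    public class OrderGatewayController {

        private final KafkaTemplate<String, String> kafkaTemplate;
        private final Map<String, String> statusByReference = new ConcurrentHashMap<>();

        public OrderGatewayController(KafkaTemplate<String, String> kafkaTemplate) {
            this.kafkaTemplate = kafkaTemplate;
        }

        // POST: produce the message and immediately hand back a reference id.
        @PostMapping
        public Map<String, String> createOrder(@RequestBody String orderJson) {
            String reference = UUID.randomUUID().toString();
            statusByReference.put(reference, "PENDING");
            kafkaTemplate.send("orders", reference, orderJson);
            return Map.of("reference", reference, "status", "PENDING");
        }

        // GET: the client polls the status using the reference from the POST response.
        @GetMapping("/{reference}/status")
        public Map<String, String> getStatus(@PathVariable String reference) {
            return Map.of("reference", reference,
                          "status", statusByReference.getOrDefault(reference, "UNKNOWN"));
        }

        // The consuming microservice reports back on another topic; this listener updates the status.
        @KafkaListener(topics = "order-results", groupId = "gateway")
        public void onResult(ConsumerRecord<String, String> record) {
            statusByReference.put(record.key(), record.value());
        }
    }
    ```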
