Data Consistency Across Microservices

滥情空心 2021-01-30 02:41

While each microservice generally has its own data, certain entities need to be consistent across multiple services.

For such data consistency requirements, what would be a good approach?

6 Answers
  •  梦谈多话
    2021-01-30 03:16

    I think there are 2 main forces at play here:

    • decoupling - that's why you have microservices in the first place and want a shared-nothing approach to data persistence
    • consistency requirement - if I understood correctly you're already fine with eventual consistency

    The diagram makes perfect sense to me, but I don't know of any framework to do it out of the box, probably due to the many use-case specific trade-offs involved. I'd approach the problem as follows:

    The upstream service emits events onto the message bus, as you've shown. For serialisation I'd carefully choose a wire format that doesn't couple the producer and consumer too tightly; the ones I know of are protobuf and Avro. You can then evolve your event model upstream without having to change the downstream, as long as it doesn't care about the newly added fields, and do a rolling upgrade if it does.
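To make the "tolerant reader" idea above concrete, here is a minimal sketch in Python. It uses plain JSON only to stay self-contained; protobuf and Avro give you this unknown-field tolerance at the wire level, and all field names here are illustrative assumptions, not from the original post.

```python
# Sketch of a tolerant downstream reader: it extracts only the fields it
# needs and ignores anything the upstream adds later, so the producer can
# evolve its event model without breaking this consumer. Plain JSON is
# used purely to keep the example runnable; protobuf/Avro do this natively.
import json

REQUIRED_FIELDS = ("entity_id", "version")  # illustrative field names

def decode_event(raw: bytes) -> dict:
    event = json.loads(raw)
    missing = [f for f in REQUIRED_FIELDS if f not in event]
    if missing:
        raise ValueError(f"event missing required fields: {missing}")
    # Keep only what this consumer understands; newly added upstream
    # fields are silently dropped.
    return {f: event[f] for f in REQUIRED_FIELDS}

# A v2 producer added "email" -- the v1 consumer still works unchanged.
raw = json.dumps({"entity_id": "42", "version": 7, "email": "a@b.c"}).encode()
print(decode_event(raw))  # {'entity_id': '42', 'version': 7}
```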

    The downstream services subscribe to the events, and the message bus must provide fault tolerance. We're using Kafka for this, but since you chose AMQP I'm assuming it gives you what you need.
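One consequence of a fault-tolerant bus is at-least-once delivery: after a failure, Kafka or an AMQP broker may redeliver an event the consumer already processed. A minimal sketch of an idempotent consumer, assuming an in-memory store (a real service would persist the seen-ids alongside the data, ideally in the same transaction):

```python
# Idempotent consumer sketch for at-least-once delivery: the handler
# tracks which event ids it has already applied, so a redelivered event
# is acknowledged but not applied twice. All names are illustrative.
class EntityReplica:
    def __init__(self):
        self.state = {}   # entity_id -> latest known payload
        self.seen = set() # event ids already applied

    def handle(self, event_id: str, entity_id: str, payload: dict) -> bool:
        if event_id in self.seen:
            return False  # duplicate delivery: safe to ack and skip
        self.state[entity_id] = payload
        self.seen.add(event_id)
        return True

replica = EntityReplica()
replica.handle("evt-1", "42", {"name": "alice"})  # applied
replica.handle("evt-1", "42", {"name": "alice"})  # redelivery, skipped
```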

    In case of network failures (e.g. the downstream consumer cannot connect to the broker), if you favour (eventual) consistency over availability, you may choose to refuse to serve requests that rely on data you know may be staler than some preconfigured threshold.
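That staleness guard can be sketched as follows, assuming the replica tracks when it last heard from the broker (via an event or heartbeat); the class name and threshold value are hypothetical, chosen just for illustration.

```python
# Staleness guard sketch: reads are refused once the time since the last
# broker contact exceeds a configured threshold, trading availability for
# bounded staleness. A real service would update last_contact from broker
# heartbeats as well as events, since "no events" can also mean "no changes".
import time

STALENESS_THRESHOLD_S = 30.0  # example value; tune per use case

class GuardedReadModel:
    def __init__(self):
        self.state = {}
        self.last_contact = time.monotonic()

    def on_event(self, entity_id, payload):
        self.state[entity_id] = payload
        self.last_contact = time.monotonic()

    def read(self, entity_id, now=None):
        # 'now' is injectable for testing; defaults to the real clock.
        now = time.monotonic() if now is None else now
        if now - self.last_contact > STALENESS_THRESHOLD_S:
            raise RuntimeError("replica may be staler than threshold; refusing to serve")
        return self.state.get(entity_id)
```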
