While each microservice will generally have its own data, certain entities need to be consistent across multiple services. For such data consistency requirements, what are the design options?
I think there are 2 main forces at play here: decoupling, which is why you have microservices in the first place, and consistency, which in practice you usually have to relax to eventual consistency.
The diagram makes perfect sense to me, but I don't know of any framework that does it out of the box, probably because of the many use-case-specific trade-offs involved. I'd approach the problem as follows:
The upstream service emits events onto the message bus, as you've shown. For serialisation I'd carefully choose a wire format that doesn't couple the producer and consumer too tightly; the ones I know of are Protobuf and Avro. You can then evolve your event model upstream without changing downstream services that don't care about the newly added fields, and do a rolling upgrade of the ones that do.
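To make the evolution point concrete, here's a minimal sketch using Avro via the fastavro library; the CustomerUpdated event, its fields, and the schema versions are all made up for illustration:

```python
import io
import fastavro

# v1 of a hypothetical CustomerUpdated event.
SCHEMA_V1 = {
    "type": "record",
    "name": "CustomerUpdated",
    "fields": [
        {"name": "customer_id", "type": "string"},
        {"name": "email", "type": "string"},
    ],
}

# v2 adds a field with a default, which keeps it compatible: old
# readers never see it, and upgraded readers get the default when
# decoding old messages.
SCHEMA_V2 = {
    "type": "record",
    "name": "CustomerUpdated",
    "fields": [
        {"name": "customer_id", "type": "string"},
        {"name": "email", "type": "string"},
        {"name": "loyalty_tier", "type": "string", "default": "none"},
    ],
}

# Upstream producer serialises with the new schema.
buf = io.BytesIO()
fastavro.schemaless_writer(
    buf,
    SCHEMA_V2,
    {"customer_id": "c-42", "email": "jane@example.com", "loyalty_tier": "gold"},
)

# A not-yet-upgraded consumer resolves the message against its own v1
# reader schema (the writer schema would normally come from a schema
# registry); the field it doesn't know about is simply dropped.
buf.seek(0)
event = fastavro.schemaless_reader(buf, SCHEMA_V2, SCHEMA_V1)
print(event)  # {'customer_id': 'c-42', 'email': 'jane@example.com'}
```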
The downstream services subscribe to the events; the message bus must provide fault tolerance. We're using Kafka for this, but since you chose AMQP I'm assuming it gives you what you need.
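As a sketch of what the fault-tolerant consumer side can look like over AMQP (here RabbitMQ via the pika client; the queue, exchange, and routing key names are assumptions): the queue is durable and messages are only acknowledged after they've been applied, so a consumer crash causes redelivery rather than data loss.

```python
import pika

def apply_event(body: bytes) -> None:
    # Hypothetical handler: deserialise and update the local read model.
    # It must be idempotent, since events may be redelivered after a crash.
    ...

def on_event(channel, method, properties, body):
    apply_event(body)
    # Ack only after the event is applied; if we crash before this
    # point, the broker redelivers the message to another consumer.
    channel.basic_ack(delivery_tag=method.delivery_tag)

connection = pika.BlockingConnection(pika.ConnectionParameters("rabbitmq"))
channel = connection.channel()
# Durable exchange and queue so events survive a broker restart.
channel.exchange_declare(exchange="events", exchange_type="topic", durable=True)
channel.queue_declare(queue="customer-events", durable=True)
channel.queue_bind(queue="customer-events", exchange="events",
                   routing_key="customer.*")
channel.basic_qos(prefetch_count=10)  # bound the unacked in-flight messages
channel.basic_consume(queue="customer-events", on_message_callback=on_event)
channel.start_consuming()
```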
In case of network failures (e.g. the downstream consumer cannot connect to the broker), if you favour (eventual) consistency over availability you may choose to refuse to serve requests that rely on data you know may be more stale than some preconfigured threshold.
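One way to implement that choice, as a sketch (the threshold value, clock source, and read-model shape are all assumptions): the consumer loop records when it last applied an event, and the request path rejects reads once the local copy may be too stale.

```python
import time

STALENESS_THRESHOLD_S = 30.0  # assumed SLA for how stale served data may be

class StaleDataError(Exception):
    """Raised when the local copy may violate the staleness threshold."""

read_model: dict[str, dict] = {}          # local copy fed by the event stream
last_event_applied_at = time.monotonic()  # heartbeat from the consumer loop

def on_event_applied() -> None:
    # Called by the consumer loop after each successfully applied event.
    # A quiet-but-healthy topic would also need broker heartbeats,
    # otherwise it looks identical to a dead connection.
    global last_event_applied_at
    last_event_applied_at = time.monotonic()

def get_customer(customer_id: str) -> dict:
    # Favouring consistency over availability: refuse the read when the
    # local data may be more stale than the preconfigured threshold.
    if time.monotonic() - last_event_applied_at > STALENESS_THRESHOLD_S:
        raise StaleDataError("local copy may be stale; failing the request")
    return read_model[customer_id]
```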