Question
TLDR: Can't seem to pass messages from one RabbitMQ VHost to another RabbitMQ VHost.
I'm having an issue with Spring Cloud Dataflow where, despite specifying different RabbitMQ VHosts for the source and sink, messages never reach the destination exchange.
My dataflow stream looks like this: RabbitMQ Source | CustomProcessor | RabbitMQ Sink
RabbitMQ Source reads from a queue on vHostA and RabbitMQ Sink should output to ExchangeBlah on vHostB.
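For reference, the stream definition looks something like this (the stream name, app labels, and queue/exchange names below are illustrative rather than my exact definition; rabbit.queues and rabbit.exchange are the standard app-starter properties):

stream create rabbitTest --definition "rabbitSource: rabbit --spring.rabbitmq.virtual-host=vHostA --rabbit.queues=inputQueue | processor: custom-processor | rabbitSink: rabbit --spring.rabbitmq.virtual-host=vHostB --rabbit.exchange=ExchangeBlah"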
However, no messages end up on ExchangeBlah on vHostB, and I get errors in the RabbitMQ Sink log saying:
Channel shutdown: channel error; protocol method: 'method(reply-code=404, reply-text=NOT_FOUND - no exchange 'ExchangeBlah' in vhost 'vHostA', class-id=60, method-id=40)
I've got a feeling that this might be related to the Spring property
spring.cloud.dataflow.applicationProperties.stream.spring.rabbitmq.virtual-host=vhostA
As Dataflow uses queues for communication between the different stages of the stream, if I don't specify this setting, the RabbitMQ source and sink communication queues are created on the VHosts specified in their respective configs; however, no communication queue is created for the CustomProcessor, so data gets stuck in the source's communication queue.
Also, I know that a Shovel could feasibly work around this, but if the RabbitMQ sink gives you the option of outputting to a different VHost, then it feels like it should just work.
All that being said, it may well be a bug in the Rabbit stream source/sink apps.
UPDATE: Looking at the stream definition (once the stream has been deployed), the spring.rabbitmq.virtual-host switch is defined twice: once with vHostB, which is set against the sink, and then later with vHostA from the Spring application property.
Removing the virtual-host application property and instead explicitly setting spring.rabbitmq.virtual-host, host, username, and password on the processor (as well as on the RabbitMQ source and sink) gets data as far as the processor's communication queue, but as the RabbitMQ sink is set to a different VHost, it doesn't seem to get any further.
In this scenario, the communication queues created between the various stages of the stream all live on the VHost the source is reading from (vHostA). As we can only give the spring.rabbitmq.virtual-host setting to each app once, the sink doesn't know to look at the communication queues in order to pass that data on to its destination exchange on vHostB.
It's almost as if there are switches missing on the RabbitMQ source and sink. Or am I missing an overall setting that defines the VHost where the communication queues should reside, without overriding the source and destination VHosts on the RabbitMQ source and sink?
Answer 1:
Please note that SCDF doesn't directly communicate with RabbitMQ. SCDF attempts to automate the creation of Spring Cloud Stream "env-vars" based on well-defined naming conventions derived from the stream+app names.
It is the apps themselves that independently connect to RabbitMQ to publish/subscribe to exchanges. As long as the right "env-vars" land as properties in the apps when they bootstrap, they should be able to connect as per the configuration.
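For illustration, assuming a stream named rabbitTest with hypothetical app labels rabbitSource, processor, and rabbitSink, the generated binding properties would look roughly like this:

spring.cloud.stream.bindings.output.destination=rabbitTest.rabbitSource  # set on the source
spring.cloud.stream.bindings.input.destination=rabbitTest.rabbitSource   # set on the processor
spring.cloud.stream.bindings.output.destination=rabbitTest.processor     # set on the processor
spring.cloud.stream.bindings.input.destination=rabbitTest.processor      # set on the sink

These destinations back the "communication queues" between the stages, and each is declared over whatever connection (and therefore virtual-host) that app's binder ends up with.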
You pointed out the spring.cloud.dataflow.applicationProperties.stream.spring.rabbitmq.virtual-host=vhostA property. If that is supplied, SCDF attempts to propagate it as the virtual-host to all the stream applications that it deploys to the targeted platform.
In your case, it sounds like you'd want to override the virtual-host at the source and the sink level independently, which you can accomplish by supplying app-specific properties in the stream definition, either in-line or as deployment properties.
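As a sketch of the deployment-property form (the stream name and app labels here are hypothetical):

stream deploy rabbitTest --properties "app.rabbitSource.spring.rabbitmq.virtual-host=vHostA,app.rabbitSink.spring.rabbitmq.virtual-host=vHostB"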
Once you do, you can confirm whether or not they are being taken into account by accessing each app's actuator endpoint; specifically, /configprops would be useful.
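For example (the host and port are whatever your deployed app is listening on; on Spring Boot 2.x the endpoint sits under /actuator, on 1.x it is at the root):

curl http://localhost:8080/actuator/configprops

Look for the spring.rabbitmq group in the output to verify the effective virtual-host.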
Source: https://stackoverflow.com/questions/56223529/spring-dataflow-move-messages-from-one-rabbit-vhost-to-another