spring-cloud-stream

Spring Cloud Stream (Kafka) parameterize specified error channel {destination}.{group}.errors

好久不见. Posted on 2021-02-20 03:49:43
Question: I am trying to see whether the error channel I pass to @ServiceActivator can be bound/parameterized by referring to the values specified in YAML, instead of hardcoding the actual destination and consumer group in the code itself:

```java
@ServiceActivator(
        // I do not want to hardcode destination and consumer group here
        inputChannel = "stream-test-topic.my-consumer-group.errors"
)
public void handleError(ErrorMessage errorMessage) {
    // Getting exception objects
    Throwable errorMessagePayload = errorMessage.getPayload();
}
```
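A property placeholder can avoid the hardcoding: Spring Integration resolves ${...} placeholders in the inputChannel attribute, so the channel name can be assembled from the same YAML values that define the binding. A minimal sketch, assuming a binding named input:

```java
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.messaging.support.ErrorMessage;
import org.springframework.stereotype.Component;

@Component
public class BindingErrorHandler {

    // Channel name built from the binding's own configuration, e.g.:
    //   spring.cloud.stream.bindings.input.destination: stream-test-topic
    //   spring.cloud.stream.bindings.input.group: my-consumer-group
    @ServiceActivator(inputChannel =
            "${spring.cloud.stream.bindings.input.destination}" +
            ".${spring.cloud.stream.bindings.input.group}.errors")
    public void handleError(ErrorMessage errorMessage) {
        Throwable cause = errorMessage.getPayload();
        // log or route the failure here
    }
}
```

Renaming the destination or group in YAML then automatically retargets the error handler as well.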

Spring Boot 2.0.2, interception of Cloud Stream annotations with AOP not working anymore

断了今生、忘了曾经. Posted on 2021-02-19 10:00:33
Question: I tried to keep the title as explicit and simple as possible. Basically, I need to intercept the usage of Cloud Stream's @Input and @Output annotations. This is needed to automatically add a specific ChannelInterceptor to each MessageChannel (the behaviour in the preSend method will differ slightly depending on whether the message was produced or consumed). For example, I declare this advice:

```java
@Around("@annotation(org.springframework.cloud.stream.annotation.Input)")
public Object
```
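If the goal is only to attach a ChannelInterceptor to every bound channel, advising the binding annotations can be sidestepped entirely. A sketch using Spring Integration's @GlobalChannelInterceptor, an alternative technique rather than the poster's AOP approach:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.config.GlobalChannelInterceptor;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.support.ChannelInterceptor;

@Configuration
public class ChannelInterceptorConfig {

    // "*" matches every message channel in the context; narrow the pattern
    // if only the bound input/output channels should be intercepted.
    @Bean
    @GlobalChannelInterceptor(patterns = "*")
    public ChannelInterceptor directionAwareInterceptor() {
        return new ChannelInterceptor() {
            @Override
            public Message<?> preSend(Message<?> message, MessageChannel channel) {
                // Decide produced vs. consumed here, e.g. from the channel name.
                return message;
            }
        };
    }
}
```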

Spring Cloud Stream connection with RabbitMQ

余生长醉. Posted on 2021-02-17 06:59:22
Question: I have a simple Spring Cloud Stream project that I am trying to connect to RabbitMQ. It says it is connected, but it is not working. Did I do something wrong in the code?

Application.properties:

```properties
spring.rabbitmq.host=localhost
spring.rabbitmq.port=5672
spring.rabbitmq.username=guest
spring.rabbitmq.password=guest
spring.cloud.stream.bindings.greetingChannel.destination=greetings
server.port=8080
```

HelloBinding interface:

```java
package com.gateway.cloudstreamproducerrabbitmq;

import org.springframework.cloud
```
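For comparison, a minimal working producer with the same binding name; a sketch in which everything beyond the interface name and the properties quoted above is assumed:

```java
package com.gateway.cloudstreamproducerrabbitmq;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.support.MessageBuilder;

interface HelloBinding {

    // Binding name referenced by spring.cloud.stream.bindings.greetingChannel.destination
    @Output("greetingChannel")
    MessageChannel greeting();
}

@SpringBootApplication
@EnableBinding(HelloBinding.class) // without this, the channel is never bound
public class CloudStreamProducerRabbitmqApplication {

    public static void main(String[] args) {
        SpringApplication.run(CloudStreamProducerRabbitmqApplication.class, args)
                .getBean(HelloBinding.class)
                .greeting()
                .send(MessageBuilder.withPayload("Hello, RabbitMQ!").build());
    }
}
```

A common cause of "connected but nothing happens" is a missing @EnableBinding, in which case the connection to RabbitMQ succeeds but no exchange is ever declared or written to.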

Spring Cloud Data Flow on Kubernetes: error on deploy

江枫思渺然. Posted on 2021-02-11 15:48:05
Question: I am working on a Spring Cloud Data Flow stream app. I am able to run the Spring Cloud Data Flow server with Skipper running in Cloud Foundry. Now I am trying to run the same with Skipper running in a Kubernetes cluster, and I get the error below on deployment, even though I am explicitly setting the username in the environment config of the deployment:

```
Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: kubernetes_cluster_url:6443/api/v1/namespaces/pocdev
```
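A failing GET like this usually points at credentials or RBAC rather than the deployment manifest itself. A hypothetical standalone fabric8 connectivity check, not part of the post, can confirm whether the token the deployer uses can reach the API server at all:

```java
import io.fabric8.kubernetes.client.Config;
import io.fabric8.kubernetes.client.ConfigBuilder;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;

public class KubernetesConnectivityCheck {

    public static void main(String[] args) {
        // Placeholder URL and token; an in-cluster deployment would normally
        // pick up the mounted service-account token automatically instead.
        Config config = new ConfigBuilder()
                .withMasterUrl("https://kubernetes_cluster_url:6443")
                .withOauthToken(System.getenv("K8S_TOKEN"))
                .build();
        try (KubernetesClient client = new DefaultKubernetesClient(config)) {
            // Roughly the same call the failing GET corresponds to.
            client.namespaces().list().getItems()
                    .forEach(ns -> System.out.println(ns.getMetadata().getName()));
        }
    }
}
```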

Kafka messages getting lost when consumer goes down

人盡茶涼. Posted on 2021-02-11 15:47:23
Question: Hello, I am writing a Kafka consumer-producer using Spring Cloud Stream. Inside my consumer I save data to a database; if the database goes down, I exit the application manually. After restarting the application, if the database is still down, the application stops again. Now if I restart the application a third time, the messages received in the interval between the two failures are lost: the Kafka consumer takes the latest message, and it also skips the message on which
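Losses like this are typically a matter of offsets being committed before the database write succeeds. With the Kafka binder, auto-commit can be disabled per binding (spring.cloud.stream.kafka.bindings.input.consumer.autoCommitOffset=false) and the offset acknowledged only after the save. A sketch, assuming the default Sink binding:

```java
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.messaging.handler.annotation.Header;

@EnableBinding(Sink.class)
public class PersistingConsumer {

    @StreamListener(Sink.INPUT)
    public void consume(String payload,
            @Header(KafkaHeaders.ACKNOWLEDGMENT) Acknowledgment ack) {
        saveToDatabase(payload); // throws while the database is down
        ack.acknowledge();       // offset is committed only after a successful save
    }

    private void saveToDatabase(String payload) {
        // persistence logic lives elsewhere
    }
}
```

With the offset uncommitted, a restart resumes from the last successfully saved message instead of skipping to the latest one.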

Spring Cloud Stream Kafka transaction configuration

不想你离开。 Posted on 2021-02-11 15:02:25
Question: I am following this template for Spring Cloud Stream Kafka but got stuck while making the producer method transactional. I have not used Kafka before, so I need help in case any configuration changes are needed on the Kafka side. It works well with no transactional configuration, but once the transactional configuration is added it times out at startup:

```
2020-11-21 15:07:55.349 ERROR 20432 --- [ main] o.s.c.s.b.k.p.KafkaTopicProvisioner : Failed to obtain partition information
org.apache
```
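For reference, this startup timeout commonly appears on a single-broker development cluster: the binder needs a transaction-id prefix, and the broker's transaction-log defaults (replication factor 3, min ISR 2) cannot be satisfied by one broker, so transaction initialization hangs. A sketch of typical values, not taken from the post:

```properties
# application.properties (binder side)
spring.cloud.stream.kafka.binder.transaction.transaction-id-prefix=tx-

# server.properties (broker side, single-node development only)
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
```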

Injected dependency in Customized KafkaConsumerInterceptor is NULL with Spring Cloud Stream 3.0.9.RELEASE

坚强是说给别人听的谎言. Posted on 2021-02-11 12:57:51
Question: I want to inject a bean into a customized ConsumerInterceptor, since ConsumerConfigCustomizer was added in Spring Cloud Stream 3.0.9.RELEASE. However, the injected bean is always NULL.

Foo (the dependency to be injected into MyConsumerInterceptor):

```java
public class Foo {
    public void foo(String what) {
        System.out.println(what);
    }
}
```

MyConsumerInterceptor (customized KafkaConsumerInterceptor):

```java
public static class MyConsumerInterceptor implements ConsumerInterceptor<String, String> {
    private Foo foo;
```
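Kafka instantiates interceptors reflectively from the class name in the consumer config, so Spring never injects into them; the usual workaround is to register a ConsumerConfigCustomizer bean that puts the Foo bean into the consumer config map under some key (here "foo.bean", a name chosen purely for illustration) and to fetch it back in the interceptor's configure callback. A sketch of the interceptor side:

```java
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerInterceptor;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class MyConsumerInterceptor implements ConsumerInterceptor<String, String> {

    private Foo foo;

    @Override
    public void configure(Map<String, ?> configs) {
        // Kafka creates this object reflectively, so field injection can
        // never happen; recover the bean from the consumer config map that
        // the ConsumerConfigCustomizer populated under "foo.bean".
        this.foo = (Foo) configs.get("foo.bean");
    }

    @Override
    public ConsumerRecords<String, String> onConsume(ConsumerRecords<String, String> records) {
        foo.foo("intercepted " + records.count() + " record(s)");
        return records;
    }

    @Override
    public void onCommit(Map<TopicPartition, OffsetAndMetadata> offsets) {
    }

    @Override
    public void close() {
    }
}
```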

How to run Kafka Streams effectively with a single app instance and single-partition topics?

◇◆丶佛笑我妖孽. Posted on 2021-02-10 20:27:40
Question: Current setup: I am streaming data from 16 single-partition topics, doing KTable-KTable joins, and sending an output with aggregated data from all the streams. I am also materializing each KTable to a local state store.

Scenario: When I tried running two app instances, I expected Kafka Streams to run on a single instance, but for some reason it ran on the other instance too. It looks like it can create stream tasks on the other app instance when Kafka Streams on instance #1 fails due to some
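For context, the topology described resolves to something like the sketch below, where topic names, store names, and type parameters are assumptions. With single-partition input topics each sub-topology yields exactly one task, and Kafka Streams balances those tasks across all live instances of the same application.id, which is why the second instance picked up work:

```java
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;

public class JoinTopology {

    // Two of the sixteen single-partition topics joined as KTables, each
    // materialized to a local state store.
    public static void build(StreamsBuilder builder) {
        KTable<String, String> left = builder.table("topic-a",
                Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as("store-a"));
        KTable<String, String> right = builder.table("topic-b",
                Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as("store-b"));

        left.join(right, (l, r) -> l + "|" + r,
                Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as("joined-store"))
            .toStream()
            .to("aggregated-output");
    }
}
```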