spring-cloud-stream-binder-kafka

Spring Cloud Stream Kafka transaction configuration

不想你离开。 · Submitted on 2021-02-11 15:02:25
Question: I am following this template for Spring Cloud Stream Kafka, but I got stuck while making the producer method transactional. I have not used Kafka before, so I need help in case any configuration changes are needed on the Kafka side. Everything works well when no transactional configuration is added, but once the transactional configuration is in place the application times out at startup:

    2020-11-21 15:07:55.349 ERROR 20432 --- [ main] o.s.c.s.b.k.p.KafkaTopicProvisioner : Failed to obtain partition information
    org.apache
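The binder enables transactional producers once a transaction id prefix is set. A minimal configuration sketch: the transaction-id-prefix entry is the documented binder property, while the commented broker settings are an assumption for a single-broker development cluster, where the transaction state log cannot reach its default replication factor of 3 and provisioning hangs until it times out:

    # documented binder property: turns on transactional producers
    spring.cloud.stream.kafka.binder.transaction.transaction-id-prefix=tx-

    # assumption for a single-broker dev cluster; these are broker-side
    # settings (server.properties), not application properties:
    # transaction.state.log.replication.factor=1
    # transaction.state.log.min.isr=1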

Injected dependency in Customized KafkaConsumerInterceptor is NULL with Spring Cloud Stream 3.0.9.RELEASE

坚强是说给别人听的谎言 · Submitted on 2021-02-11 12:57:51
Question: I want to inject a bean into a customized ConsumerInterceptor, since ConsumerConfigCustomizer was added in Spring Cloud Stream 3.0.9.RELEASE. However, the injected bean is always NULL.

Foo (the dependency to be injected into MyConsumerInterceptor):

    public class Foo {
        public void foo(String what) {
            System.out.println(what);
        }
    }

MyConsumerInterceptor (the customized KafkaConsumerInterceptor):

    public static class MyConsumerInterceptor implements ConsumerInterceptor<String, String> {
        private Foo foo;
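Kafka instantiates interceptor classes reflectively from the interceptor.classes property, so Spring never gets a chance to inject anything into them; the usual workaround is to pass the bean through the consumer configuration map and pick it up in configure(). A minimal sketch, assuming the single-argument configure(Map) form of ConsumerConfigCustomizer and a hypothetical "foo.bean" key:

    @Bean
    public ConsumerConfigCustomizer fooConfigCustomizer(Foo foo) {
        return configs -> {
            // stash the Spring-managed bean under a hypothetical key and
            // register the interceptor class with the Kafka consumer
            configs.put("foo.bean", foo);
            configs.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG,
                    MyConsumerInterceptor.class.getName());
        };
    }

Inside the interceptor, Kafka calls configure(Map) right after construction, which is where the bean can be recovered:

    @Override
    public void configure(Map<String, ?> configs) {
        this.foo = (Foo) configs.get("foo.bean");
    }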

How to make Spring Cloud Stream Kafka Streams binder retry processing a message if a failure occurs during the processing step?

依然范特西╮ · Submitted on 2020-08-10 02:01:07
Question: I am working on Kafka Streams using Spring Cloud Stream. In the message-processing application there is a chance that processing will produce an error, in which case the message should not be committed and should be retried. My application method:

    @Bean
    public Function<KStream<Object, String>, KStream<String, Long>> process() {
        return (input) -> {
            KStream<Object, String> kt = input.flatMapValues(
                    v -> Arrays.asList(v.toUpperCase().split("\\W+")));
            KGroupedStream<String, String> kgt = kt.map((k, v) -> new
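Unlike the message-channel binder, the Kafka Streams binder has no built-in per-record retry for the processing step, so a common approach is to wrap just the failure-prone call in a spring-retry RetryTemplate. A minimal sketch, where riskyTransform is a hypothetical step that may throw:

    import org.springframework.retry.support.RetryTemplate;

    // three attempts, one second apart
    RetryTemplate retryTemplate = RetryTemplate.builder()
            .maxAttempts(3)
            .fixedBackoff(1000)
            .build();

    // inside the topology, retry only the step that can fail
    input.mapValues(v -> retryTemplate.execute(context -> riskyTransform(v)));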

Could not decode json type for key: file_name in a Spring Cloud Data Flow stream

六眼飞鱼酱① · Submitted on 2020-06-16 19:32:52
Question: I use Spring Cloud Data Flow to set up a stream that reads a CSV file, transforms it using a custom processor, and logs it:

    stream create --name testsourcecsv --definition "file --mode=lines --directory=D:/toto/ --file.filename-pattern=adresses-28.csv --maxMessages=1000 | csvToMap --spring.cloud.stream.bindings.output.content-type=application/json | log --spring.cloud.stream.bindings.input.content-type=application/json" --deploy

The file and csvToMap applications work fine, but in the log
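The file source attaches a plain String file_name header, and the Kafka binder's default header mapper tries to decode inbound headers as JSON, which is what produces this message. One workaround sketch: register a custom header mapper in the consuming application that maps the header as a raw String. The bean name kafkaBinderHeaderMapper is what recent binder versions look up by default; treat that as an assumption to verify against your binder version:

    import org.springframework.context.annotation.Bean;
    import org.springframework.kafka.support.DefaultKafkaHeaderMapper;
    import org.springframework.kafka.support.KafkaHeaderMapper;

    @Bean("kafkaBinderHeaderMapper")
    public KafkaHeaderMapper kafkaBinderHeaderMapper() {
        DefaultKafkaHeaderMapper mapper = new DefaultKafkaHeaderMapper();
        // map file_name as a raw String header instead of attempting a JSON decode
        mapper.addRawMappedHeader("file_name", true);
        return mapper;
    }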

Abstracting Spring Cloud Stream Producer and Consumer code

≯℡__Kan透↙ · Submitted on 2020-05-17 05:52:18
Question: I have a service that produces and consumes messages on different Spring Cloud Stream channels (bound to EventHub/Kafka topics). There are several such services, all set up similarly. The configuration looks like this:

    public interface MessageStreams {

        String WORKSPACE = "workspace";
        String UPLOADNOTIFICATION = "uploadnotification";
        String BLOBNOTIFICATION = "blobnotification";
        String INGESTIONSTATUS = "ingestionstatusproducer";

        @Input(WORKSPACE)
        SubscribableChannel
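One way to avoid duplicating the send logic across channels and services is to hide the channel behind a small generic wrapper that each service instantiates per output. A minimal sketch, where EventPublisher is a hypothetical helper, not part of Spring Cloud Stream:

    import org.springframework.messaging.MessageChannel;
    import org.springframework.messaging.support.MessageBuilder;

    // hypothetical reusable wrapper: one instance per output channel
    public class EventPublisher {

        private final MessageChannel channel;

        public EventPublisher(MessageChannel channel) {
            this.channel = channel;
        }

        // wraps any payload in a Message and sends it on the bound channel
        public <T> boolean publish(T payload) {
            return channel.send(MessageBuilder.withPayload(payload).build());
        }
    }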

Caused by: com.fasterxml.jackson.databind.exc.InvalidDefinitionException: No serializer found for class org.springframework.core.convert.support.DefaultConversionService

邮差的信 · Submitted on 2020-05-14 12:08:08
Question: I am working on a Spring Cloud Stream Apache Kafka example, developing the code with reference to https://www.youtube.com/watch?v=YPDzcmqwCNo. I get:

    org.springframework.messaging.MessageDeliveryException: failed to send Message to channel 'pvout'; nested exception is org.springframework.messaging.converter.MessageConversionException: Could not write JSON: No serializer found for class org.springframework.core.convert.support.DefaultConversionService and no properties discovered to create
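Jackson raises "No serializer found ... and no properties discovered" when asked to serialize an object with no accessible getters or fields; here the trace shows a DefaultConversionService, not the intended payload, reached the message converter, which usually means the wrong object was sent to the channel. Whatever goes on the channel should be a plain POJO that Jackson can introspect. A minimal sketch, with PageViewEvent as a hypothetical payload class:

    // hypothetical payload: Jackson needs a default constructor and
    // public getters (or visible fields) to write it as JSON
    public class PageViewEvent {

        private String userId;
        private String page;

        public PageViewEvent() {
        }

        public PageViewEvent(String userId, String page) {
            this.userId = userId;
            this.page = page;
        }

        public String getUserId() {
            return userId;
        }

        public String getPage() {
            return page;
        }
    }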

How to set concurrency (or other configurations) for ConcurrentKafkaListenerContainerFactory per StreamListener

青春壹個敷衍的年華 · Submitted on 2020-04-18 00:49:40
Question: We have a scenario where our application (Spring Boot, spring-cloud-stream based) listens to multiple Kafka topics (TOPIC_A with 3 partitions, TOPIC_B with 1 partition, TOPIC_C with 10 partitions), i.e. three @StreamListener methods:

    @StreamListener(TopicASink.INPUT)
    public void processTopicA(Message<String> msg) {
        logger.info("** received message: {} ", msg.getPayload());
        // do some processing
    }

    @StreamListener(TopicBSink.INPUT)
    public void processTopicB(Message<String> msg) {
        logger.info("**
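Concurrency can be set per binding through the standard consumer property spring.cloud.stream.bindings.<bindingName>.consumer.concurrency, so each listener can get its own value without touching a shared container factory. For settings the properties do not expose, the binder also accepts a ListenerContainerCustomizer bean, invoked once per binding. A minimal sketch with hypothetical destination names:

    import org.springframework.cloud.stream.config.ListenerContainerCustomizer;
    import org.springframework.context.annotation.Bean;
    import org.springframework.kafka.listener.AbstractMessageListenerContainer;
    import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;

    @Bean
    public ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> containerCustomizer() {
        // called once for every binding, so each topic's container can be tuned separately
        return (container, destinationName, group) -> {
            if ("TOPIC_C".equals(destinationName)
                    && container instanceof ConcurrentMessageListenerContainer) {
                ((ConcurrentMessageListenerContainer<?, ?>) container).setConcurrency(10);
            }
        };
    }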

num.stream.threads creating idle threads

那年仲夏 · Submitted on 2020-03-23 23:20:50
Question: I have a Spring Boot Kafka Streams application with two topics, A and B. Topic A has 16 partitions and topic B has 1 partition. The application is deployed as a single instance with num.stream.threads=16. I ran the kafka-consumer-groups.bat command to check how the threads are assigned to the partitions in the group and got the following output: topics A and B were assigned across 16 threads, and 14 of the threads on topic B are idle.

    kafka-consumer-groups.bat --bootstrap-server 0.0.0.0:9092 --group <topic
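This is expected behavior: Kafka Streams creates one task per input partition per sub-topology, and a stream thread with no task assigned simply sits idle, so setting num.stream.threads higher than the task count buys nothing. A sketch of how the setting is typically passed through the Kafka Streams binder (the property path is the binder's documented configuration passthrough; the value should be sized to the actual task count):

    # forwarded to Kafka Streams by the binder; size it to the number of
    # tasks (input partitions per sub-topology), not a fixed 16
    spring.cloud.stream.kafka.streams.binder.configuration.num.stream.threads=16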