Question
I am working on Kafka Streams using Spring Cloud Stream. In the message processing application, there is a chance that processing a message will produce an error. In that case the message should not be committed and should be retried.
My application method -
@Bean
public Function<KStream<Object, String>, KStream<String, Long>> process() {
    return (input) -> {
        KStream<Object, String> kt = input.flatMapValues(v -> Arrays.asList(v.toUpperCase().split("\\W+")));
        KGroupedStream<String, String> kgt = kt.map((k, v) -> new KeyValue<>(v, v))
                .groupByKey(Grouped.with(Serdes.String(), Serdes.String()));
        KTable<Windowed<String>, Long> ktable = kgt.windowedBy(TimeWindows.of(500)).count();
        KStream<String, WordCount> kst = ktable.toStream().map((k, v) -> {
            WordCount wc = new WordCount();
            wc.setWord(k.key());
            wc.setCount(v);
            wc.setStart(new Date(k.window().start()));
            wc.setEnd(new Date(k.window().end()));
            dao.insert(wc);
            return new KeyValue<>(k.key(), wc);
        });
        return kst.map((k, v) -> new KeyValue<>(k, v.getCount()));
    };
}
Here, if the DAO insert method fails, the message should not be published to the output topic and processing of the same message should be retried.
How can we configure the Kafka Streams binder to do this? Any help regarding this is much appreciated.
Answer 1:
The Spring Cloud Stream Kafka Streams binder itself does not provide such a retry mechanism within the execution of your business logic. However, one way to solve this use case may be to wrap your critical call (dao.insert() in this case) in a RetryTemplate that you define locally. Here is a possible implementation that retries 10 times with a fixed backoff of 1 second. If you are trying this solution out, make sure to extract the RetryTemplate-related common code out of your main business logic (a sketch of that extraction follows at the end of this answer). I haven't tried this, but it should work.
KStream<String, WordCount> kst = ktable.toStream().map((k, v) -> {
    WordCount wc = new WordCount();
    ...
    org.springframework.retry.support.RetryTemplate retryTemplate = new RetryTemplate();
    RetryPolicy retryPolicy = new SimpleRetryPolicy(10);         // retry at most 10 times
    FixedBackOffPolicy backOffPolicy = new FixedBackOffPolicy();
    backOffPolicy.setBackOffPeriod(1000);                        // wait 1 second between attempts
    retryTemplate.setBackOffPolicy(backOffPolicy);
    retryTemplate.setRetryPolicy(retryPolicy);
    retryTemplate.execute(context -> {
        try {
            dao.insert(wc);
        }
        catch (Exception e) {
            throw new IllegalStateException(e);
        }
        return null; // RetryCallback must return a value
    });
    return new KeyValue<>(k.key(), wc);
});
Even after retrying the dao insert operation 10 times, if it still fails, the exception will be thrown, which will terminate the application, in which case the offset will not be committed. On restart, after fixing the underlying issue, your application should continue from this offset.
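As for extracting the RetryTemplate-related code out of the business logic, here is a minimal sketch of one way to do it: build the RetryTemplate once as a Spring bean and inject it into the class that declares the process() function, instead of constructing a new one for every record inside the map() lambda. The class name RetryConfig and the bean name daoRetryTemplate are assumptions for illustration, not something from the original question.

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.retry.backoff.FixedBackOffPolicy;
    import org.springframework.retry.policy.SimpleRetryPolicy;
    import org.springframework.retry.support.RetryTemplate;

    @Configuration
    public class RetryConfig {

        // Built once and injected wherever it is needed, e.g. into the class
        // that declares the process() function bean.
        @Bean
        public RetryTemplate daoRetryTemplate() {
            RetryTemplate retryTemplate = new RetryTemplate();

            SimpleRetryPolicy retryPolicy = new SimpleRetryPolicy(10);   // at most 10 attempts
            FixedBackOffPolicy backOffPolicy = new FixedBackOffPolicy();
            backOffPolicy.setBackOffPeriod(1000);                        // 1 second between attempts

            retryTemplate.setRetryPolicy(retryPolicy);
            retryTemplate.setBackOffPolicy(backOffPolicy);
            return retryTemplate;
        }
    }

With such a bean in place, the map() lambda in the processor would only call daoRetryTemplate.execute(...) around dao.insert(wc), exactly as in the snippet above, keeping the retry configuration out of the topology code.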
Source: https://stackoverflow.com/questions/62177705/how-to-make-spring-cloud-stream-kafka-streams-binder-retry-processing-a-message