Question
I am looking to deploy a Spring Boot app in an environment where Kafka can only be run by the app itself. My app will be both a Kafka producer and a consumer. Is there a way to run an in-memory broker instance on startup that can be used for purposes other than testing? Alternatively, is there a way to start a Spring Boot app that will not fail if it cannot connect to Kafka as a producer and consumer?
Edit: this is a temporary solution until we are able to deploy Kafka in this environment. The app does not produce and consume its own records; it is one part of a multi-app deployment where each app both produces to topics consumed by other apps and consumes other apps' topics. I see a lot of information about starting a consumer app when Kafka is not available, but not much with regard to producers. My app will be doing both.
Answer 1:
What would be the purpose of such an application (produce and consume its own records)? The embedded broker is not designed for production use.
Since version 2.3.4, the container property missingTopicsFatal is false by default, which allows the container to start even if the broker is not available. With earlier versions, you can set it to false to get the same effect. When it is true, the container connects to the broker during startup to verify that the topic(s) exist.

You can also set the container's autoStartup to false to prevent the container from starting at all.
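As a minimal sketch of where those two properties live, a listener container factory bean could set them explicitly (this assumes a typical ConsumerFactory<String, String> bean is already defined elsewhere in the context; the bean name and generics are illustrative):

```java
@Bean
ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
        ConsumerFactory<String, String> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // Only needed on versions before 2.3.4, where it defaulted to true.
    factory.getContainerProperties().setMissingTopicsFatal(false);
    // Optional: don't start listener containers at all; start them later
    // via the KafkaListenerEndpointRegistry once the broker is reachable.
    factory.setAutoStartup(false);
    return factory;
}
```

With Spring Boot you can get the same effect from application properties (spring.kafka.listener.missing-topics-fatal=false) instead of a custom factory.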
EDIT
I wouldn't recommend using this in production, but you can remove the test scope from spring-kafka-test and declare the broker as a @Bean:
...
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka-test</artifactId>
    <!-- <scope>test</scope> -->
</dependency>
@Bean
EmbeddedKafkaBroker broker() {
    return new EmbeddedKafkaBroker(1)
            .kafkaPorts(9092)
            .brokerListProperty("spring.kafka.bootstrap-servers"); // override the application property
}
I just tested it with this app:
@SpringBootApplication
public class So63812994Application {

    public static void main(String[] args) {
        SpringApplication.run(So63812994Application.class, args);
    }

    @Bean
    EmbeddedKafkaBroker broker() {
        return new EmbeddedKafkaBroker(1)
                .kafkaPorts(9092)
                .brokerListProperty("spring.kafka.bootstrap-servers");
    }

    @Bean
    public NewTopic topic() {
        return TopicBuilder.name("so63812994").partitions(1).replicas(1).build();
    }

    @KafkaListener(id = "so63812994", topics = "so63812994")
    public void listen(String in) {
        System.out.println(in);
    }

    @Bean
    public ApplicationRunner runner(KafkaTemplate<String, String> template) {
        return args -> template.send("so63812994", "foo");
    }

}
and this application.properties (the bootstrap-servers property is overridden by the embedded broker at runtime):

spring.kafka.bootstrap-servers=realKafka:9092
spring.kafka.consumer.auto-offset-reset=earliest
EDIT2
With the above configuration, other applications on the same host can connect with localhost:9092.
If you need remote access to this embedded broker, you will need some additional configuration:
@Bean
EmbeddedKafkaBroker broker() {
    return new EmbeddedKafkaBroker(1)
            .kafkaPorts(9092)
            .brokerProperty("listeners", "PLAINTEXT://localhost:9092,REMOTE://10.0.0.20:9093")
            .brokerProperty("advertised.listeners", "PLAINTEXT://localhost:9092,REMOTE://10.0.0.20:9093")
            .brokerProperty("listener.security.protocol.map", "PLAINTEXT:PLAINTEXT,REMOTE:PLAINTEXT")
            .brokerListProperty("spring.kafka.bootstrap-servers");
}
You can then connect from other servers with 10.0.0.20:9093.
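For a remote client app, the only change is pointing its producer (or consumer) configuration at the advertised REMOTE listener. A minimal sketch, assuming the 10.0.0.20:9093 address from the example above and a plain String topic (the topic name here is illustrative):

```java
// Producer config for an app on another machine, targeting the
// embedded broker's advertised REMOTE listener.
Map<String, Object> props = new HashMap<>();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "10.0.0.20:9093");
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

KafkaTemplate<String, String> template =
        new KafkaTemplate<>(new DefaultKafkaProducerFactory<>(props));
template.send("so63812994", "hello from a remote app");
```

In a Spring Boot client this reduces to setting spring.kafka.bootstrap-servers=10.0.0.20:9093 in that app's properties.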
Source: https://stackoverflow.com/questions/63812994/how-do-i-implement-in-memory-or-embedded-kafka-not-for-testing-purposes