After integrating Kafka into Spring Boot there is no way to dynamically configure whether Kafka is enabled, so the Kafka support is packaged as a plugin module.
1. Create a child module (longc-plugin-kafka is used as the example); its pom is configured as follows:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<parent>
<artifactId>longc</artifactId>
<groupId>com.longc</groupId>
<version>1.0-SNAPSHOT</version>
</parent>
<modelVersion>4.0.0</modelVersion>
<artifactId>longc-plugin-kafka</artifactId>
<name>longc-plugin-kafka</name>
<url>http://www.example.com</url>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<maven.compiler.source>1.8</maven.compiler.source>
<maven.compiler.target>1.8</maven.compiler.target>
</properties>
<dependencies>
<!-- spring -->
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka</artifactId>
<version>${spring-kafka.version}</version>
</dependency>
<dependency>
<groupId>com.longc</groupId>
<artifactId>longc-core</artifactId>
<version>1.0-SNAPSHOT</version>
</dependency>
</dependencies>
</project>
Here longc-core is the project's Spring Boot base module.
2. Add the Kafka configuration:
spring:
  kafka:
    # Comma-separated list of addresses used to establish the initial connection to the Kafka cluster (Kafka's default port is 9092)
    bootstrap-servers: 127.0.0.1:9092
    producer:
      # Number of times a message is resent after a send error.
      # retries: 0
      # When several messages are headed for the same partition, the producer puts them into the same batch. This parameter sets the amount of memory, in bytes, that one batch may use.
      # batch-size: 16384
      # Size of the producer's memory buffer.
      # buffer-memory: 33554432
      # Key serializer
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      # Value serializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
      # acks=0  : the producer does not wait for any response from the server before considering the message written successfully.
      # acks=1  : the producer receives a success response from the server as soon as the cluster leader has received the message.
      # acks=all: the producer receives a success response only after all nodes participating in replication have received the message.
      # acks: 1
    consumer:
      # Auto-commit interval. In Spring Boot 2.x this value is a Duration and must follow the corresponding format, e.g. 1S, 1M, 2H, 5D
      # auto-commit-interval: 1S
      # What the consumer should do when reading a partition with no committed offset, or when the offset is invalid:
      # latest (default): start from the newest records (those produced after the consumer started)
      # earliest: start reading the partition from the beginning
      # auto-offset-reset: earliest
      # Whether offsets are committed automatically; defaults to true. To avoid duplicates and data loss, set it to false and commit offsets manually.
      # enable-auto-commit: true
      # Key deserializer
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      # Value deserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
    # listener:
      # Number of threads running in the listener container.
      # concurrency: 5
The meaning of each property is documented at https://spring.io/projects/spring-kafka#learn
3. Kafka configuration class (some parameters are never populated by Spring's default auto-configuration, so you can implement your own):
Example:
/**
 * Kafka configuration
 * Created by log.chang on 2019/6/29.
 */
@Configuration
@SuppressWarnings("all")
public class KafkaConfig {
@Value("${spring.kafka.bootstrap-servers}")
private String bootstrapServers;
@Value("${spring.kafka.listener.pool-timeout:15000}")
private String listenerPoolTimeout;
@Value("${spring.kafka.producer.acks:-1}")
private String producerAcks;
@Value("${spring.kafka.producer.batch-size:5}")
private String producerBatchSize;
@Value("${spring.kafka.consumer.group-id:default}")
private String consumerGroupId;
@Value("${consumer.concurrency:10}")
private int consumerConcurrency = 10;
@Value("${spring.kafka.consumer.enable-auto-commit:true}")
private boolean consumerEnableAutoCommit;
@Value("${spring.kafka.consumer.max-poll-records:5}")
private int consumerMaxPollRecords;
    /**
     * =========================== Producer configuration ==========================
     */
    /**
     * Build the producer configuration map; ProducerConfig exposes more configurable properties than Spring Boot's auto-configuration does.
     */
private Map<String, Object> producerProperties() {
Map<String, Object> props = new HashMap<>();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
props.put(ProducerConfig.ACKS_CONFIG, producerAcks);
props.put(ProducerConfig.BATCH_SIZE_CONFIG, producerBatchSize);
//props.put(ProducerConfig.LINGER_MS_CONFIG, 500);
return props;
}
    /**
     * Redefine the DefaultKafkaProducerFactory instead of using the one created by Spring Boot's KafkaAutoConfiguration.
     */
    @Bean("produceFactory")
    public DefaultKafkaProducerFactory<String, String> produceFactory() {
        return new DefaultKafkaProducerFactory<>(producerProperties());
    }
    /**
     * Redefine the KafkaTemplate instead of using the one created by Spring Boot's KafkaAutoConfiguration.
     */
    @Bean("kafkaTemplate")
    public KafkaTemplate<String, String> kafkaTemplate(DefaultKafkaProducerFactory<String, String> produceFactory) {
        return new KafkaTemplate<>(produceFactory);
    }
    /**
     * =========================== Consumer configuration ==========================
     */
    /**
     * Build the consumer configuration map; ConsumerConfig exposes more configurable properties than Spring Boot's auto-configuration does.
     */
private Map<String, Object> consumerProperties() {
Map<String, Object> props = new HashMap<>();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, consumerEnableAutoCommit);
props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, listenerPoolTimeout);
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, consumerMaxPollRecords);
props.put(ConsumerConfig.GROUP_ID_CONFIG, consumerGroupId);
return props;
}
    /**
     * Redefine the DefaultKafkaConsumerFactory instead of using the one created by Spring Boot's default mechanism.
     */
    @Bean("consumerFactory")
    public DefaultKafkaConsumerFactory<String, String> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(consumerProperties());
    }
    /**
     * Customized listener container factory for the consumers.
     */
    @Bean("listenerContainerFactory")
    public ConcurrentKafkaListenerContainerFactory<String, String> listenerContainerFactory(DefaultKafkaConsumerFactory<String, String> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
        // Use the DefaultKafkaConsumerFactory defined above
        factory.setConsumerFactory(consumerFactory);
        // Set the consumer ack mode to manual immediate; adjust as required
        factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
        // Enable batch listening with 3 concurrent consumers; adjust as required
        factory.setConcurrency(3);
        factory.setBatchListener(true);
        return factory;
    }
}
At this point you can already inject kafkaTemplate directly in a project to produce messages and consume them with the @KafkaListener annotation, for example:
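A minimal sketch of that direct usage (the topic name demo-topic and the DemoMessaging class are placeholders for illustration, not part of the plugin):
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@Component
public class DemoMessaging {
    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    // Produce a message through the KafkaTemplate bean defined in KafkaConfig
    public void sendDemo(String key, String payload) {
        kafkaTemplate.send("demo-topic", key, payload);
    }

    // Consume messages from the same topic
    @KafkaListener(topics = "demo-topic")
    public void onMessage(ConsumerRecord<String, String> record) {
        System.out.println("received key=" + record.key() + " value=" + record.value());
    }
}
Here, however, the goal is a reusable, generic utility, so the following wrapper was built: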
Message event:
/**
 * Kafka event data
* Created by log.chang on 2019/7/1.
*/
@Data
public class KafkaEvent<T extends KafkaEventData> {
public KafkaEvent(T data) {
this.data = data;
this.dataClazz = data.getClass().getName();
}
/**
 * Payload class name (used by the consumer to dispatch the message)
*/
private String dataClazz;
/**
 * Kafka message payload
*/
private T data;
}
import java.io.Serializable;
/**
 * Kafka event data base class; all message payloads extend this
* Created by log.chang on 2019/7/1.
*/
public class KafkaEventData implements Serializable {
public KafkaEventData() {
}
}
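For illustration, a concrete payload simply extends KafkaEventData; the com.longc.demo.entity.TestKafkaData that appears in the consumer's sample record further down might look roughly like this (a sketch only, assuming Lombok @Data as in KafkaEvent):
import lombok.Data;

@Data
public class TestKafkaData extends KafkaEventData {
    // Example fields matching the sample record shown in the consumer below
    private String key;
    private String value;
}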
Message handler:
/**
 * Kafka event handler.
 * Used together with KafkaEvent: Kafka messages are sent as a KafkaEvent whose generic parameter is a subclass of KafkaEventData,
 * and the handler whose generic parameter matches that KafkaEventData subclass consumes the corresponding messages.
 * Created by log.chang on 2019/6/29.
 */
public abstract class KafkaEventHandler<T extends KafkaEventData> {
public abstract void handle(String key, T event);
}
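A matching handler extends KafkaEventHandler with the concrete payload type as the generic parameter; a hypothetical handler for the TestKafkaData sketch above (the consumer below instantiates it by reflection, so no Spring annotation is required):
import lombok.extern.slf4j.Slf4j;

@Slf4j
public class TestKafkaDataHandler extends KafkaEventHandler<TestKafkaData> {
    @Override
    public void handle(String key, TestKafkaData event) {
        // Business logic for this payload type goes here
        log.info("handle TestKafkaData key={} value={}", event.getKey(), event.getValue());
    }
}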
Producer:
@Component
@Slf4j
public class KafkaProducer {
    /** Default topic for the key-only send overload; matches the topic the consumer listens on */
    private static final String DEFAULT_TOPIC = "DEFAULT-TOPIC";
    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;
public void send(String key, KafkaEvent data) {
send(DEFAULT_TOPIC, key, data);
}
public void send(String topic, String key, KafkaEvent data) {
if (data == null) {
throw new LongcException(RespCode.MUST_PARAM_NULL, RespCode.MUST_PARAM_NULL_MSG);
}
String dataJson = JsonUtil.toJsonSnake(data);
if (StringUtil.isTrimBlank(dataJson)) {
throw new LongcException(RespCode.MUST_PARAM_NULL, RespCode.MUST_PARAM_NULL_MSG);
}
result(kafkaTemplate.send(topic, key, dataJson), topic, key, dataJson);
}
    /**
     * Send to a specific partition
     *
     * @param partition the target partition
     */
public void send(Integer partition, String topic, String key, KafkaEvent data) {
if (data == null) {
throw new LongcException(RespCode.MUST_PARAM_NULL, RespCode.MUST_PARAM_NULL_MSG);
}
String dataJson = JsonUtil.toJsonSnake(data);
if (StringUtil.isTrimBlank(dataJson)) {
throw new LongcException(RespCode.MUST_PARAM_NULL, RespCode.MUST_PARAM_NULL_MSG);
}
result(kafkaTemplate.send(topic, partition, key, dataJson), topic, key, dataJson);
}
    /**
     * Send to a specific partition with a timestamp
     *
     * @param partition the target partition
     * @param timestamp the record timestamp in milliseconds since the epoch; if null, the producer assigns a timestamp using System.currentTimeMillis()
     */
public void send(Integer partition, String topic, String key, KafkaEvent data, Long timestamp) {
if (data == null) {
throw new LongcException(RespCode.MUST_PARAM_NULL, RespCode.MUST_PARAM_NULL_MSG);
}
String dataJson = JsonUtil.toJsonSnake(data);
if (StringUtil.isTrimBlank(dataJson)) {
throw new LongcException(RespCode.MUST_PARAM_NULL, RespCode.MUST_PARAM_NULL_MSG);
}
result(kafkaTemplate.send(topic, partition, timestamp, key, dataJson), topic, key, dataJson);
}
    private void result(ListenableFuture<SendResult<String, String>> resFu, String topic, String key, String dataJson) {
        // Callback invoked when the send succeeds
        SuccessCallback<SendResult<String, String>> successCallback = sendResult -> {
            // The Kafka message was sent successfully
            log.info("KafkaProducer send success topic->{} key->{} data->{}",
                    sendResult.getProducerRecord().topic(), sendResult.getProducerRecord().key(), sendResult.getProducerRecord().value());
        };
        // Callback invoked when the send fails
        FailureCallback failureCallback = ex -> {
            // The Kafka message could not be sent
            String errorMsg = "KafkaProducer send error topic->" + topic + " key->" + key + " data->" + dataJson;
            log.error(errorMsg, ex);
            throw new LongcException(errorMsg);
        };
        resFu.addCallback(successCallback, failureCallback);
    }
}
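With this in place, a caller only deals with KafkaProducer and KafkaEvent; a minimal usage sketch (the TestKafkaSender service name is a placeholder):
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class TestKafkaSender {
    @Autowired
    private KafkaProducer kafkaProducer;

    public void sendTestData(String key, String value) {
        TestKafkaData data = new TestKafkaData();
        data.setKey(key);
        data.setValue(value);
        // Send to the default topic; overloads taking an explicit topic, partition and timestamp also exist
        kafkaProducer.send(key, new KafkaEvent<>(data));
    }
}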
Consumer:
@Component
@Slf4j
@SuppressWarnings("all")
public class KafkaConsumer {
private static final Map<String, Class> handlerClassMap = new HashMap<>();
static {
try {
Reflections reflections = new Reflections("com.longc");
Class handlerClazz = KafkaEventHandler.class;
Set<Class> childHandlerClazzSet = reflections.getSubTypesOf(handlerClazz);
if (childHandlerClazzSet != null) {
for (Class childHandlerClazz : childHandlerClazzSet) {
ParameterizedType type = (ParameterizedType) childHandlerClazz.getGenericSuperclass();
String typeName = type.getActualTypeArguments()[0].getTypeName();
log.info("KafkaConsumer registryReceiver typeName->{} childHandlerClazz->{}", typeName, childHandlerClazz);
handlerClassMap.put(typeName, childHandlerClazz);
}
}
        } catch (Exception ex) {
            // Scanning for handler subclasses failed; log instead of swallowing the exception silently
            log.error("KafkaConsumer handler scan error", ex);
        }
}
    /**
     * listenerContainerFactory enables batch listening, so with that factory the parameter would be List<ConsumerRecord<String, String>>; otherwise it is a single ConsumerRecord.
     */
@KafkaListener(topics = {"DEFAULT-TOPIC"})
public void registryReceiver(ConsumerRecord<String, String> record) {
log.info("KafkaConsumer registryReceiver record->{} ", record);
// ConsumerRecord(topic = DEFAULT-TOPIC, partition = 0, offset = 5, CreateTime = 1561971499451, serialized key size = 14, serialized value size = 83, headers = RecordHeaders(headers = [], isReadOnly = false), key = kafka-test-key, value = {"data_clazz":"com.longc.demo.entity.TestKafkaData","data":{"key":"k","value":"v"}})
String key = record.key();
String dataJson = record.value();
String dataClassName = JsonUtil.elementToObjSnake(dataJson, "data_clazz", String.class);
Class<?> clazz = ReflectUtil.getClass(dataClassName);
KafkaEventData data = (KafkaEventData) JsonUtil.elementToObjSnake(dataJson, "data", clazz);
if (!handlerClassMap.containsKey(dataClassName)) {
return;
}
Class<?> handlerClass = handlerClassMap.get(dataClassName);
KafkaEventHandler handler = (KafkaEventHandler) ReflectUtil.instance(handlerClass);
if (handler == null) {
return;
}
handler.handle(key, data);
}
}
The consumer uses reflection from the org.reflections library to find the subclasses of the handler class and, based on each subclass's generic type argument, resolves the handler that performs the consumer-side processing for that message type.
A project that uses Kafka just needs to depend on the plugin module above, add the Kafka settings to its Spring Boot configuration, define its message objects by extending the Kafka message data class, and then extend the handler class (with the defined message data class as its generic parameter) and implement the handle method, as sketched in the examples above.