Integrating Kafka with Spring Cloud
Posted by dikeboy
Kafka setup
1. Installation
wget http://mirror.bit.edu.cn/apache/kafka/2.1.0/kafka_2.11-2.1.0.tgz    # download this release
tar -xzvf kafka_2.11-2.1.0.tgz    # extract the archive
2. Configuration
Edit config/server.properties in the Kafka directory:
listeners=PLAINTEXT://:9092    # listen on the local host
advertised.listeners=PLAINTEXT://<your-public-ip>:9092    # advertise the server's external IP to clients
bin/zookeeper-server-start.sh config/zookeeper.properties    # start ZooKeeper
[root@iZ23abbedn6Z config]# ../bin/kafka-server-start.sh ../config/server.properties    # start Kafka
Start a remote console consumer to make testing easier:
[root@iZ23abbedn6Z bin]# sh kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic topic-test
Create a topic on the server:
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic topic-test    # topic-test is the topic name
Or create it from Java code:
public static void main(String[] args) {
    // Create the topic programmatically
    Properties props = new Properties();
    props.put("bootstrap.servers", "<your-kafka-server-ip>:9092");
    AdminClient adminClient = AdminClient.create(props);
    ArrayList<NewTopic> topics = new ArrayList<NewTopic>();
    NewTopic newTopic = new NewTopic("topic-test", 1, (short) 1);   // name, partitions, replication factor
    topics.add(newTopic);
    CreateTopicsResult result = adminClient.createTopics(topics);
    try {
        result.all().get();   // block until the topic has been created
    } catch (InterruptedException e) {
        e.printStackTrace();
    } catch (ExecutionException e) {
        e.printStackTrace();
    } finally {
        adminClient.close();  // release the client's resources
    }
}
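To verify that the topic was created, the same AdminClient can list the broker's topics. A minimal sketch, assuming the props object built above and an enclosing method that declares throws Exception:

try (AdminClient adminClient = AdminClient.create(props)) {
    // names() resolves to the set of topic names known to the broker
    Set<String> names = adminClient.listTopics().names().get();
    System.out.println("topic-test created: " + names.contains("topic-test"));
}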
Add the spring-kafka dependency (its version is managed by the Spring Boot parent POM):
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka</artifactId>
</dependency>
The complete pom.xml:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.1.1.RELEASE</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>
    <groupId>tsearch_web</groupId>
    <artifactId>kafka-test</artifactId>
    <version>0.0.1</version>
    <packaging>jar</packaging>
    <name>kafka-test</name>
    <description>Note Server catch</description>
    <properties>
        <java.version>1.8</java.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka-test</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>
Kafka configuration class
@Configuration
@EnableKafka
public class KafkaConfig {

    @Bean
    ConcurrentKafkaListenerContainerFactory<Integer, String> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<Integer, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        return factory;
    }

    @Bean
    public ConsumerFactory<Integer, String> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(consumerConfigs());
    }

    // Consumer configuration
    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "<your-kafka-server-ip>:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "test");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);
        props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "100");
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "15000");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, IntegerDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        return props;
    }

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        return new DefaultKafkaProducerFactory<>(producerConfigs());
    }

    // Producer configuration
    @Bean
    public Map<String, Object> producerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "<your-kafka-server-ip>:9092");
        props.put(ProducerConfig.RETRIES_CONFIG, 0);
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
        props.put(ProducerConfig.LINGER_MS_CONFIG, 1);
        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
        // Keys are never set when sending below, so the Integer key serializer is effectively unused
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, IntegerSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return props;
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        System.out.println("init");
        return new KafkaTemplate<String, String>(producerFactory());
    }
}
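The broker address above is hard-coded in both the consumer and producer maps. A minimal variant that injects it from configuration instead; the kafka.bootstrap-servers property name is an assumption, not part of the original article:

@Configuration
@EnableKafka
public class KafkaConfig {

    // assumed entry in application.properties, e.g. kafka.bootstrap-servers=1.2.3.4:9092;
    // falls back to localhost:9092 when the property is missing
    @Value("${kafka.bootstrap-servers:localhost:9092}")
    private String bootstrapServers;

    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        // ...remaining consumer settings as above...
        return props;
    }

    // producerConfigs() changes in the same way
}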
Spring Boot application
@SpringBootApplication
public class KafkaTestApplication implements CommandLineRunner {

    public static Logger logger = LoggerFactory.getLogger(KafkaTestApplication.class);

    public static void main(String[] args) {
        SpringApplication.run(KafkaTestApplication.class, args).close();
    }

    @Autowired
    private KafkaTemplate<String, String> template;

    private final CountDownLatch latch = new CountDownLatch(3);

    @Override
    public void run(String... args) throws Exception {
        System.out.println("run...");
        this.template.send("topic-test", "foo1");
        this.template.send("topic-test", "foo2");
        this.template.send("topic-test", "foo3");
        latch.await(60, TimeUnit.SECONDS);   // wait until all three messages have been consumed
        logger.info("All received");
    }

    @KafkaListener(topics = "topic-test")
    public void listen(ConsumerRecord<String, String> cr) throws Exception {
        logger.info("Consumer received: " + cr.toString());
        latch.countDown();
    }
}
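KafkaTemplate.send() is asynchronous; the sample above fires and forgets. A minimal sketch of checking the outcome of a send with the ListenableFuture returned by spring-kafka 2.x, reusing the template and logger from the class above:

ListenableFuture<SendResult<String, String>> future = template.send("topic-test", "foo1");
future.addCallback(
        result -> logger.info("sent to partition " + result.getRecordMetadata().partition()
                + " at offset " + result.getRecordMetadata().offset()),
        ex -> logger.error("send failed", ex));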
Test results
Remote consumer output