(Original) Kafka and ZooKeeper Cluster Setup in Practice

Posted by ZH220 (personal use)


1. A ZooKeeper ensemble should have an odd number of nodes (it needs a majority quorum to stay available, and an even count adds no extra fault tolerance).

   Add the following to the config file conf/zoo.cfg:

   server.1=172.17.2.242:2888:3888   # on the .242 machine, /opt/zookeeper/zkdata/myid must contain 1
   server.2=172.17.2.243:2888:3888   # on the .243 machine, /opt/zookeeper/zkdata/myid must contain 2
   server.3=172.17.2.244:2888:3888   # on the .244 machine, /opt/zookeeper/zkdata/myid must contain 3


   Run ./zkServer.sh start on each node; ./zkServer.sh status shows each node's role — one will be the leader and the other two followers.
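The "odd number of nodes" rule above comes from majority-quorum math: an ensemble of n servers survives as long as a majority is up, so it tolerates floor((n-1)/2) failures — meaning 4 servers tolerate no more failures than 3. A small illustrative sketch (not part of the original post; the class name is hypothetical):

```java
// Why odd ensemble sizes are preferred: an ensemble of n ZooKeeper servers
// needs a strict majority alive, so it tolerates floor((n-1)/2) failures.
public class ZkQuorum {

    // Number of server failures an ensemble of n nodes can survive.
    public static int tolerated(int n) {
        return (n - 1) / 2;
    }

    public static void main(String[] args) {
        for (int n = 3; n <= 6; n++) {
            // 4 nodes tolerate the same 1 failure as 3; 6 the same 2 as 5.
            System.out.println(n + " servers tolerate " + tolerated(n) + " failure(s)");
        }
    }
}
```

The extra even node only adds a machine that can fail, without raising the failure budget — hence 3 or 5 nodes in practice.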


2. A Kafka cluster can have one broker or N brokers.

   The config file config/server.properties must be changed in the following two places:

   broker.id=1   # must be unique per broker (0, 1, 2, ...); with duplicate ids only one broker effectively joins

   zookeeper.connect=172.17.2.242:2181,172.17.2.243:2181,172.17.2.244:2181
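The zookeeper.connect value is a comma-separated host:port list, optionally followed by a single chroot path that applies to the whole list (e.g. ".../kafka"). A hypothetical parsing sketch, not from the original post, just to make the format explicit:

```java
// Sketch: the shape of a zookeeper.connect string —
// "host1:port1,host2:port2[,...][/chroot]". The optional chroot suffix
// applies to the entire ensemble, not only the last host.
public class ZkConnect {

    // Extract the chroot path, or "" when none is given.
    public static String chroot(String connect) {
        int slash = connect.indexOf('/');
        return slash < 0 ? "" : connect.substring(slash);
    }

    // Split the host part into individual host:port entries.
    public static String[] hosts(String connect) {
        int slash = connect.indexOf('/');
        String hostPart = slash < 0 ? connect : connect.substring(0, slash);
        return hostPart.split(",");
    }

    public static void main(String[] args) {
        String connect = "172.17.2.242:2181,172.17.2.243:2181,172.17.2.244:2181";
        for (String hostPort : hosts(connect)) {
            System.out.println(hostPort);
        }
    }
}
```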



   Start the cluster (start Kafka on every node — each started instance is one broker, and the replication factor used later depends on this count):

   ./kafka-server-start.sh -daemon ../config/server.properties

   

   Create a topic:

   ./kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic luis
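Note that --replication-factor 3 only works here because all three brokers were started: a topic's replication factor cannot exceed the number of available brokers, otherwise the create request is rejected. A minimal check of that constraint (an illustrative sketch, not part of the original post):

```java
// A topic's replication factor must be between 1 and the number of
// available brokers; Kafka rejects the create request otherwise.
public class ReplicationCheck {

    public static boolean isValid(int replicationFactor, int brokerCount) {
        return replicationFactor >= 1 && replicationFactor <= brokerCount;
    }

    public static void main(String[] args) {
        System.out.println(isValid(3, 3)); // 3 replicas on 3 brokers: accepted
        System.out.println(isValid(3, 2)); // 3 replicas on 2 brokers: rejected
    }
}
```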


   Producer (localhost can be replaced by any broker's IP):

   ./kafka-console-producer.sh --broker-list localhost:9092 --topic luis
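When messages carry a key (as the Java producer below sends them), the key decides which partition a record lands in, so records with the same key keep their order. A simplified sketch of that mapping — note the real Kafka default partitioner hashes with murmur2, not String.hashCode(); this is only to show the idea:

```java
// Simplified keyed partition assignment: hash the key, take it modulo the
// partition count. Same key -> same partition, which preserves per-key order.
// (Kafka's actual DefaultPartitioner uses murmur2, not String.hashCode().)
public class SimplePartitioner {

    public static int partitionFor(String key, int numPartitions) {
        // Mask the sign bit so the result is a valid non-negative index.
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        int partitions = 4; // hypothetical partition count
        System.out.println("key 'luis' -> partition " + partitionFor("luis", partitions));
    }
}
```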


   Consumer (localhost can be replaced by any broker's IP):

   ./kafka-console-consumer.sh --zookeeper localhost:2181 --topic luis --from-beginning


   A better way to run the consumer (pass a list of Kafka brokers; if the first fails, the client tries the next ones — this showed problems on Windows in our testing but works fine on Linux):

   ./kafka-console-consumer.sh --bootstrap-server 172.17.2.243:9092,172.17.2.244:9092,172.17.2.242:9092 --topic xxx


   List topics (localhost can be replaced by any broker's IP):

   ./kafka-topics.sh --list --zookeeper localhost:2181


3. Java API

Producer:

package com.zyhd;

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KafkaProduce {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "172.17.2.242:9092");
        props.put("acks", "all");        // wait for all in-sync replicas to acknowledge
        props.put("retries", 0);
        props.put("batch.size", 16384);
        props.put("linger.ms", 1);
        props.put("buffer.memory", 33554432);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        Producer<String, String> producer = new KafkaProducer<String, String>(props);
        // Send ten keyed messages to the "xxx" topic.
        for (int i = 0; i < 10; i++) {
            producer.send(new ProducerRecord<String, String>("xxx", Integer.toString(i), Integer.toString(i)));
        }

        producer.close();
        System.out.println("end task");
    }
}


Consumer:

package com.zyhd;

import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class KafkaCustome {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "172.17.2.242:9092");
        props.put("group.id", "test");
        props.put("enable.auto.commit", "true");        // commit offsets automatically
        props.put("auto.commit.interval.ms", "1000");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
        consumer.subscribe(Arrays.asList("xxx"));
        while (true) {
            // Poll with a 10 ms timeout and print every record received.
            ConsumerRecords<String, String> records = consumer.poll(10);
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset = %d, key = %s, value = %s%n",
                        record.offset(), record.key(), record.value());
            }
        }
    }
}


4. Running both from the same jar:

java -jar kafkaDemo.jar                          # the consumer is set as the jar's main class

java -cp kafkaDemo.jar com.zyhd.KafkaProduce     # explicitly invoke the producer


5. Both Windows and Linux work. An odd number of ZooKeeper nodes is sufficient — not every Kafka broker needs a co-located ZooKeeper.

