Kafka Cluster Deployment (Using Docker Containers)
This article describes how to deploy a Kafka cluster using Docker containers. In the ZooKeeper configuration file, the number x in each server.x entry corresponds to the value stored in that node's data/myid file; for the three machines the values of x are 1, 2 and 3 respectively. See the official documentation for detailed parameter descriptions.
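The configuration file itself is not reproduced here; as a rough sketch, a zoo.cfg for a three-node ensemble (the hostnames node1/node2/node3 and the /data path are placeholders) typically looks like this:

    # zoo.cfg -- illustrative values; adjust paths and hostnames to your environment
    tickTime=2000
    initLimit=10
    syncLimit=5
    dataDir=/data
    clientPort=2181
    server.1=node1:2888:3888
    server.2=node2:2888:3888
    server.3=node3:2888:3888

On the machine listed as server.1, the file myid under dataDir (here /data/myid) must contain just the number 1, and likewise 2 and 3 on the other two nodes.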
1. --net=host: the container shares the host's network namespace, so the container and the host see exactly the same network view. (Host mode carries some risk; for production environments with strict security requirements it is best avoided in favour of one of the other network modes.)
2. -v: maps a host directory into the container.
With this, the ZooKeeper service is running as a container. You can enter the container with "docker exec -it zookeeper bash" to check that the service started correctly, or simply verify that port 2181 is being listened on.
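A minimal sketch of such a startup command, assuming the official zookeeper image (which reads its configuration from /conf and its data from /data) and the placeholder host paths /opt/zookeeper/conf and /opt/zookeeper/data:

    # Start ZooKeeper in host-network mode, mounting config and data from the host
    docker run -d --name zookeeper \
      --net=host \
      -v /opt/zookeeper/conf/zoo.cfg:/conf/zoo.cfg \
      -v /opt/zookeeper/data:/data \
      zookeeper:3.6

    # Check the startup log and confirm that port 2181 is being listened on
    docker logs zookeeper
    netstat -lnpt | grep 2181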
For detailed parameter descriptions see the official documentation. The broker parameters can be set not only through the server.properties file but also through container environment variables; here the two approaches are combined.
1. KAFKA_ADVERTISED_HOST_NAME and KAFKA_BROKER_ID must be set individually for each machine.
2. It is best to add the IP-to-hostname mapping to /etc/hosts; otherwise you will see an error such as "Error: Exception thrown by the agent : java.net.MalformedURLException: Local host name unknown: java.net.UnknownHostException: node0: node0: System error".
3. Environment variables passed with -e have exactly the same effect as the corresponding options in server.properties.
4. To set a configuration-file option through an environment variable, prefix it with KAFKA_, upper-case it and replace dots with underscores: broker.id becomes KAFKA_BROKER_ID and, likewise, log.dirs becomes KAFKA_LOG_DIRS (see the example command after this list).
5. KAFKA_HEAP_OPTS="-Xmx6G -Xms6G" sets the JVM heap size. 6 GB is the figure given on the Kafka website, but it has to fit the machine's memory: on a machine with more than 6 GB of RAM you can use 6 GB, whereas setting 6 GB on a machine with less memory will cause the broker to fail to start.
6. Once the broker has started successfully, you can inspect its logs with "docker logs kafka".
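A minimal sketch of such a broker startup, assuming the wurstmeister/kafka image (whose conventions match the variable names above), broker id 1, the placeholder hostname node1 and the placeholder host path /opt/kafka/logs:

    docker run -d --name kafka \
      --net=host \
      -v /opt/kafka/logs:/kafka \
      -e KAFKA_BROKER_ID=1 \
      -e KAFKA_ADVERTISED_HOST_NAME=node1 \
      -e KAFKA_ADVERTISED_PORT=9092 \
      -e KAFKA_ZOOKEEPER_CONNECT=node1:2181,node2:2181,node3:2181 \
      -e KAFKA_HEAP_OPTS="-Xmx6G -Xms6G" \
      wurstmeister/kafka

    # Check that the broker reports a successful start
    docker logs kafka

On the other two machines, change KAFKA_BROKER_ID and KAFKA_ADVERTISED_HOST_NAME accordingly.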
1. ZK_HOSTS: the ZooKeeper connection address (use the machine's real IP; both localhost:2181 and 127.0.0.1:2181 will fail with a "java.net.ConnectException: Connection refused" exception).
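ZK_HOSTS is the environment variable used by the kafka-manager Docker images, so this item presumably refers to a Kafka management UI. A minimal sketch, assuming the sheepkiller/kafka-manager image and the placeholder ZooKeeper address 192.168.1.10:2181:

    docker run -d --name kafka-manager \
      --net=host \
      -e ZK_HOSTS=192.168.1.10:2181 \
      sheepkiller/kafka-manager

    # The kafka-manager web UI listens on port 9000 by default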
Client connection considerations when Kafka is deployed in containers
I have recently been deploying a Kafka cluster with Docker, and there are quite a few connectivity requirements to think through. An article by a foreign author sums them up very well, though there are still a few points that deserve special attention. The original text is reproduced below.
When a client wants to send or receive a message from Apache Kafka®, there are two types of connection that must succeed:
The initial connection to a broker (the bootstrap). This returns metadata to the client, including a list of all the brokers in the cluster and their connection endpoints.
The client then connects to one (or more) of the brokers returned in the first step as required. If the broker has not been configured correctly, the connections will fail.
What sometimes happens is that people focus on only step 1 above, and get caught out by step 2. The broker details returned in step 1 are defined by the advertised.listeners setting of the broker(s) and must be resolvable and accessible from the client machine.
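As an illustration of that second step (this example is not part of the original article): the endpoints the broker hands back are whatever advertised.listeners is set to, which for a Docker-based broker is typically supplied as an environment variable, and the returned metadata can be inspected with a tool such as kafkacat. The hostname kafka1.example.com and the image used here are assumptions:

    # What the broker advertises is controlled by advertised.listeners
    # (here set via the image's KAFKA_ADVERTISED_LISTENERS environment variable)
    docker run -d --name kafka \
      -p 9092:9092 \
      -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 \
      -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka1.example.com:9092 \
      -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
      wurstmeister/kafka

    # Ask the bootstrap broker for its metadata; the host:port pairs printed
    # here are exactly what clients will try to connect to in step 2
    kafkacat -b kafka1.example.com:9092 -L

If the advertised hostname is not resolvable or reachable from the client machine, step 1 succeeds but step 2 fails, which is the failure mode described above.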
To read more about the protocol, see the docs, as well as this previous article that I wrote. If the nuts and bolts of the protocol are the last thing you’re interested in and you just want to write applications with Kafka you should check out Confluent Cloud. It’s a fully managed Apache Kafka service in the cloud, with not an advertised.listeners configuration for you to worry about in sight!
Below, I use a client connecting to Kafka in various permutations of deployment topology. It’s written using Python with librdkafka (confluent_kafka), but the principle applies to clients across all languages. You can find the code on GitHub. It’s very simple and just serves to illustrate the connection process. It’s simplified for clarity, at the expense of good coding and functionality.