Connecting Spring to Kafka (started with Docker Compose) for localhost development
Posted: 2020-07-19 00:59:07

Question: After hours of trying to find a solution to this problem, I decided to post it here.
I have the following docker-compose file, which starts ZooKeeper, two Kafka brokers, and Kafdrop:
version: '2'
networks:
  kafka-net:
    driver: bridge
services:
  zookeeper-server:
    image: 'bitnami/zookeeper:latest'
    networks:
      - kafka-net
    ports:
      - '2181:2181'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka-broker-1:
    image: 'bitnami/kafka:latest'
    networks:
      - kafka-net
    ports:
      - '9092:9092'
      - '29092:29092'
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper-server:2181
      - KAFKA_CFG_ADVERTISED_LISTENERS=INSIDE://kafka-broker-1:9092,OUTSIDE://localhost:29092
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      - KAFKA_CFG_LISTENERS=INSIDE://:9092,OUTSIDE://:29092
      - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=INSIDE
      - KAFKA_CFG_BROKER_ID=1
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper-server
  kafka-broker-2:
    image: 'bitnami/kafka:latest'
    networks:
      - kafka-net
    ports:
      - '9093:9092'
      - '29093:29092'
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper-server:2181
      - KAFKA_CFG_ADVERTISED_LISTENERS=INSIDE://kafka-broker-2:9093,OUTSIDE://localhost:29093
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      - KAFKA_CFG_LISTENERS=INSIDE://:9093,OUTSIDE://:29093
      - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=INSIDE
      - KAFKA_CFG_BROKER_ID=2
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper-server
  kafdrop-web:
    image: obsidiandynamics/kafdrop
    networks:
      - kafka-net
    ports:
      - '9000:9000'
    environment:
      - KAFKA_BROKERCONNECT=kafka-broker-1:9092,kafka-broker-2:9093
    depends_on:
      - kafka-broker-1
      - kafka-broker-2
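As a quick sanity check of the wiring above (this helper is hypothetical, not part of the original post), each `ports:` entry maps a host port to a container port, and a host client only reaches a broker if the forwarded container port is one where a listener actually binds. The values below are copied from the compose file:

```python
# Sketch: cross-check compose port mappings against a broker's listener ports.

def container_port(host_port, port_mappings):
    """Resolve which container port a host connection is forwarded to."""
    return dict(port_mappings).get(host_port)

# kafka-broker-2 maps host ports to container ports ('host:container')
broker2_ports = [(9093, 9092), (29093, 29092)]
# kafka-broker-2 binds its listeners on these container ports
broker2_listener_ports = {"INSIDE": 9093, "OUTSIDE": 29093}

reached = container_port(29093, broker2_ports)      # host client dials localhost:29093
print(reached)                                      # → 29092
print(reached in broker2_listener_ports.values())   # → False
```

For kafka-broker-2 the mapping `29093:29092` forwards host traffic to container port 29092, while its listeners bind 9093 and 29093 inside the container; that mismatch is worth double-checking against the disconnects in the log below.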
Then I tried to connect my Spring command-line application to Kafka. I took the quick-start example from the Spring Kafka docs.
The properties file:
spring.kafka.bootstrap-servers=localhost:29092,localhost:29093
spring.kafka.consumer.group-id=cg1
spring.kafka.consumer.auto-offset-reset=earliest
logging.level.org.springframework.kafka=debug
logging.level.org.apache.kafka=debug
And the application code:
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@SpringBootApplication
public class App {
    public static void main(String[] args) {
        SpringApplication.run(App.class, args);
    }
}

@Component
public class Runner implements CommandLineRunner {
    public static Logger logger = LoggerFactory.getLogger(Runner.class);

    @Autowired
    private KafkaTemplate<String, String> template;
    private final CountDownLatch latch = new CountDownLatch(3);

    @Override
    public void run(String... args) throws Exception {
        this.template.send("myTopic", "foo1");
        this.template.send("myTopic", "foo2");
        this.template.send("myTopic", "foo3");
        latch.await(60, TimeUnit.SECONDS);
        logger.info("All received");
    }

    @KafkaListener(topics = "myTopic")
    public void listen(ConsumerRecord<?, ?> cr) throws Exception {
        logger.info(cr.toString());
        latch.countDown();
    }
}
The log I get when I run the application:
2020-04-07 01:06:45.074 DEBUG 7560 --- [ main] KafkaListenerAnnotationBeanPostProcessor : 1 @KafkaListener methods processed on bean 'runner': public void com.vdt.learningkafka.Runner.listen(org.apache.kafka.clients.consumer.ConsumerRecord) throws java.lang.Exception=[@org.springframework.kafka.annotation.KafkaListener(autoStartup=, beanRef=__listener, clientIdPrefix=, concurrency=, containerFactory=, containerGroup=, errorHandler=, groupId=, id=, idIsGroup=true, properties=[], splitIterables=true, topicPartitions=[], topicPattern=, topics=[myTopic])]
2020-04-07 01:06:45.284 INFO 7560 --- [ main] o.a.k.clients.admin.AdminClientConfig : AdminClientConfig values:
bootstrap.servers = [localhost:29092, localhost:29093]
client.dns.lookup = default
client.id =
connections.max.idle.ms = 300000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 120000
retries = 5
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
2020-04-07 01:06:45.305 DEBUG 7560 --- [ main] o.a.k.c.a.i.AdminMetadataManager : [AdminClient clientId=adminclient-1] Setting bootstrap cluster metadata Cluster(id = null, nodes = [localhost:29093 (id: -2 rack: null), localhost:29092 (id: -1 rack: null)], partitions = [], controller = null).
2020-04-07 01:06:45.411 DEBUG 7560 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name connections-closed:
2020-04-07 01:06:45.413 DEBUG 7560 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name connections-created:
2020-04-07 01:06:45.414 DEBUG 7560 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name successful-authentication:
2020-04-07 01:06:45.414 DEBUG 7560 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name successful-reauthentication:
2020-04-07 01:06:45.415 DEBUG 7560 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name successful-authentication-no-reauth:
2020-04-07 01:06:45.415 DEBUG 7560 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name failed-authentication:
2020-04-07 01:06:45.415 DEBUG 7560 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name failed-reauthentication:
2020-04-07 01:06:45.415 DEBUG 7560 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name reauthentication-latency:
2020-04-07 01:06:45.416 DEBUG 7560 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name bytes-sent-received:
2020-04-07 01:06:45.416 DEBUG 7560 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name bytes-sent:
2020-04-07 01:06:45.417 DEBUG 7560 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name bytes-received:
2020-04-07 01:06:45.418 DEBUG 7560 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name select-time:
2020-04-07 01:06:45.419 DEBUG 7560 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name io-time:
2020-04-07 01:06:45.428 INFO 7560 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka version: 2.3.1
2020-04-07 01:06:45.428 INFO 7560 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: 18a913733fb71c01
2020-04-07 01:06:45.428 INFO 7560 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1586210805426
2020-04-07 01:06:45.430 DEBUG 7560 --- [ main] o.a.k.clients.admin.KafkaAdminClient : [AdminClient clientId=adminclient-1] Kafka admin client initialized
2020-04-07 01:06:45.433 DEBUG 7560 --- [ main] o.a.k.clients.admin.KafkaAdminClient : [AdminClient clientId=adminclient-1] Queueing Call(callName=describeTopics, deadlineMs=1586210925432) with a timeout 120000 ms from now.
2020-04-07 01:06:45.434 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Initiating connection to node localhost:29093 (id: -2 rack: null) using address localhost/127.0.0.1
2020-04-07 01:06:45.442 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.common.metrics.Metrics : Added sensor with name node--2.bytes-sent
2020-04-07 01:06:45.443 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.common.metrics.Metrics : Added sensor with name node--2.bytes-received
2020-04-07 01:06:45.443 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.common.metrics.Metrics : Added sensor with name node--2.latency
2020-04-07 01:06:45.445 DEBUG 7560 --- [| adminclient-1] o.apache.kafka.common.network.Selector : [AdminClient clientId=adminclient-1] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -2
2020-04-07 01:06:45.569 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Completed connection to node -2. Fetching API versions.
2020-04-07 01:06:45.569 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Initiating API versions fetch from node -2.
2020-04-07 01:06:45.576 DEBUG 7560 --- [| adminclient-1] o.apache.kafka.common.network.Selector : [AdminClient clientId=adminclient-1] Connection with localhost/127.0.0.1 disconnected
java.io.EOFException: null
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:96) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:424) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:385) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:651) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:572) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.common.network.Selector.poll(Selector.java:483) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:539) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1152) ~[kafka-clients-2.3.1.jar:na]
at java.base/java.lang.Thread.run(Thread.java:832) ~[na:na]
2020-04-07 01:06:45.577 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Node -2 disconnected.
2020-04-07 01:06:45.578 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Initiating connection to node localhost:29092 (id: -1 rack: null) using address localhost/127.0.0.1
2020-04-07 01:06:45.578 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.common.metrics.Metrics : Added sensor with name node--1.bytes-sent
2020-04-07 01:06:45.579 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.common.metrics.Metrics : Added sensor with name node--1.bytes-received
2020-04-07 01:06:45.579 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.common.metrics.Metrics : Added sensor with name node--1.latency
2020-04-07 01:06:45.580 DEBUG 7560 --- [| adminclient-1] o.apache.kafka.common.network.Selector : [AdminClient clientId=adminclient-1] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1
2020-04-07 01:06:45.580 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Completed connection to node -1. Fetching API versions.
2020-04-07 01:06:45.580 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Initiating API versions fetch from node -1.
2020-04-07 01:06:45.586 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Recorded API versions for node -1: (Produce(0): 0 to 8 [usable: 7], Fetch(1): 0 to 11 [usable: 11], ListOffsets(2): 0 to 5 [usable: 5], Metadata(3): 0 to 9 [usable: 8], LeaderAndIsr(4): 0 to 4 [usable: 2], StopReplica(5): 0 to 2 [usable: 1], UpdateMetadata(6): 0 to 6 [usable: 5], ControlledShutdown(7): 0 to 3 [usable: 2], OffsetCommit(8): 0 to 8 [usable: 7], OffsetFetch(9): 0 to 6 [usable: 5], FindCoordinator(10): 0 to 3 [usable: 2], JoinGroup(11): 0 to 6 [usable: 5], Heartbeat(12): 0 to 4 [usable: 3], LeaveGroup(13): 0 to 4 [usable: 2], SyncGroup(14): 0 to 4 [usable: 3], DescribeGroups(15): 0 to 5 [usable: 3], ListGroups(16): 0 to 3 [usable: 2], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 2], CreateTopics(19): 0 to 5 [usable: 3], DeleteTopics(20): 0 to 4 [usable: 3], DeleteRecords(21): 0 to 1 [usable: 1], InitProducerId(22): 0 to 2 [usable: 1], OffsetForLeaderEpoch(23): 0 to 3 [usable: 3], AddPartitionsToTxn(24): 0 to 1 [usable: 1], AddOffsetsToTxn(25): 0 to 1 [usable: 1], EndTxn(26): 0 to 1 [usable: 1], WriteTxnMarkers(27): 0 [usable: 0], TxnOffsetCommit(28): 0 to 2 [usable: 2], DescribeAcls(29): 0 to 1 [usable: 1], CreateAcls(30): 0 to 1 [usable: 1], DeleteAcls(31): 0 to 1 [usable: 1], DescribeConfigs(32): 0 to 2 [usable: 2], AlterConfigs(33): 0 to 1 [usable: 1], AlterReplicaLogDirs(34): 0 to 1 [usable: 1], DescribeLogDirs(35): 0 to 1 [usable: 1], SaslAuthenticate(36): 0 to 1 [usable: 1], CreatePartitions(37): 0 to 1 [usable: 1], CreateDelegationToken(38): 0 to 2 [usable: 1], RenewDelegationToken(39): 0 to 1 [usable: 1], ExpireDelegationToken(40): 0 to 1 [usable: 1], DescribeDelegationToken(41): 0 to 1 [usable: 1], DeleteGroups(42): 0 to 2 [usable: 1], ElectPreferredLeaders(43): 0 to 2 [usable: 0], IncrementalAlterConfigs(44): 0 to 1 [usable: 0], UNKNOWN(45): 0, UNKNOWN(46): 0, UNKNOWN(47): 0)
2020-04-07 01:06:45.594 DEBUG 7560 --- [| adminclient-1] o.a.k.c.a.i.AdminMetadataManager : [AdminClient clientId=adminclient-1] Updating cluster metadata to Cluster(id = gSypiCeoSlyuSR4ks5qwwA, nodes = [localhost:29092 (id: 1 rack: null), localhost:29093 (id: 2 rack: null)], partitions = [], controller = localhost:29093 (id: 2 rack: null))
2020-04-07 01:06:45.595 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Initiating connection to node localhost:29093 (id: 2 rack: null) using address localhost/127.0.0.1
2020-04-07 01:06:45.596 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.common.metrics.Metrics : Added sensor with name node-2.bytes-sent
2020-04-07 01:06:45.597 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.common.metrics.Metrics : Added sensor with name node-2.bytes-received
2020-04-07 01:06:45.597 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.common.metrics.Metrics : Added sensor with name node-2.latency
2020-04-07 01:06:45.598 DEBUG 7560 --- [| adminclient-1] o.apache.kafka.common.network.Selector : [AdminClient clientId=adminclient-1] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 2
2020-04-07 01:06:45.598 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Completed connection to node 2. Fetching API versions.
2020-04-07 01:06:45.598 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Initiating API versions fetch from node 2.
2020-04-07 01:06:45.598 DEBUG 7560 --- [| adminclient-1] o.apache.kafka.common.network.Selector : [AdminClient clientId=adminclient-1] Connection with localhost/127.0.0.1 disconnected
java.io.EOFException: null
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:96) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:424) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:385) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:651) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:572) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.common.network.Selector.poll(Selector.java:483) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:539) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1152) ~[kafka-clients-2.3.1.jar:na]
at java.base/java.lang.Thread.run(Thread.java:832) ~[na:na]
2020-04-07 01:06:45.598 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Node 2 disconnected.
2020-04-07 01:06:45.598 DEBUG 7560 --- [| adminclient-1] o.a.k.c.a.i.AdminMetadataManager : [AdminClient clientId=adminclient-1] Requesting metadata update.
2020-04-07 01:06:45.598 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Initiating connection to node localhost:29092 (id: 1 rack: null) using address localhost/127.0.0.1
2020-04-07 01:06:45.599 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.common.metrics.Metrics : Added sensor with name node-1.bytes-sent
2020-04-07 01:06:45.600 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.common.metrics.Metrics : Added sensor with name node-1.bytes-received
2020-04-07 01:06:45.600 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.common.metrics.Metrics : Added sensor with name node-1.latency
2020-04-07 01:06:45.600 DEBUG 7560 --- [| adminclient-1] o.apache.kafka.common.network.Selector : [AdminClient clientId=adminclient-1] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 1
2020-04-07 01:06:45.600 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Completed connection to node 1. Fetching API versions.
2020-04-07 01:06:45.600 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Initiating API versions fetch from node 1.
2020-04-07 01:06:45.602 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Recorded API versions for node 1: (Produce(0): 0 to 8 [usable: 7], Fetch(1): 0 to 11 [usable: 11], ListOffsets(2): 0 to 5 [usable: 5], Metadata(3): 0 to 9 [usable: 8], LeaderAndIsr(4): 0 to 4 [usable: 2], StopReplica(5): 0 to 2 [usable: 1], UpdateMetadata(6): 0 to 6 [usable: 5], ControlledShutdown(7): 0 to 3 [usable: 2], OffsetCommit(8): 0 to 8 [usable: 7], OffsetFetch(9): 0 to 6 [usable: 5], FindCoordinator(10): 0 to 3 [usable: 2], JoinGroup(11): 0 to 6 [usable: 5], Heartbeat(12): 0 to 4 [usable: 3], LeaveGroup(13): 0 to 4 [usable: 2], SyncGroup(14): 0 to 4 [usable: 3], DescribeGroups(15): 0 to 5 [usable: 3], ListGroups(16): 0 to 3 [usable: 2], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 2], CreateTopics(19): 0 to 5 [usable: 3], DeleteTopics(20): 0 to 4 [usable: 3], DeleteRecords(21): 0 to 1 [usable: 1], InitProducerId(22): 0 to 2 [usable: 1], OffsetForLeaderEpoch(23): 0 to 3 [usable: 3], AddPartitionsToTxn(24): 0 to 1 [usable: 1], AddOffsetsToTxn(25): 0 to 1 [usable: 1], EndTxn(26): 0 to 1 [usable: 1], WriteTxnMarkers(27): 0 [usable: 0], TxnOffsetCommit(28): 0 to 2 [usable: 2], DescribeAcls(29): 0 to 1 [usable: 1], CreateAcls(30): 0 to 1 [usable: 1], DeleteAcls(31): 0 to 1 [usable: 1], DescribeConfigs(32): 0 to 2 [usable: 2], AlterConfigs(33): 0 to 1 [usable: 1], AlterReplicaLogDirs(34): 0 to 1 [usable: 1], DescribeLogDirs(35): 0 to 1 [usable: 1], SaslAuthenticate(36): 0 to 1 [usable: 1], CreatePartitions(37): 0 to 1 [usable: 1], CreateDelegationToken(38): 0 to 2 [usable: 1], RenewDelegationToken(39): 0 to 1 [usable: 1], ExpireDelegationToken(40): 0 to 1 [usable: 1], DescribeDelegationToken(41): 0 to 1 [usable: 1], DeleteGroups(42): 0 to 2 [usable: 1], ElectPreferredLeaders(43): 0 to 2 [usable: 0], IncrementalAlterConfigs(44): 0 to 1 [usable: 0], UNKNOWN(45): 0, UNKNOWN(46): 0, UNKNOWN(47): 0)
2020-04-07 01:06:45.605 DEBUG 7560 --- [| adminclient-1] o.a.k.c.a.i.AdminMetadataManager : [AdminClient clientId=adminclient-1] Updating cluster metadata to Cluster(id = gSypiCeoSlyuSR4ks5qwwA, nodes = [localhost:29092 (id: 1 rack: null), localhost:29093 (id: 2 rack: null)], partitions = [], controller = localhost:29093 (id: 2 rack: null))
2020-04-07 01:06:45.639 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Initiating connection to node localhost:29093 (id: 2 rack: null) using address localhost/127.0.0.1
2020-04-07 01:06:45.640 DEBUG 7560 --- [| adminclient-1] o.apache.kafka.common.network.Selector : [AdminClient clientId=adminclient-1] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 2
2020-04-07 01:06:45.640 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Completed connection to node 2. Fetching API versions.
2020-04-07 01:06:45.640 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Initiating API versions fetch from node 2.
2020-04-07 01:06:45.641 DEBUG 7560 --- [| adminclient-1] o.apache.kafka.common.network.Selector : [AdminClient clientId=adminclient-1] Connection with localhost/127.0.0.1 disconnected
java.io.EOFException: null
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:96) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:424) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:385) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:651) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:572) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.common.network.Selector.poll(Selector.java:483) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:539) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1152) ~[kafka-clients-2.3.1.jar:na]
at java.base/java.lang.Thread.run(Thread.java:832) ~[na:na]
The Docker configuration seems fine, since Kafdrop connects to the brokers successfully, and from what I can tell in the log the Spring application also manages to connect to the brokers, but the connection is closed immediately afterwards.
Answer 1: To connect a Docker-based application to a Kafka instance that also runs inside a Docker container, you must use the container name that Kafka advertises (an alias for its address):
spring.kafka.bootstrap-servers=kafka-broker-1:29092,kafka-broker-2:29093
If you try to connect an application running inside a Docker container to localhost:29092, it will try to connect to that same container's localhost, not to the external network.
> "The Docker configuration seems fine, since Kafdrop connects to the brokers successfully"

Yes — look at the kafdrop-web service in docker-compose.yml to see how it connects to the Kafka brokers. It uses the container names:
KAFKA_BROKERCONNECT=kafka-broker-1:9092,kafka-broker-2:9093
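To make the listener naming concrete, here is a small hypothetical sketch (plain Python, not part of the answer) of how a client interprets an advertised-listeners string: each named listener advertises the host:port that clients on that network should dial, which is why a host client and a containerized client need different addresses. The string below is copied from the compose file in the question:

```python
# Sketch: parse a Kafka advertised-listeners string into {name: (host, port)}.

def parse_listeners(spec):
    """Split 'NAME://host:port,...' into a dict keyed by listener name."""
    out = {}
    for entry in spec.split(","):
        name, addr = entry.split("://")
        host, port = addr.rsplit(":", 1)
        out[name] = (host, int(port))
    return out

advertised = parse_listeners(
    "INSIDE://kafka-broker-1:9092,OUTSIDE://localhost:29092")
print(advertised["OUTSIDE"])  # → ('localhost', 29092): for clients on the host
print(advertised["INSIDE"])   # → ('kafka-broker-1', 9092): for clients on kafka-net
```

A client on the Docker network can resolve `kafka-broker-1`, while a client on the host can only reach `localhost` plus a published port — the broker hands back whichever address matches the listener the client connected on.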
Comments:
I'm not running the application in a Docker container; it runs on my development machine. I only use Docker to set up the environment. Kafdrop works because it runs inside Docker, but my application is on localhost.