Kafka startup errors (repost)


Original article: https://blog.csdn.net/russle/java/article/details/84962568

The error:

C:\H\software\kafka_2.12-1.1.1\bin\windows>kafka-server-start.bat ..\..\config\server.properties
[2018-12-11 21:13:46,490] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2018-12-11 21:13:46,998] ERROR Exiting Kafka due to fatal exception (kafka.Kafka$)
java.lang.VerifyError: Uninitialized object exists on backward branch 209
Exception Details:
  Location:
    scala/collection/immutable/HashMap$HashTrieMap.split()Lscala/collection/immutable/Seq; @249: goto
  Reason:
    Error exists in the bytecode
  Bytecode:
    0000000: 2ab6 0060 04a0 001e b200 b8b2 00bd 04bd
    0000010: 0002 5903 2a53 c000 bfb6 00c3 b600 c7c0
    ......
    0000170: b200 bd05 bd00 0259 0319 0b53 5904 190c
    0000180: 53c0 00bf b600 c3b6 0102 b02a b600 3803
    0000190: 32b6 0104 b0
  Stackmap Table:
    same_frame(@35)
    full_frame(@141,Object[#2],Integer,Integer,Integer,Integer,Integer,Object[#114],)
    append_frame(@151,Object[#134],Object[#134])
    full_frame(@209,Object[#2],Integer,Integer,Integer,Integer,Integer,Object[#
    ......

Fix: upgrade the JDK.

Upgrading from 1.8.0_11 to 1.8.0_251 resolved the VerifyError.
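This kind of VerifyError comes from the JVM itself, not from Kafka, so a startup script can check the JDK build before launching. A minimal sketch (not from the original post; the sample version string stands in for real `java -version` output, whose "1.8.0_NN" format is the usual JDK 8 convention):

```shell
# Extract the update number from a JDK 8 version string so a wrapper
# script can warn before Kafka hits the VerifyError.
ver_line='java version "1.8.0_11"'   # in practice: ver_line=$(java -version 2>&1 | head -n 1)
update=$(printf '%s\n' "$ver_line" | sed -n 's/.*1\.8\.0_\([0-9]*\).*/\1/p')
if [ "${update:-0}" -lt 251 ]; then
    echo "JDK 1.8.0_${update} predates 1.8.0_251; consider upgrading"
fi
```

The cutoff 251 is simply the known-good build from this post; any sufficiently recent 8u build should do.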

Kafka fails to start on a server: error='Cannot allocate memory' (errno=12)

Troubleshooting principle: break a big problem into small ones. Test from the source onward (did Kafka actually start?) to locate the step where things went wrong.

What worked: the log_error.log of the project deployed on the server showed a Kafka problem -> immediately test whether Kafka started successfully -> if not, switch from background startup to foreground startup so the error output is visible -> fix the problem based on that output.

What did not work: searching the web for the error message from log_error.log directly. Too many steps in the pipeline can fail, so the search space is huge. Worse, the solutions posted online all interpreted the error from a cluster angle, which I am not familiar with; I just tried whatever others wrote, starting and stopping Kafka over and over, which wasted a lot of time.

Environment

Alibaba Cloud server, CentOS 7.9.2009. Kafka installed from a tarball.

# Install Kafka
[root@wu1 ~]# tar -zvxf kafka_2.13-2.7.0.tgz -C /opt

The detours below were what it took to finally see the error message.

After deploying the project to the server, a "like" action (which uses Kafka in the code) produced the following error in /tmp/community/log_error.log.

2021-05-17 12:16:52,666 ERROR [http-nio-8080-exec-2] o.s.k.s.LoggingProducerListener [LogAccessor.java:261] Exception thrown when sending a message with key='null' and payload='{"data":{"postId":277},"entityId":277,"entityType":1,"entityUserId":149,"topic":"like","userId":111}' to topic like:
org.apache.kafka.common.errors.TimeoutException: Topic like not present in metadata after 60000 ms.
2021-05-17 12:16:52,669 ERROR [http-nio-8080-exec-2] c.n.c.c.a.ExceptionAdvice [ExceptionAdvice.java:22] 服务器发生异常: Send failed; nested exception is org.apache.kafka.common.errors.TimeoutException: Topic like not present in metadata after 60000 ms.

Searching the error message turned up this suggested fix: add the following to /opt/kafka_2.13-2.7.0/config/server.properties

broker.id=0
port=9092
host.name={server internal IP}
advertised.host.name={server public IP}

But the same error persisted after a restart.

# Shut down Kafka by killing its PID, then start ZooKeeper first (killing Kafka this way had taken ZooKeeper down too), then start Kafka
[root@wu1 ~]# /opt/kafka_2.13-2.7.0/bin/zookeeper-server-start.sh -daemon /opt/kafka_2.13-2.7.0/config/zookeeper.properties 	# start ZooKeeper in the background
[root@wu1 ~]# nohup /opt/kafka_2.13-2.7.0/bin/kafka-server-start.sh /opt/kafka_2.13-2.7.0/config/server.properties 1>/dev/null 2>&1 &	# start Kafka in the background
[1] 4408

Only later did it occur to me to test whether Kafka had actually started. It had not.

[root@wu1 ~]# /opt/kafka_2.13-2.7.0/bin/kafka-topics.sh --list --bootstrap-server localhost:9092	# no error here means the broker started successfully
[2021-05-17 19:30:34,681] WARN [AdminClient clientId=adminclient-1] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2021-05-17 19:30:34,787] WARN [AdminClient clientId=adminclient-1] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
# This WARN repeated until I pressed Ctrl+C.
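That check hangs and retries the connection WARN indefinitely when the broker is down. A sketch of bounding it so it fails fast instead (an assumption on my part, not from the original post; `timeout` is from GNU coreutils, and the path matches this walkthrough):

```shell
# Run the liveness check with a hard 10-second limit; a dead broker
# then produces a clear "not reachable" message instead of an endless
# loop of connection WARNs.
if timeout 10 /opt/kafka_2.13-2.7.0/bin/kafka-topics.sh --list \
       --bootstrap-server localhost:9092 >/dev/null 2>&1; then
    echo "broker is up"
else
    echo "broker is not reachable on localhost:9092"
fi
```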

Starting Kafka in the background again, I saw that it exited abnormally.

[root@wu1 ~]# nohup /opt/kafka_2.13-2.7.0/bin/kafka-server-start.sh /opt/kafka_2.13-2.7.0/config/server.properties 1>/dev/null 2>&1 &	# start Kafka in the background
[1] 10818
[root@wu1 ~]# ps -ef |grep kafka
root     11962  6522  0 19:40 pts/1    00:00:00 grep --color=auto kafka
[1]+  Exit 1                  nohup /opt/kafka_2.13-2.7.0/bin/kafka-server-start.sh /opt/kafka_2.13-2.7.0/config/server.properties > /dev/null 2>&1	# i.e. Kafka exited abnormally on startup -- think of exit(1) in C. shutdown.sh printed the same message when stopping Tomcat, but I ignored it then.

So I switched to starting Kafka in the foreground, where the error output is visible.

That revealed the real cause: Kafka could not start because the machine was out of memory.

[root@wu1 ~]# /opt/kafka_2.13-2.7.0/bin/kafka-server-start.sh /opt/kafka_2.13-2.7.0/config/server.properties
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000c0000000, 1073741824, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 1073741824 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /root/hs_err_pid2857.log
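The JVM failed to mmap 1073741824 bytes, i.e. the 1 GiB heap Kafka requests by default. As a sketch (my own addition, not from the post), you can compare Linux's MemAvailable against that figure before starting the broker, instead of discovering the shortfall from a crash:

```shell
# Compare available memory against the heap Kafka will try to reserve.
# 1 GiB matches the mmap size in the error above; adjust to your
# KAFKA_HEAP_OPTS if you have changed them.
need_kb=$((1024 * 1024))     # 1 GiB expressed in kB
avail_kb=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
if [ "${avail_kb:-0}" -lt "$need_kb" ]; then
    echo "only ${avail_kb} kB available; a ${need_kb} kB heap will likely fail"
fi
```

MemAvailable requires kernel 3.14+, which CentOS 7 provides.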

Fixes

1. Kill some unused processes to free up memory.

# Kill some unused processes to free up memory
[root@wu1 ~]# ps aux|head -1;ps aux|grep -v PID|sort -rn -k +4|head	# list the 10 processes using the most memory (%MEM)
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root     19093  1.1 18.4 3660052 689448 ?      Sl   08:57   3:47 /usr/java/jdk1.8.0_281-amd64/bin/java -Djava.util.logging.config.file=/www/server/tomcat/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djdk.tls.ephemeralDHKeySize=2048 -Djava.protocol.handler.pkgs=org.apache.catalina.webresources -classpath /www/server/tomcat/bin/bootstrap.jar:/www/server/tomcat/bin/tomcat-juli.jar -Dcatalina.base=/www/server/tomcat -Dcatalina.home=/www/server/tomcat -Djava.io.tmpdir=/www/server/tomcat/temp org.apache.catalina.startup.Bootstrap start
root     29554  2.5 17.2 3690736 644352 pts/5  Sl   13:37   1:03 /usr/java/jdk1.8.0_281-amd64/bin/java -Djava.util.logging.config.file=/www/server/tomcat/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djdk.tls.ephemeralDHKeySize=2048 -Djava.protocol.handler.pkgs=org.apache.catalina.webresources -classpath /www/server/tomcat/bin/bootstrap.jar:/www/server/tomcat/bin/tomcat-juli.jar -Dcatalina.base=/www/server/tomcat -Dcatalina.home=/www/server/tomcat -Djava.io.tmpdir=/www/server/tomcat/temp org.apache.catalina.startup.Bootstrap start
nowcode+ 10025  0.1 17.2 3167672 642680 ?      Sl   Apr29  36:06 /usr/java/jdk1.8.0_281-amd64/bin/java -Xshare:auto -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dio.netty.allocator.numDirectArenas=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.locale.providers=SPI,JRE -Xms256m -Xmx512m -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.io.tmpdir=/tmp/elasticsearch-2476432088821252700 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=data -XX:ErrorFile=logs/hs_err_pid%p.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -Xloggc:logs/gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=32 -XX:GCLogFileSize=64m -XX:MaxDirectMemorySize=268435456 -Des.path.home=/opt/elasticsearch-7.12.0 -Des.path.conf=/opt/elasticsearch-7.12.0/config -Des.distribution.flavor=default -Des.distribution.type=tar -Des.bundled_jdk=true -cp /opt/elasticsearch-7.12.0/lib/* org.elasticsearch.bootstrap.Elasticsearch -d
mysql     3802  0.2 10.0 1812572 373940 ?      Sl   10:16   0:32 /www/server/mysql/bin/mysqld --basedir=/www/server/mysql --datadir=/www/server/data --plugin-dir=/www/server/mysql/lib/plugin --user=mysql --log-error=wu1.err --open-files-limit=65535 --pid-file=/www/server/data/wu1.pid --socket=/tmp/mysql.sock --port=3306
root     28511  0.0  1.9 3059008 71064 pts/5   Sl   13:31   0:02 /usr/java/jdk1.8.0_281-amd64/bin/java -Xmx512M -Xms512M -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -XX:MaxInlineLevel=15 -Djava.awt.headless=true -Xloggc:/opt/kafka_2.13-2.7.0/bin/../logs/zookeeper-gc.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dkafka.logs.dir=/opt/kafka_2.13-2.7.0/bin/../logs -Dlog4j.configuration=file:/opt/kafka_2.13-2.7.0/bin/../config/log4j.properties -cp %JAVA_HOME%/lib:%JAVA_HOME%/jre/lib:/opt/kafka_2.13-2.7.0/bin/../libs/activation-1.1.1.jar:/opt/kafka_2.13-2.7.0/bin/../libs/aopalliance-repackaged-2.6.1.jar:/opt/kafka_2.13-2.7.0/bin/../libs/argparse4j-0.7.0.jar:/opt/kafka_2.13-2.7.0/bin/../libs/audience-annotations-0.5.0.jar:/opt/kafka_2.13-2.7.0/bin/../libs/commons-cli-1.4.jar:/opt/kafka_2.13-2.7.0/bin/../libs/commons-lang3-3.8.1.jar:/opt/kafka_2.13-2.7.0/bin/../libs/connect-api-2.7.0.jar:/opt/kafka_2.13-2.7.0/bin/../libs/connect-basic-auth-extension-2.7.0.jar:/opt/kafka_2.13-2.7.0/bin/../libs/connect-file-2.7.0.jar:/opt/kafka_2.13-2.7.0/bin/../libs/connect-json-2.7.0.jar:/opt/kafka_2.13-2.7.0/bin/../libs/connect-mirror-2.7.0.jar:/opt/kafka_2.13-2.7.0/bin/../libs/connect-mirror-client-2.7.0.jar:/opt/kafka_2.13-2.7.0/bin/../libs/connect-runtime-2.7.0.jar:/opt/kafka_2.13-2.7.0/bin/../libs/connect-transforms-2.7.0.jar:/opt/kafka_2.13-2.7.0/bin/../libs/hk2-api-2.6.1.jar:/opt/kafka_2.13-2.7.0/bin/../libs/hk2-locator-2.6.1.jar:/opt/kafka_2.13-2.7.0/bin/../libs/hk2-utils-2.6.1.jar:/opt/kafka_2.13-2.7.0/bin/../libs/jackson-annotations-2.10.5.jar:/opt/kafka_2.13-2.7.0/bin/../libs/jackson-core-2.10.5.jar:/opt/kafka_2.13-2.7.0/bin/../libs/jackson-databind-2.10.5.1.jar:/opt/kafka_2.13-2
.7.0/bin/../libs/jackson-dataformat-csv-2.10.5.jar:/opt/kafka_2.13-2.7.0/bin/../libs/jackson-datatype-jdk8-2.10.5.jar:/opt/kafka_2.13-2.7.0/bin/../libs/jackson-jaxrs-base-2.10.5.jar:/opt/kafka_2.13-2.7.0/bin/../libs/jackson-jaxrs-json-provider-2.10.5.jar:/opt/kafka_2.13-2.7.0/bin/../libs/jackson-module-jaxb-annotations-2.10.5.jar:/opt/kafka_2.13-2.7.0/bin/../libs/jackson-module-paranamer-2.10.5.jar:/opt/kafka_2.13-2.7.0/bin/../libs/jackson-module-scala_2.13-2.10.5.jar:/opt/kafka_2.13-2.7.0/bin/../libs/jakarta.activation-api-1.2.1.jar:/opt/kafka_2.13-2.7.0/bin/../libs/jakarta.annotation-api-1.3.5.jar:/opt/kafka_2.13-2.7.0/bin/../libs/jakarta.inject-2.6.1.jar:/opt/kafka_2.13-2.7.0/bin/../libs/jakarta.validation-api-2.0.2.jar:/opt/kafka_2.13-2.7.0/bin/../libs/jakarta.ws.rs-api-2.1.6.jar:/opt/kafka_2.13-2.7.0/bin/../libs/jakarta.xml.bind-api-2.3.2.jar:/opt/kafka_2.13-2.7.0/bin/../libs/javassist-3.25.0-GA.jar:/opt/kafka_2.13-2.7.0/bin/../libs/javassist-3.26.0-GA.jar:/opt/kafka_2.13-2.7.0/bin/../libs/javax.servlet-api-3.1.0.jar:/opt/kafka_2.13-2.7.0/bin/../libs/javax.ws.rs-api-2.1.1.jar:/opt/kafka_2.13-2.7.0/bin/../libs/jaxb-api-2.3.0.jar:/opt/kafka_2.13-2.7.0/bin/../libs/jersey-client-2.31.jar:/opt/kafka_2.13-2.7.0/bin/../libs/jersey-common-2.31.jar:/opt/kafka_2.13-2.7.0/bin/../libs/jersey-container-servlet-2.31.jar:/opt/kafka_2.13-2.7.0/bin/../libs/jersey-container-servlet-core-2.31.jar:/opt/kafka_2.13-2.7.0/bin/../libs/jersey-hk2-2.31.jar:/opt/kafka_2.13-2.7.0/bin/../libs/jersey-media-jaxb-2.31.jar:/opt/kafka_2.13-2.7.0/bin/../libs/jersey-server-2.31.jar:/opt/kafka_2.13-2.7.0/bin/../libs/jetty-client-9.4.33.v20201020.jar:/opt/kafka_2.13-2.7.0/bin/../libs/jetty-continuation-9.4.33.v20201020.jar:/opt/kafka_2.13-2.7.0/bin/../libs/jetty-http-9.4.33.v20201020.jar:/opt/kafka_2.13-2.7.0/bin/../libs/jetty-io-9.4.33.v20201020.jar:/opt/kafka_2.13-2.7.0/bin/../libs/jetty-security-9.4.33.v20201020.jar:/opt/kafka_2.13-2.7.0/bin/../libs/jetty-server-9.4.33.v20201020.jar:/opt/kafka_2
.13-2.7.0/bin/../libs/jetty-servlet-9.4.33.v20201020.jar:/opt/kafka_2.13-2.7.0/bin/../libs/jetty-servlets-9.4.33.v20201020.jar:/opt/kafka_2.13-2.7.0/bin/../libs/jetty-util-9.4.33.v20201020.jar:/opt/kafka_2.13-2.7.0/bin/../libs/jopt-simple-5.0.4.jar:/opt/kafka_2.13-2.7.0/bin/../libs/kafka_2.13-2.7.0.jar:/opt/kafka_2.13-2.7.0/bin/../libs/kafka_2.13-2.7.0-sources.jar:/opt/kafka_2.13-2.7.0/bin/../libs/kafka-clients-2.7.0.jar:/opt/kafka_2.13-2.7.0/bin/../libs/kafka-log4j-appender-2.7.0.jar:/opt/kafka_2.13-2.7.0/bin/../libs/kafka-raft-2.7.0.jar:/opt/kafka_2.13-2.7.0/bin/../libs/kafka-streams-2.7.0.jar:/opt/kafka_2.13-2.7.0/bin/../libs/kafka-streams-examples-2.7.0.jar:/opt/kafka_2.13-2.7.0/bin/../libs/kafka-streams-scala_2.13-2.7.0.jar:/opt/kafka_2.13-2.7.0/bin/../libs/kafka-streams-test-utils-2.7.0.jar:/opt/kafka_2.13-2.7.0/bin/../libs/kafka-tools-2.7.0.jar:/opt/kafka_2.13-2.7.0/bin/../libs/log4j-1.2.17.jar:/opt/kafka_2.13-2.7.0/bin/../libs/lz4-java-1.7.1.jar:/opt/kafka_2.13-2.7.0/bin/../libs/maven-artifact-3.6.3.jar:/opt/kafka_2.13-2.7.0/bin/../libs/metrics-core-2.2.0.jar:/opt/kafka_2.13-2.7.0/bin/../libs/netty-buffer-4.1.51.Final.jar:/opt/kafka_2.13-2.7.0/bin/../libs/netty-codec-4.1.51.Final.jar:/opt/kafka_2.13-2.7.0/bin/../libs/netty-common-4.1.51.Final.jar:/opt/kafka_2.13-2.7.0/bin/../libs/netty-handler-4.1.51.Final.jar:/opt/kafka_2.13-2.7.0/bin/../libs/netty-resolver-4.1.51.Final.jar:/opt/kafka_2.13-2.7.0/bin/../libs/netty-transport-4.1.51.Final.jar:/opt/kafka_2.13-2.7.0/bin/../libs/netty-transport-native-epoll-4.1.51.Final.jar:/opt/kafka_2.13-2.7.0/bin/../libs/netty-transport-native-unix-common-4.1.51.Final.jar:/opt/kafka_2.13-2.7.0/bin/../libs/osgi-resource-locator-1.0.3.jar:/opt/kafka_2.13-2.7.0/bin/../libs/paranamer-2.8.jar:/opt/kafka_2.13-2.7.0/bin/../libs/plexus-utils-3.2.1.jar:/opt/kafka_2.13-2.7.0/bin/../libs/reflections-0.9.12.jar:/opt/kafka_2.13-2.7.0/bin/../libs/rocksdbjni-5.18.4.jar:/opt/kafka_2.13-2.7.0/bin/../libs/scala-collection-compat_2.13-2.2.0.jar:
/opt/kafka_2.13-2.7.0/bin/../libs/scala-java8-compat_2.13-0.9.1.jar:/opt/kafka_2.13-2.7.0/bin/../libs/scala-library-2.13.3.jar:/opt/kafka_2.13-2.7.0/bin/../libs/scala-logging_2.13-3.9.2.jar:/opt/kafka_2.13-2.7.0/bin/../libs/scala-reflect-2.13.3.jar:/opt/kafka_2.13-2.7.0/bin/../libs/slf4j-api-1.7.30.jar:/opt/kafka_2.13-2.7.0/bin/../libs/slf4j-log4j12-1.7.30.jar:/opt/kafka_2.13-2.7.0/bin/../libs/snappy-java-1.1.7.7.jar:/opt/kafka_2.13-2.7.0/bin/../libs/zookeeper-3.5.8.jar:/opt/kafka_2.13-2.7.0/bin/../libs/zookeeper-jute-3.5.8.jar:/opt/kafka_2.13-2.7.0/bin/../libs/zstd-jni-1.4.5-6.jar org.apache.zookeeper.server.quorum.QuorumPeerMain /opt/kafka_2.13-2.7.0/config/zookeeper.properties
root     25609  0.0  1.7 215712 66392 ?        Sl   Apr16   6:09 /usr/bin/fdfs_storaged /etc/fdfs/storage.conf
root      1355  0.2  1.6 566104 63368 ?        Sl   Mar12 217:53 /www/server/panel/pyenv/bin/python /www/server/panel/BT-Panel
root     16895  0.0  1.3 601708 50120 ?        Ssl  Mar19   8:44 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
root      1329  0.1  1.1 869068 41168 ?        Sl   Mar12  99:14 /www/server/panel/pyenv/bin/python /www/server/panel/BT-Task
root     25246  0.0  0.8 363056 31288 ?        Ssl  Mar18   0:16 /usr/bin/python2 -Es /usr/sbin/firewalld --nofork --nopid
[root@wu1 ~]# kill 25609

2. Lower the default configurations so each service needs less memory at startup.

ElasticSearch

[root@wu1 ~]# vim /opt/elasticsearch-7.12.0/config/jvm.options	# uncomment and set the values below; both options must start at the beginning of the line
-Xms256m
-Xmx512m

Kafka
File to edit: /opt/kafka_2.13-2.7.0/bin/kafka-server-start.sh

Originally: export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
Changed to: export KAFKA_HEAP_OPTS="-Xmx256M -Xms128M"
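A sketch of applying that heap change non-interactively with sed instead of editing by hand (my own addition; the path and sizes mirror the manual edit above, and a .bak backup of the script is kept):

```shell
# Rewrite KAFKA_HEAP_OPTS in place, keeping the original script as
# kafka-server-start.sh.bak, then show the resulting line.
KAFKA_HOME=/opt/kafka_2.13-2.7.0
sed -i.bak 's/-Xmx1G -Xms1G/-Xmx256M -Xms128M/' "$KAFKA_HOME/bin/kafka-server-start.sh"
grep KAFKA_HEAP_OPTS "$KAFKA_HOME/bin/kafka-server-start.sh"
```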

Zookeeper
File to edit: /opt/kafka_2.13-2.7.0/bin/zookeeper-server-start.sh

Originally: export KAFKA_HEAP_OPTS="-Xmx512M -Xms512M" (lower it in the same way as for Kafka)

Startup succeeded

[root@wu1 ~]# /opt/kafka_2.13-2.7.0/bin/kafka-server-start.sh /opt/kafka_2.13-2.7.0/config/server.properties
[2021-05-17 16:28:24,132] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2021-05-17 16:28:24,563] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
[2021-05-17 16:28:24,629] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
[2021-05-17 16:28:24,635] INFO starting (kafka.server.KafkaServer)
[2021-05-17 16:28:24,636] INFO Connecting to zookeeper on localhost:2181 (kafka.server.KafkaServer)
[2021-05-17 16:28:24,665] INFO [ZooKeeperClient Kafka server] Initializing a new session to localhost:2181. (kafka.zookeeper.ZooKeeperClient)
[2021-05-17 16:28:24,672] INFO Client environment:zookeeper.version=3.5.8-f439ca583e70862c3068a1f2a7d4d068eec33315, built on 05/04/2020 15:53 GMT (org.apache.zookeeper.ZooKeeper)
[2021-05-17 16:28:24,672] INFO Client environment:host.name=wu1 (org.apache.zookeeper.ZooKeeper)
[2021-05-17 16:28:24,672] INFO Client environment:java.version=1.8.0_281 (org.apache.zookeeper.ZooKeeper)
......

Other notes

When I first installed Kafka, it started successfully, meaning the command below ran without errors.

[root@wu1 ~]# /opt/kafka_2.13-2.7.0/bin/kafka-topics.sh --list --bootstrap-server localhost:9092	# no error here means the broker started successfully

So it never occurred to me that Kafka would later fail to start for lack of memory.

The course instructor only said the project needs a 2-core/4 GB server to run, without mentioning that it only runs after lowering the default memory settings, and the videos never showed that step. When my Kafka would not start, I searched the log error message and assumed I had misconfigured something; I never suspected Kafka simply had not started because memory ran out. That cost me a lot of time.

References

This post reminded me to switch from starting Kafka in the background to the foreground so the error output is visible: "kafka启动后闪退-已解决" - Lucky小黄人_ - 博客园 (cnblogs.com)
