SERVICE_UNAVAILABLE - backing Kafka cluster has not completed booting; try again later


This post covers the error SERVICE_UNAVAILABLE - backing Kafka cluster has not completed booting; try again later, raised by a Hyperledger Fabric orderer backed by a Kafka cluster. The docker-compose service definitions and the orderer logs follow.

zookeeper0:
    extends:
        file: docker-compose-base.yml
        service: zookeeper
    container_name: zookeeper0
    environment:
        - ZOO_MY_ID=1
        - ZOO_SERVERS=server.1=zookeeper0:2888:3888 server.2=zookeeper1:2888:3888 server.3=zookeeper2:2888:3888
    networks:
      behave:
         aliases:
           - ${CORE_PEER_NETWORKID}

zookeeper1:
    extends:
        file: docker-compose-base.yml
        service: zookeeper
    container_name: zookeeper1
    environment:
        - ZOO_MY_ID=2
        - ZOO_SERVERS=server.1=zookeeper0:2888:3888 server.2=zookeeper1:2888:3888 server.3=zookeeper2:2888:3888
    networks:
      behave:
         aliases:
           - ${CORE_PEER_NETWORKID}

zookeeper2:
    extends:
        file: docker-compose-base.yml
        service: zookeeper
    container_name: zookeeper2
    environment:
        - ZOO_MY_ID=3
        - ZOO_SERVERS=server.1=zookeeper0:2888:3888 server.2=zookeeper1:2888:3888 server.3=zookeeper2:2888:3888
    networks:
      behave:
         aliases:
           - ${CORE_PEER_NETWORKID}

kafka0:
    extends:
        file: docker-compose-base.yml
        service: kafka
    container_name: kafka0
    environment:
        - KAFKA_BROKER_ID=0
        - KAFKA_ZOOKEEPER_CONNECT=zookeeper0:2181,zookeeper1:2181,zookeeper2:2181
        - KAFKA_MESSAGE_MAX_BYTES=${KAFKA_MESSAGE_MAX_BYTES}
        - KAFKA_REPLICA_FETCH_MAX_BYTES=${KAFKA_REPLICA_FETCH_MAX_BYTES}
        - KAFKA_REPLICA_FETCH_RESPONSE_MAX_BYTES=${KAFKA_REPLICA_FETCH_RESPONSE_MAX_BYTES}
    depends_on:
        - zookeeper0
        - zookeeper1
        - zookeeper2
    networks:
      behave:
         aliases:
           - ${CORE_PEER_NETWORKID}

kafka1:
    extends:
        file: docker-compose-base.yml
        service: kafka
    container_name: kafka1
    environment:
        - KAFKA_BROKER_ID=1
        - KAFKA_ZOOKEEPER_CONNECT=zookeeper0:2181,zookeeper1:2181,zookeeper2:2181
        - KAFKA_MESSAGE_MAX_BYTES=${KAFKA_MESSAGE_MAX_BYTES}
        - KAFKA_REPLICA_FETCH_MAX_BYTES=${KAFKA_REPLICA_FETCH_MAX_BYTES}
        - KAFKA_REPLICA_FETCH_RESPONSE_MAX_BYTES=${KAFKA_REPLICA_FETCH_RESPONSE_MAX_BYTES}
    depends_on:
        - zookeeper0
        - zookeeper1
        - zookeeper2
    networks:
      behave:
         aliases:
           - ${CORE_PEER_NETWORKID}

kafka2:
    extends:
        file: docker-compose-base.yml
        service: kafka
    container_name: kafka2
    environment:
        - KAFKA_BROKER_ID=2
        - KAFKA_ZOOKEEPER_CONNECT=zookeeper0:2181,zookeeper1:2181,zookeeper2:2181
        - KAFKA_MESSAGE_MAX_BYTES=${KAFKA_MESSAGE_MAX_BYTES}
        - KAFKA_REPLICA_FETCH_MAX_BYTES=${KAFKA_REPLICA_FETCH_MAX_BYTES}
        - KAFKA_REPLICA_FETCH_RESPONSE_MAX_BYTES=${KAFKA_REPLICA_FETCH_RESPONSE_MAX_BYTES}
    depends_on:
        - zookeeper0
        - zookeeper1
        - zookeeper2
    networks:
      behave:
         aliases:
           - ${CORE_PEER_NETWORKID}

kafka3:
    extends:
        file: docker-compose-base.yml
        service: kafka
    container_name: kafka3
    environment:
        - KAFKA_BROKER_ID=3
        - KAFKA_ZOOKEEPER_CONNECT=zookeeper0:2181,zookeeper1:2181,zookeeper2:2181
        - KAFKA_MESSAGE_MAX_BYTES=${KAFKA_MESSAGE_MAX_BYTES}
        - KAFKA_REPLICA_FETCH_MAX_BYTES=${KAFKA_REPLICA_FETCH_MAX_BYTES}
        - KAFKA_REPLICA_FETCH_RESPONSE_MAX_BYTES=${KAFKA_REPLICA_FETCH_RESPONSE_MAX_BYTES}
    depends_on:
        - zookeeper0
        - zookeeper1
        - zookeeper2
    networks:
      behave:
         aliases:
           - ${CORE_PEER_NETWORKID}
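
The orderer only rejects broadcasts with this error while the brokers are still registering with ZooKeeper and electing a partition leader for the channel topic. A quick readiness check, shown below as a sketch only (it assumes the container names above and a stock zookeeper image that ships zkCli.sh on the PATH), is to wait until all four broker ids appear under /brokers/ids before driving traffic through the orderer:

    # Sketch: wait until all four Kafka brokers have registered with ZooKeeper
    # before sending traffic through the orderer. Assumes the container names
    # above and a stock zookeeper image with zkCli.sh on the PATH.
    brokers_ready() {
        # The last line of zkCli output is the znode list, e.g. "[0, 1, 2, 3]".
        local ids
        ids=$(docker exec zookeeper0 zkCli.sh -server zookeeper0:2181 \
              ls /brokers/ids 2>/dev/null | tail -n 1)
        for id in 0 1 2 3; do
            [[ "$ids" == *"$id"* ]] || return 1
        done
    }

    until brokers_ready; do
        echo "Kafka brokers still registering with ZooKeeper; retrying in 5s..."
        sleep 5
    done
    echo "All four brokers registered; the orderer can now set up its topic."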


Orderer logs:
vagrant@vagrant:~/workspace/kafka-ordering-master$ docker logs orderer0.example.com
    2019-02-15 09:08:02.775 UTC [localconfig] completeInitialization -> INFO 001 Kafka.Version unset, setting to 0.10.2.0
    2019-02-15 09:08:03.466 UTC [orderer.common.server] prettyPrintStruct -> INFO 002 Orderer config values:
            General.LedgerType = "ram"
            General.ListenAddress = "0.0.0.0"
            General.ListenPort = 7050
            General.TLS.Enabled = false
            General.TLS.PrivateKey = "/var/hyperledger/tls/server.key"
            General.TLS.Certificate = "/var/hyperledger/tls/server.crt"
            General.TLS.RootCAs = [/var/hyperledger/tls/ca.crt]
            General.TLS.ClientAuthRequired = false
            General.TLS.ClientRootCAs = []
            General.Cluster.RootCAs = [/etc/hyperledger/fabric/tls/ca.crt]
            General.Cluster.ClientCertificate = ""
            General.Cluster.ClientPrivateKey = ""
            General.Cluster.DialTimeout = 5s
            General.Cluster.RPCTimeout = 7s
            General.Cluster.ReplicationBufferSize = 20971520
            General.Cluster.ReplicationPullTimeout = 5s
            General.Cluster.ReplicationRetryTimeout = 5s
            General.Keepalive.ServerMinInterval = 1m0s
            General.Keepalive.ServerInterval = 2h0m0s
            General.Keepalive.ServerTimeout = 20s
            General.GenesisMethod = "file"
            General.GenesisProfile = "SampleInsecureKafka"
            General.SystemChannel = "test-system-channel-name"
            General.GenesisFile = "/var/hyperledger/configs/orderer.block"
            General.Profile.Enabled = false
            General.Profile.Address = "0.0.0.0:6060"
            General.LocalMSPDir = "/var/hyperledger/msp"
            General.LocalMSPID = "OrdererMSP"
            General.BCCSP.ProviderName = "SW"
            General.BCCSP.SwOpts.SecLevel = 256
            General.BCCSP.SwOpts.HashFamily = "SHA2"
            General.BCCSP.SwOpts.Ephemeral = false
            General.BCCSP.SwOpts.FileKeystore.KeyStorePath = "/var/hyperledger/msp/keystore"
            General.BCCSP.SwOpts.DummyKeystore =
            General.BCCSP.SwOpts.InmemKeystore =
            General.BCCSP.PluginOpts =
            General.Authentication.TimeWindow = 15m0s
            FileLedger.Location = "/var/hyperledger/production/orderer"
            FileLedger.Prefix = "hyperledger-fabric-ordererledger"
            RAMLedger.HistorySize = 1000
            Kafka.Retry.ShortInterval = 1s
            Kafka.Retry.ShortTotal = 30s
            Kafka.Retry.LongInterval = 5m0s
            Kafka.Retry.LongTotal = 12h0m0s
            Kafka.Retry.NetworkTimeouts.DialTimeout = 10s
            Kafka.Retry.NetworkTimeouts.ReadTimeout = 10s
            Kafka.Retry.NetworkTimeouts.WriteTimeout = 10s
            Kafka.Retry.Metadata.RetryMax = 3
            Kafka.Retry.Metadata.RetryBackoff = 250ms
            Kafka.Retry.Producer.RetryMax = 3
            Kafka.Retry.Producer.RetryBackoff = 100ms
            Kafka.Retry.Consumer.RetryBackoff = 2s
            Kafka.Verbose = true
            Kafka.Version = 0.10.2.0
            Kafka.TLS.Enabled = false
            Kafka.TLS.PrivateKey = ""
            Kafka.TLS.Certificate = ""
            Kafka.TLS.RootCAs = []
            Kafka.TLS.ClientAuthRequired = false
            Kafka.TLS.ClientRootCAs = []
            Kafka.SASLPlain.Enabled = false
            Kafka.SASLPlain.User = ""
            Kafka.SASLPlain.Password = ""
            Kafka.Topic.ReplicationFactor = 3
            Debug.BroadcastTraceDir = ""
            Debug.DeliverTraceDir = ""
            Consensus = map[SnapDir:/var/hyperledger/production/orderer/etcdraft/snapshot WALDir:/var/hyperledger/production/orderer/etcdraft/wal]
            Operations.ListenAddress = "127.0.0.1:8443"
            Operations.TLS.Enabled = false
            Operations.TLS.PrivateKey = ""
            Operations.TLS.Certificate = ""
            Operations.TLS.RootCAs = []
            Operations.TLS.ClientAuthRequired = false
            Operations.TLS.ClientRootCAs = []
            Metrics.Provider = "disabled"
            Metrics.Statsd.Network = "udp"
            Metrics.Statsd.Address = "127.0.0.1:8125"
            Metrics.Statsd.WriteInterval = 30s
            Metrics.Statsd.Prefix = ""
    2019-02-15 09:08:03.763 UTC [orderer.consensus.kafka] newChain -> INFO 003 [channel: testchainid] Starting chain with last persisted offset -3 and last recorded block 0
    2019-02-15 09:08:03.787 UTC [orderer.commmon.multichannel] Initialize -> INFO 004 Starting system channel 'testchainid' with genesis block hash 5f2c3828df168808a899ecce5d7d7306c36cc615464ed0d54b4846155cc3979d and orderer type kafka
    2019-02-15 09:08:03.787 UTC [orderer.common.server] Start -> INFO 005 Starting orderer:
     Version: 1.4.0
     Commit SHA: d700b43
     Go version: go1.11.1
     OS/Arch: linux/amd64
    2019-02-15 09:08:03.787 UTC [orderer.common.server] Start -> INFO 006 Beginning to serve requests
    2019-02-15 09:08:03.800 UTC [orderer.consensus.kafka] setupTopicForChannel -> INFO 007 [channel: testchainid] Setting up the topic for this channel...
    2019-02-15 09:08:31.401 UTC [orderer.common.broadcast] ProcessMessage -> WARN 008 [channel: mychannel] Rejecting broadcast of message from 172.25.0.12:39536 with SERVICE_UNAVAILABLE: rejected by Consenter: backing Kafka cluster has not completed booting; try again later
    2019-02-15 09:08:31.422 UTC [comm.grpc.server] 1 -> INFO 009 streaming call completed {"grpc.start_time": "2019-02-15T09:08:31.041Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Broadcast", "grpc.peer_address": "172.25.0.12:39536", "grpc.code": "OK", "grpc.call_duration": "380.924298ms"}
    2019-02-15 09:08:31.453 UTC [common.deliver] Handle -> WARN 00a Error reading from 172.25.0.12:39534: rpc error: code = Canceled desc = context canceled
    2019-02-15 09:08:31.460 UTC [comm.grpc.server] 1 -> INFO 00b streaming call completed {"grpc.start_time": "2019-02-15T09:08:31.036Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "172.25.0.12:39534", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "423.701471ms"}
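
The timeline in these logs matches the retry settings printed above: topic setup starts at 09:08:03 and the broadcast from 172.25.0.12 is rejected at 09:08:31, i.e. the orderer is still inside its Kafka.Retry.ShortInterval = 1s / Kafka.Retry.ShortTotal = 30s window, waiting for the Kafka cluster to finish booting. On slow machines it can help to widen that window. A minimal sketch, assuming Fabric's usual ORDERER_<SECTION>_<KEY> environment overrides of orderer.yaml keys and a placeholder service name for the orderer:

    orderer0:   # placeholder name for the orderer service in docker-compose
        environment:
            # Assumption: these follow Fabric's ORDERER_-prefixed override
            # convention; they widen the short-retry window the orderer spends
            # waiting for the Kafka cluster to finish booting.
            - ORDERER_KAFKA_RETRY_SHORTINTERVAL=5s
            - ORDERER_KAFKA_RETRY_SHORTTOTAL=120s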
Answer

It happened to me too. I solved it as follows: run Kitematic -> click my image (in the top-right corner) -> click Create.
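
Recreating the containers this way works because it gives the Kafka cluster time to finish booting before the first broadcast arrives. An alternative that avoids the GUI is simply retrying the submission; in the sketch below, the peer channel create command and file names are placeholders for whatever produced the rejected Broadcast above:

    # Sketch: retry submission until the Kafka cluster finishes booting.
    # "peer channel create ..." and mychannel.tx are placeholders; substitute
    # the actual command that produced the rejected Broadcast above.
    for attempt in $(seq 1 10); do
        if peer channel create -o orderer0.example.com:7050 \
               -c mychannel -f mychannel.tx; then
            echo "Channel created on attempt $attempt."
            break
        fi
        echo "Orderer not ready (attempt $attempt); retrying in 10s..."
        sleep 10
    done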
