Supplementary notes on using kafka-python

Posted by 沧海一粟,何以久远


  1. kafka-python sends heartbeats from a dedicated thread, at a fixed interval (heartbeat_interval_ms, default 3000 ms).
  2. member_id uniquely identifies one client-side consumer.
  3. In group mode, if another consumer joins or leaves the same group while a consumer is connected, the group rebalances, but the member_id of consumers that were already connected stays the same, which keeps those existing consumers stable.
  4. A consumer that disconnects and then reconnects is assigned a new member_id.
  5. While a rebalance is in progress, the group cannot consume.
  6. Heartbeats serve two purposes: they keep the current session alive, and they let the client learn promptly when the server has group rebalance information.
  7. Turning on all of Django's log output exposes the debug messages exchanged between kafka-python and the Kafka broker, which helps in understanding the interaction (see the logging sketch after this list).
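
A minimal sketch of enabling that debug output with Python's standard logging module (no Django required; kafka-python logs under the "kafka" logger namespace):

    import logging

    # Send kafka-python's internal logs to stderr at DEBUG level so the
    # group join / heartbeat / fetch exchanges with the broker are visible.
    logging.basicConfig(
        level=logging.DEBUG,
        format="%(asctime)s %(levelname)s %(name)s %(message)s",
    )
    logging.getLogger("kafka").setLevel(logging.DEBUG)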

 

 

An example:

Suppose topic test in group xx has two partitions. With only one consumer A, A must consume both partitions by itself. If a second consumer B joins, it takes over one of the partitions. If B later leaves group xx, the remaining consumer A again has to consume both partitions; likewise, if A stops consuming, B must handle both. Within a group, each partition is consumed by exactly one consumer at a time. (A minimal sketch of such a group consumer follows.)
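
A hedged sketch of such a group consumer, assuming a broker at localhost:9092 and the topic and group names above (all placeholders):

    from kafka import KafkaConsumer

    # Each process running this joins group "xx"; Kafka spreads the two
    # partitions of "test" across the live members and rebalances
    # automatically when members join or leave.
    consumer = KafkaConsumer(
        "test",
        group_id="xx",
        bootstrap_servers=["localhost:9092"],
        auto_offset_reset="earliest",
    )
    for msg in consumer:
        print(msg.topic, msg.partition, msg.offset, msg.value)

Running a second copy of this script triggers the rebalance described above, after which each process owns one partition.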

 

Configuration details (from the KafkaConsumer docstring):

 

"""Consume records from a Kafka cluster.

    The consumer will transparently handle the failure of servers in the Kafka
    cluster, and adapt as topic-partitions are created or migrate between
    brokers. It also interacts with the assigned kafka Group Coordinator node
    to allow multiple consumers to load balance consumption of topics (requires
    kafka >= 0.9.0.0).

    The consumer is not thread safe and should not be shared across threads.

    Arguments:
        *topics (str): optional list of topics to subscribe to. If not set,
            call :meth:`~kafka.KafkaConsumer.subscribe` or
            :meth:`~kafka.KafkaConsumer.assign` before consuming records.

    Keyword Arguments:
        bootstrap_servers: host[:port] string (or list of host[:port]
            strings) that the consumer should contact to bootstrap initial
            cluster metadata. This does not have to be the full node list.
            It just needs to have at least one broker that will respond to a
            Metadata API Request. Default port is 9092. If no servers are
            specified, will default to localhost:9092.
        client_id (str): A name for this client. This string is passed in
            each request to servers and can be used to identify specific
            server-side log entries that correspond to this client. Also
            submitted to GroupCoordinator for logging with respect to
            consumer group administration. Default: kafka-python-{version}
        group_id (str or None): The name of the consumer group to join for dynamic
            partition assignment (if enabled), and to use for fetching and
            committing offsets. If None, auto-partition assignment (via
            group coordinator) and offset commits are disabled.
            Default: None   (note: this parameter must be set when using subscribe)
        key_deserializer (callable): Any callable that takes a
            raw message key and returns a deserialized key.
        value_deserializer (callable): Any callable that takes a
            raw message value and returns a deserialized value.
        fetch_min_bytes (int): Minimum amount of data the server should
            return for a fetch request, otherwise wait up to
            fetch_max_wait_ms for more data to accumulate. Default: 1.
        fetch_max_wait_ms (int): The maximum amount of time in milliseconds
            the server will block before answering the fetch request if
            there isn't sufficient data to immediately satisfy the
            requirement given by fetch_min_bytes. Default: 500.  (note: even if
            fetch_min_bytes has not been met, the fetch returns as soon as
            fetch_max_wait_ms expires)
        fetch_max_bytes (int): The maximum amount of data the server should
            return for a fetch request. This is not an absolute maximum, if the
            first message in the first non-empty partition of the fetch is
            larger than this value, the message will still be returned to
            ensure that the consumer can make progress. NOTE: consumer performs
            fetches to multiple brokers in parallel so memory usage will depend
            on the number of brokers containing partitions for the topic.
            Supported Kafka version >= 0.10.1.0. Default: 52428800 (50 MB).
        max_partition_fetch_bytes (int): The maximum amount of data
            per-partition the server will return. The maximum total memory
            used for a request = #partitions * max_partition_fetch_bytes.
            This size must be at least as large as the maximum message size
            the server allows or else it is possible for the producer to
            send messages larger than the consumer can fetch. If that
            happens, the consumer can get stuck trying to fetch a large
            message on a certain partition. Default: 1048576.
        request_timeout_ms (int): Client request timeout in milliseconds.
            Default: 305000.
        retry_backoff_ms (int): Milliseconds to backoff when retrying on
            errors. Default: 100.
        reconnect_backoff_ms (int): The amount of time in milliseconds to
            wait before attempting to reconnect to a given host.
            Default: 50.        (note: backoff works roughly like this: after
            the first failed connection attempt, wait a while and retry; if
            that fails, wait longer before trying again, and so on for each
            further failure)
        reconnect_backoff_max_ms (int): The maximum amount of time in
            milliseconds to wait when reconnecting to a broker that has
            repeatedly failed to connect. If provided, the backoff per host
            will increase exponentially for each consecutive connection
            failure, up to this maximum. To avoid connection storms, a
            randomization factor of 0.2 will be applied to the backoff
            resulting in a random range between 20% below and 20% above
            the computed value. Default: 1000.
        max_in_flight_requests_per_connection (int): Requests are pipelined
            to kafka brokers up to this number of maximum requests per
            broker connection. Default: 5.
        auto_offset_reset (str): A policy for resetting offsets on
            OffsetOutOfRange errors: 'earliest' will move to the oldest
            available message, 'latest' will move to the most recent. Any
            other value will raise the exception. Default: 'latest'.
        enable_auto_commit (bool): If True, the consumer's offset will be
            periodically committed in the background. Default: True.
        auto_commit_interval_ms (int): Number of milliseconds between automatic
            offset commits, if enable_auto_commit is True. Default: 5000.
        default_offset_commit_callback (callable): Called as
            callback(offsets, response); response will be either an Exception
            or an OffsetCommitResponse struct. This callback can be used to
            trigger custom actions when a commit request completes.
        check_crcs (bool): Automatically check the CRC32 of the records
            consumed. This ensures no on-the-wire or on-disk corruption to
            the messages occurred. This check adds some overhead, so it may
            be disabled in cases seeking extreme performance. Default: True
        metadata_max_age_ms (int): The period of time in milliseconds after
            which we force a refresh of metadata, even if we haven't seen any
            partition leadership changes to proactively discover any new
            brokers or partitions. Default: 300000
        partition_assignment_strategy (list): List of objects to use to
            distribute partition ownership amongst consumer instances when
            group management is used.
            Default: [RangePartitionAssignor, RoundRobinPartitionAssignor]
        max_poll_records (int): The maximum number of records returned in a
            single call to :meth:`~kafka.KafkaConsumer.poll`. Default: 500
        max_poll_interval_ms (int): The maximum delay between invocations of
            :meth:`~kafka.KafkaConsumer.poll` when using consumer group
            management. This places an upper bound on the amount of time that
            the consumer can be idle before fetching more records. If
            :meth:`~kafka.KafkaConsumer.poll` is not called before expiration
            of this timeout, then the consumer is considered failed and the
            group will rebalance in order to reassign the partitions to another
            member. Default: 300000   (note: the maximum interval between two
            poll() calls; if it is reached, the consumer is considered failed,
            the group rebalances, and the partitions this consumer held are
            assigned to other consumers)
        session_timeout_ms (int): The timeout used to detect failures when
            using Kafka's group management facilities. The consumer sends
            periodic heartbeats to indicate its liveness to the broker. If
            no heartbeats are received by the broker before the expiration of
            this session timeout, then the broker will remove this consumer
            from the group and initiate a rebalance. Note that the value must
            be in the allowable range as configured in the broker configuration
            by group.min.session.timeout.ms and group.max.session.timeout.ms.
            Default: 10000    (note: if the Kafka broker has not received a
            heartbeat from the client when the session timeout expires, it
            removes this consumer, rebalances, and assigns its partitions to
            other consumers)
        heartbeat_interval_ms (int): The expected time in milliseconds
            between heartbeats to the consumer coordinator when using
            Kafka's group management facilities. Heartbeats are used to ensure
            that the consumer's session stays active and to facilitate
            rebalancing when new consumers join or leave the group. The
            value must be set lower than session_timeout_ms, but typically
            should be set no higher than 1/3 of that value. It can be
            adjusted even lower to control the expected time for normal
            rebalances. Default: 3000  (note: the interval between heartbeat
            messages)
        receive_buffer_bytes (int): The size of the TCP receive buffer
            (SO_RCVBUF) to use when reading data. Default: None (relies on
            system defaults). The java client defaults to 32768.
        send_buffer_bytes (int): The size of the TCP send buffer
            (SO_SNDBUF) to use when sending data. Default: None (relies on
            system defaults). The java client defaults to 131072.
        socket_options (list): List of tuple-arguments to socket.setsockopt
            to apply to broker connection sockets. Default:
            [(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)]
        consumer_timeout_ms (int): number of milliseconds to block during
            message iteration before raising StopIteration (i.e., ending the
            iterator). Default block forever [float('inf')].
        skip_double_compressed_messages (bool): A bug in KafkaProducer <= 1.2.4
            caused some messages to be corrupted via double-compression.
            By default, the fetcher will return these messages as a compressed
            blob of bytes with a single offset, i.e. how the message was
            actually published to the cluster. If you prefer to have the
            fetcher automatically detect corrupt messages and skip them,
            set this option to True. Default: False.
        security_protocol (str): Protocol used to communicate with brokers.
            Valid values are: PLAINTEXT, SSL. Default: PLAINTEXT.
        ssl_context (ssl.SSLContext): Pre-configured SSLContext for wrapping
            socket connections. If provided, all other ssl_* configurations
            will be ignored. Default: None.
        ssl_check_hostname (bool): Flag to configure whether ssl handshake
            should verify that the certificate matches the broker's hostname.
            Default: True.
        ssl_cafile (str): Optional filename of ca file to use in certificate
            verification. Default: None.
        ssl_certfile (str): Optional filename of file in pem format containing
            the client certificate, as well as any ca certificates needed to
            establish the certificate's authenticity. Default: None.
        ssl_keyfile (str): Optional filename containing the client private key.
            Default: None.
        ssl_password (str): Optional password to be used when loading the
            certificate chain. Default: None.
        ssl_crlfile (str): Optional filename containing the CRL to check for
            certificate expiration. By default, no CRL check is done. When
            providing a file, only the leaf certificate will be checked against
            this CRL. The CRL can only be checked with Python 3.4+ or 2.7.9+.
            Default: None.
        api_version (tuple): Specify which Kafka API version to use. If set to
            None, the client will attempt to infer the broker version by probing
            various APIs. Different versions enable different functionality.

            Examples:
                (0, 9) enables full group coordination features with automatic
                    partition assignment and rebalancing,
                (0, 8, 2) enables kafka-storage offset commits with manual
                    partition assignment only,
                (0, 8, 1) enables zookeeper-storage offset commits with manual
                    partition assignment only,
                (0, 8, 0) enables basic functionality but requires manual
                    partition assignment and offset management.

            Default: None
        api_version_auto_timeout_ms (int): number of milliseconds to throw a
            timeout exception from the constructor when checking the broker
            api version. Only applies if api_version set to auto
        connections_max_idle_ms: Close idle connections after the number of
            milliseconds specified by this config. The broker closes idle
            connections after connections.max.idle.ms, so this avoids hitting
            unexpected socket disconnected errors on the client.
            Default: 540000    (note: the idle time the server will tolerate
            from a client; my guess is that once the client has sent nothing to
            the server (heartbeats etc.) for longer than this, the connection
            is dropped entirely)
        metric_reporters (list): A list of classes to use as metrics reporters.
            Implementing the AbstractMetricsReporter interface allows plugging
            in classes that will be notified of new metric creation. Default: []
        metrics_num_samples (int): The number of samples maintained to compute
            metrics. Default: 2
        metrics_sample_window_ms (int): The maximum age in milliseconds of
            samples used to compute metrics. Default: 30000
        selector (selectors.BaseSelector): Provide a specific selector
            implementation to use for I/O multiplexing.
            Default: selectors.DefaultSelector
        exclude_internal_topics (bool): Whether records from internal topics
            (such as offsets) should be exposed to the consumer. If set to True
            the only way to receive records from an internal topic is
            subscribing to it. Requires 0.10+ Default: True
        sasl_mechanism (str): String picking sasl mechanism when security_protocol
            is SASL_PLAINTEXT or SASL_SSL. Currently only PLAIN is supported.
            Default: None
        sasl_plain_username (str): Username for sasl PLAIN authentication.
            Default: None
        sasl_plain_password (str): Password for sasl PLAIN authentication.
            Default: None
        sasl_kerberos_service_name (str): Service name to include in GSSAPI
            sasl mechanism handshake. Default: kafka
        sasl_kerberos_domain_name (str): kerberos domain name to use in GSSAPI
            sasl mechanism handshake. Default: one of bootstrap servers

    Note:
        Configuration parameters are described in more detail at
        https://kafka.apache.org/documentation/#newconsumerconfigs
    """

 
