Offline Deployment of a Highly Available Zookeeper Cluster with Helm


This helm chart provides an implementation of the ZooKeeper StatefulSet found in the Kubernetes Contrib Zookeeper StatefulSet example.

This chart will do the following:

You can install the chart with the release name myzk as below.
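A minimal sketch of the install command, assuming Helm 2 syntax and that the chart has been fetched for offline use as a local archive (the file name is illustrative):

helm install --name myzk ./zookeeper-chart.tgz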

If you do not specify a name, helm will select a name for you.

You can use kubectl get to view all of the installed components.
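For example, assuming the chart labels its resources with the release name (a common convention, but check the chart's templates):

kubectl get pods,statefulsets,services -l release=myzk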

You can specify each parameter using the --set key=value[,key=value] argument to helm install.
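A sketch of a command-line override; the parameter name servers is illustrative and must be checked against the chart's values.yaml:

helm install --name myzk --set servers=5 ./zookeeper-chart.tgz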

Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,
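A minimal sketch, with illustrative keys that should be verified against the chart's own values.yaml:

cat > values.yaml <<EOF
servers: 5
heap: "1G"
EOF
helm install --name myzk -f values.yaml ./zookeeper-chart.tgz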

The configuration parameters in this section control the resources requested and utilized by the ZooKeeper ensemble.

These parameters control the network ports on which the ensemble communicates.

ZooKeeper uses the Zab protocol to replicate its state machine across the ensemble. The following parameters control the timeouts for the protocol.

ZooKeeper writes its WAL (Write Ahead Log) and periodic snapshots to storage media. These parameters control the retention policy for snapshots and WAL segments. If you do not configure the ensemble to automatically periodically purge snapshots and logs, it is important to implement such a mechanism yourself. Otherwise, you will eventually exhaust all available storage media.
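For reference, the underlying ZooKeeper settings that enable automatic purging look like this in zoo.cfg (the values are examples; the chart typically exposes them as parameters):

autopurge.snapRetainCount=3
autopurge.purgeInterval=1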

Spreading allows you to specify an anti-affinity between ZooKeeper servers in the ensemble. This will prevent the Pods from being scheduled on the same node.
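Roughly, the chart renders a standard Kubernetes podAntiAffinity rule into the Pod spec, along these lines (the label values here are illustrative):

affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: zookeeper
          release: myzk
      topologyKey: kubernetes.io/hostname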

In order to allow for the default installation to work well with the log rolling and retention policy of Kubernetes, all logs are written to stdout. This should also be compatible with logging integrations such as Google Cloud Logging and ELK.

The servers in the ensemble have both liveness and readiness checks specified. These parameters can be used to tune the sensitivity of the liveness and readiness checks.

This parameter controls when the image is pulled from the repository.

The image used for this chart is based on Ubuntu 16.04 LTS. This image is larger than Alpine or BusyBox, but it provides glibc, rather than uclibc or musl, and a JVM release that is built against it. You can easily convert this chart to run against a smaller image with a JVM that is built against that image's libc. However, as far as we know, no Hadoop vendor supports, or has verified, ZooKeeper running on such a JVM.

The Java Virtual Machine used for this chart is the OpenJDK JVM 8u111 JRE (headless).

The ZooKeeper version is the latest stable version (3.4.9). The distribution is installed into /opt/zookeeper-3.4.9. This directory is symbolically linked to /opt/zookeeper. Symlinks are created to simulate an rpm installation into /usr.

You can test failover by killing the leader. Insert a key:
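For example, using the zkCli.sh shipped in the image (the znode name /foo is arbitrary):

kubectl exec myzk-0 -- /opt/zookeeper/bin/zkCli.sh create /foo bar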

Watch existing members:
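One way to do this is to query each server's role (a sketch; assumes the three Pods are myzk-0, myzk-1 and myzk-2):

for i in 0 1 2; do kubectl exec myzk-$i -- /opt/zookeeper/bin/zkServer.sh status; done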

Delete Pods and wait for the StatefulSet controller to bring them back up:
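For example (myzk-1 stands in for whichever Pod is currently the leader):

kubectl delete pod myzk-1
kubectl get pods -w -l release=myzk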

Check the previously inserted key:
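Reading the key back from another member confirms the data survived the failover:

kubectl exec myzk-2 -- /opt/zookeeper/bin/zkCli.sh get /foo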

ZooKeeper cannot be safely scaled in versions prior to 3.5.x. There are manual procedures for scaling an ensemble, but as noted in the ZooKeeper 3.5.2 documentation, these procedures require a rolling restart, are known to be error prone, and often result in data loss.

While ZooKeeper 3.5.x does allow for dynamic ensemble reconfiguration (including scaling membership), the current status of the release is still alpha, and it is not recommended for production use.

HA ActiveMQ Cluster Deployment (Based on Replicated LevelDB Store + Zookeeper)

ActiveMQ clustering has three modes (a figure illustrating them appears in the official documentation):

1. The first mode is based on a shared file system, e.g. NFS or GlusterFS.

2. The second mode shares a single database.

3. The third mode relies on Zookeeper to coordinate the distributed brokers.

 

The following documents the installation and configuration of the third approach, using three cloud servers (m1, m1s1, m1s2).

 

1. Installing Zookeeper

Download the release tarball from the official site and extract it on each of the three machines. Under the zookeeper root directory, create data and logs folders, then copy conf/zoo_sample.cfg to zoo.cfg (the default configuration file name) and configure it as follows:

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/home/taydawn/zookeeper/data
dataLogDir=/home/taydawn/zookeeper/logs
# the port at which the clients will connect
clientPort=2181
server.1=0.0.0.0:2888:3888
server.2=m1s1:2888:3888
server.3=m1s2:2888:3888
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

The configuration on the three machines is similar. If the three machines are on the same LAN, the server entries can use host names directly; otherwise, each machine must replace its own host name with 0.0.0.0 in its local zoo.cfg (as server.1 does in the example above), or the ensemble will fail to start.
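A sketch of the per-node preparation implied above; note that ZooKeeper also requires a myid file whose number matches that node's server.N entry:

# on each node, after extracting the tarball into /home/taydawn/zookeeper
cd /home/taydawn/zookeeper
mkdir -p data logs
cp conf/zoo_sample.cfg conf/zoo.cfg   # then edit as shown above
echo 1 > data/myid                    # 1 on m1, 2 on m1s1, 3 on m1s2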

To check the server information of a Zookeeper instance exposed on the public network:

echo envi | nc xx.xx.xx.xx 2181

 

2. Installing ActiveMQ

Download the release tarball from the official site and extract it on each of the three machines. Edit conf/activemq.xml and replace the persistence configuration as below; the broker name must be identical on all three machines.

<persistenceAdapter>
	<!-- <kahaDB directory="${activemq.data}/kahadb"/> -->
	<replicatedLevelDB directory="${activemq.data}/leveldb"
			replicas="3"
			bind="tcp://0.0.0.0:0"
			zkAddress="m1:2181,m1s1:2181,m1s2:2181"
			hostname="m1s1"
			zkPath="/activemq/leveldb-stores"
	/>
</persistenceAdapter>

The configuration on the three machines is similar (hostname is set to each machine's own host name). zkPath is the znode path that will be created in Zookeeper; you can log in with zkCli.sh -server m1:2181 to inspect it. The bind attribute may specify a fixed port, e.g. tcp://0.0.0.0:61619; tcp://0.0.0.0:0 means ActiveMQ picks a free port dynamically.
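For example, after the brokers have started you can confirm their registrations under the configured zkPath:

zkCli.sh -server m1:2181
ls /activemq/leveldb-stores   # run inside the zkCli shell; lists one child znode per broker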

 

3. Startup

Start every Zookeeper node first, one after another, then start the three ActiveMQ nodes in the same way; for example:
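# run on each of m1, m1s1 and m1s2: start Zookeeper first
/home/taydawn/zookeeper/bin/zkServer.sh start
# then start ActiveMQ on every node (run from the ActiveMQ installation directory)
bin/activemq start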

 

Related articles

1. Cluster configuration reference

2. Zookeeper security issues

 
