Codis + Redis Cluster Setup and Management
Posted by 战神V祝福
Codis is a distributed Redis solution. To upstream applications, connecting to a Codis Proxy is essentially the same as connecting to a native Redis server (apart from a small list of unsupported commands), so applications can use it just like a standalone Redis. Underneath, Codis handles request forwarding, live data migration without downtime, and so on; everything behind the proxy is transparent to the client, which can simply treat the backend as one Redis service with effectively unlimited memory.
Codis consists of four components:
Codis Proxy (codis-proxy)
Codis Dashboard (codis-config)
Codis Redis (codis-server)
ZooKeeper/Etcd
codis-proxy is the Redis proxy service that clients connect to. It implements the Redis protocol itself and behaves just like a native Redis server (much like Twemproxy). A single application can run multiple codis-proxy instances, since codis-proxy itself is stateless.
codis-config is the management tool for Codis. It supports operations such as adding/removing Redis nodes, adding/removing proxy nodes, and initiating data migrations. codis-config also ships with a built-in HTTP server that serves a dashboard, so you can watch the cluster's state directly in a browser.
codis-server is a Redis fork maintained by the Codis project, based on Redis 2.8.21, with added slot support and atomic data-migration commands. The codis-proxy and codis-config layers can only interoperate with this version of Redis.
Codis relies on ZooKeeper to store the data routing table and the metadata of codis-proxy nodes; commands issued by codis-config are propagated through ZooKeeper to every live codis-proxy.
Codis separates different products by namespace: products with different product names never conflict in any of their configuration.
————————————————————————————
(Source: the official GitHub documentation, https://github.com/CodisLabs/codis/blob/master/doc/tutorial_zh.md)
This article involves six servers. Their IP plan and architecture overview:
[root@codis-server1 ~]# cat /etc/hosts
# Local
127.0.0.1 localhost
# Pub
# Server
10.158.1.94 codis-server1
10.158.1.95 codis-server2
# Ha
10.158.1.96 codisha1
10.158.1.97 codisha2
# Zookeeper
10.158.1.98 zookeeper1
10.158.1.99 zookeeper2
[root@codis-server1 ~]#
Part 0: all servers.
Disable the firewall:
chkconfig iptables off
Disable SELinux:
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/" /etc/selinux/config
Enable the YUM package cache:
sed -i "/keepcache/s/0/1/" /etc/yum.conf
Add a public DNS server to the NIC configuration:
echo "DNS1=114.114.114.114" >> /etc/sysconfig/network-scripts/ifcfg-eth0
service network restart
ping baidu.com -c 3
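The SELinux sed expression is easy to get wrong, so it may be worth rehearsing it against a scratch copy before touching the real file. A minimal sketch (the /tmp path below is a hypothetical stand-in for /etc/selinux/config):

```shell
# Rehearse the SELinux sed edit on a throwaway copy (hypothetical path).
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > /tmp/selinux-config
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /tmp/selinux-config
# The file should now carry exactly one SELINUX= line, set to disabled.
grep '^SELINUX=' /tmp/selinux-config
```

Note that editing the config file only takes effect after a reboot; `setenforce 0` turns enforcement off for the running system immediately.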
Generate the YUM repository cache:
[root@codis-server1 ~]# ls -ltr /etc/yum.repos.d/
total 16
-rw-r--r--. 1 root root 2593 Jun 26 2012 CentOS-Vault.repo
-rw-r--r--. 1 root root 626 Jun 26 2012 CentOS-Media.repo
-rw-r--r--. 1 root root 637 Jun 26 2012 CentOS-Debuginfo.repo
-rw-r--r--. 1 root root 1926 Jun 26 2012 CentOS-Base.repo
[root@codis-server1 ~]#
[root@codis-server1 ~]# yum makecache
Loaded plugins: fastestmirror, refresh-packagekit, security
Loading mirror speeds from cached hostfile
* base: mirrors.163.com
* extras: mirrors.163.com
* updates: mirrors.163.com
base | 3.7 kB 00:00
base/group_gz | 226 kB 00:00
base/filelists_db | 6.4 MB 00:13
base/primary_db | 4.7 MB 00:08
base/other_db | 2.8 MB 00:03
extras | 3.4 kB 00:00
extras/filelists_db | 38 kB 00:00
extras/prestodelta | 1.3 kB 00:00
extras/primary_db | 37 kB 00:00
extras/other_db | 51 kB 00:00
updates | 3.4 kB 00:00
updates/filelists_db | 1.1 MB 00:01
updates/prestodelta | 114 kB 00:00
updates/primary_db | 1.4 MB 00:01
updates/other_db | 17 MB 00:50
Metadata Cache Created
[root@codis-server1 ~]#
Java support:
[root@codis-server1 ~]# yum list | grep --color ^java | grep --color jdk
java-1.6.0-openjdk.x86_64 1:1.6.0.0-1.45.1.11.1.el6 @anaconda-CentOS-201207061011.x86_64/6.3
java-1.6.0-openjdk.x86_64 1:1.6.0.39-1.13.11.1.el6_8 updates
java-1.6.0-openjdk-demo.x86_64 1:1.6.0.39-1.13.11.1.el6_8 updates
java-1.6.0-openjdk-devel.x86_64 1:1.6.0.39-1.13.11.1.el6_8 updates
java-1.6.0-openjdk-javadoc.x86_64 1:1.6.0.39-1.13.11.1.el6_8 updates
java-1.6.0-openjdk-src.x86_64 1:1.6.0.39-1.13.11.1.el6_8 updates
java-1.7.0-openjdk.x86_64 1:1.7.0.111-2.6.7.2.el6_8 updates
java-1.7.0-openjdk-demo.x86_64 1:1.7.0.111-2.6.7.2.el6_8 updates
java-1.7.0-openjdk-devel.x86_64 1:1.7.0.111-2.6.7.2.el6_8 updates
java-1.7.0-openjdk-javadoc.noarch 1:1.7.0.111-2.6.7.2.el6_8 updates
java-1.7.0-openjdk-src.x86_64 1:1.7.0.111-2.6.7.2.el6_8 updates
java-1.8.0-openjdk.x86_64 1:1.8.0.101-3.b13.el6_8 updates
java-1.8.0-openjdk-debug.x86_64 1:1.8.0.101-3.b13.el6_8 updates
java-1.8.0-openjdk-demo.x86_64 1:1.8.0.101-3.b13.el6_8 updates
java-1.8.0-openjdk-demo-debug.x86_64 1:1.8.0.101-3.b13.el6_8 updates
java-1.8.0-openjdk-devel.x86_64 1:1.8.0.101-3.b13.el6_8 updates
java-1.8.0-openjdk-devel-debug.x86_64 1:1.8.0.101-3.b13.el6_8 updates
java-1.8.0-openjdk-headless.x86_64 1:1.8.0.101-3.b13.el6_8 updates
java-1.8.0-openjdk-headless-debug.x86_64 1:1.8.0.101-3.b13.el6_8 updates
java-1.8.0-openjdk-javadoc.noarch 1:1.8.0.101-3.b13.el6_8 updates
java-1.8.0-openjdk-javadoc-debug.noarch 1:1.8.0.101-3.b13.el6_8 updates
java-1.8.0-openjdk-src.x86_64 1:1.8.0.101-3.b13.el6_8 updates
java-1.8.0-openjdk-src-debug.x86_64 1:1.8.0.101-3.b13.el6_8 updates
[root@codis-server1 ~]#
[root@codis-server1 ~]# yum install java-1.8.0-openjdk
Loaded plugins: fastestmirror, refresh-packagekit, security
Loading mirror speeds from cached hostfile
* base: mirrors.163.com
* extras: mirrors.163.com
* updates: mirrors.163.com
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package java-1.8.0-openjdk.x86_64 1:1.8.0.101-3.b13.el6_8 will be installed
--> Processing Dependency: java-1.8.0-openjdk-headless = 1:1.8.0.101-3.b13.el6_8 for package: 1:java-1.8.0-openjdk-1.8.0.101-3.b13.el6_8.x86_64
--> Processing Dependency: libjpeg.so.62(LIBJPEG_6.2)(64bit) for package: 1:java-1.8.0-openjdk-1.8.0.101-3.b13.el6_8.x86_64
--> Running transaction check
---> Package java-1.8.0-openjdk-headless.x86_64 1:1.8.0.101-3.b13.el6_8 will be installed
--> Processing Dependency: tzdata-java >= 2014f-1 for package: 1:java-1.8.0-openjdk-headless-1.8.0.101-3.b13.el6_8.x86_64
---> Package libjpeg.x86_64 0:6b-46.el6 will be obsoleted
---> Package libjpeg-turbo.x86_64 0:1.2.1-3.el6_5 will be obsoleting
--> Running transaction check
---> Package tzdata-java.noarch 0:2012c-1.el6 will be updated
---> Package tzdata-java.noarch 0:2016f-1.el6 will be an update
--> Finished Dependency Resolution
Dependencies Resolved
=====================================================================================================================
Package Arch Version Repository Size
=====================================================================================================================
Installing:
java-1.8.0-openjdk x86_64 1:1.8.0.101-3.b13.el6_8 updates 197 k
libjpeg-turbo x86_64 1.2.1-3.el6_5 base 174 k
replacing libjpeg.x86_64 6b-46.el6
Installing for dependencies:
java-1.8.0-openjdk-headless x86_64 1:1.8.0.101-3.b13.el6_8 updates 32 M
Updating for dependencies:
tzdata-java noarch 2016f-1.el6 updates 180 k
Transaction Summary
=====================================================================================================================
Install 3 Package(s)
Upgrade 1 Package(s)
Total download size: 32 M
Is this ok [y/N]: y
Downloading Packages:
(1/4): java-1.8.0-openjdk-1.8.0.101-3.b13.el6_8.x86_64.rpm | 197 kB 00:00
(2/4): java-1.8.0-openjdk-headless-1.8.0.101-3.b13.el6_8.x86_64.rpm | 32 MB 01:12
(3/4): libjpeg-turbo-1.2.1-3.el6_5.x86_64.rpm | 174 kB 00:00
(4/4): tzdata-java-2016f-1.el6.noarch.rpm | 180 kB 00:00
---------------------------------------------------------------------------------------------------------------------
Total 450 kB/s | 32 MB 01:13
warning: rpmts_HdrFromFdno: Header V3 RSA/SHA1 Signature, key ID c105b9de: NOKEY
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
Importing GPG key 0xC105B9DE:
Userid : CentOS-6 Key (CentOS 6 Official Signing Key) <centos-6-key@centos.org>
Package: centos-release-6-3.el6.centos.9.x86_64 (@anaconda-CentOS-201207061011.x86_64/6.3)
From : /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
Is this ok [y/N]: y
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : libjpeg-turbo-1.2.1-3.el6_5.x86_64 1/6
Updating : tzdata-java-2016f-1.el6.noarch 2/6
Installing : 1:java-1.8.0-openjdk-headless-1.8.0.101-3.b13.el6_8.x86_64 3/6
Installing : 1:java-1.8.0-openjdk-1.8.0.101-3.b13.el6_8.x86_64 4/6
Cleanup : tzdata-java-2012c-1.el6.noarch 5/6
Erasing : libjpeg-6b-46.el6.x86_64 6/6
Verifying : 1:java-1.8.0-openjdk-headless-1.8.0.101-3.b13.el6_8.x86_64 1/6
Verifying : 1:java-1.8.0-openjdk-1.8.0.101-3.b13.el6_8.x86_64 2/6
Verifying : tzdata-java-2016f-1.el6.noarch 3/6
Verifying : libjpeg-turbo-1.2.1-3.el6_5.x86_64 4/6
Verifying : libjpeg-6b-46.el6.x86_64 5/6
Verifying : tzdata-java-2012c-1.el6.noarch 6/6
Installed:
java-1.8.0-openjdk.x86_64 1:1.8.0.101-3.b13.el6_8 libjpeg-turbo.x86_64 0:1.2.1-3.el6_5
Dependency Installed:
java-1.8.0-openjdk-headless.x86_64 1:1.8.0.101-3.b13.el6_8
Dependency Updated:
tzdata-java.noarch 0:2016f-1.el6
Replaced:
libjpeg.x86_64 0:6b-46.el6
Complete!
[root@codis-server1 ~]#
[root@codis-server1 ~]# java -version
openjdk version "1.8.0_101"
OpenJDK Runtime Environment (build 1.8.0_101-b13)
OpenJDK 64-Bit Server VM (build 25.101-b13, mixed mode)
[root@codis-server1 ~]#
Part I: deploying ZooKeeper.
Official project page:
http://zookeeper.apache.org/
Official release download page:
http://zookeeper.apache.org/releases.html
Here I download the latest version at the time of writing, 3.5.2-alpha.
Download link:
https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/zookeeper-3.5.2-alpha/zookeeper-3.5.2-alpha.tar.gz
Deploy ZooKeeper on the two hosts 10.158.1.98 and 10.158.1.99.
(Only the setup on one of the two nodes is shown here; the second node is configured the same way.)
Upload the software package to the server:
[root@zookeeper1 ~]# mkdir /software
[root@zookeeper1 ~]#
[root@zookeeper1 ~]# ls -ltr /software
total 18012
-rw-r--r-- 1 root root 18443679 Aug 19 2016 zookeeper-3.5.2-alpha.tar.gz
[root@zookeeper1 ~]#
[root@zookeeper1 ~]# du -sh /software
18M /software
[root@zookeeper1 ~]#
Extract it:
[root@zookeeper1 ~]# cd /software
[root@zookeeper1 software]# ls
zookeeper-3.5.2-alpha.tar.gz
[root@zookeeper1 software]#
[root@zookeeper1 software]# tar -xzf zookeeper-3.5.2-alpha.tar.gz
[root@zookeeper1 software]#
[root@zookeeper1 software]# ls
zookeeper-3.5.2-alpha zookeeper-3.5.2-alpha.tar.gz
[root@zookeeper1 software]# cd zookeeper-3.5.2-alpha
[root@zookeeper1 zookeeper-3.5.2-alpha]# ls
bin contrib ivy.xml README_packaging.txt zookeeper-3.5.2-alpha.jar
build.xml dist-maven lib README.txt zookeeper-3.5.2-alpha.jar.asc
CHANGES.txt docs LICENSE.txt recipes zookeeper-3.5.2-alpha.jar.md5
conf ivysettings.xml NOTICE.txt src zookeeper-3.5.2-alpha.jar.sha1
[root@zookeeper1 zookeeper-3.5.2-alpha]#
Move it into the ZooKeeper installation directory, /opt/zookeeper:
[root@zookeeper1 zookeeper-3.5.2-alpha]# mkdir /opt/zookeeper
[root@zookeeper1 zookeeper-3.5.2-alpha]# cp -rf * /opt/zookeeper/
[root@zookeeper1 zookeeper-3.5.2-alpha]# ls /opt/zookeeper/
bin contrib ivy.xml README_packaging.txt zookeeper-3.5.2-alpha.jar
build.xml dist-maven lib README.txt zookeeper-3.5.2-alpha.jar.asc
CHANGES.txt docs LICENSE.txt recipes zookeeper-3.5.2-alpha.jar.md5
conf ivysettings.xml NOTICE.txt src zookeeper-3.5.2-alpha.jar.sha1
[root@zookeeper1 zookeeper-3.5.2-alpha]#
Edit the ZooKeeper configuration file:
[root@zookeeper1 zookeeper]# pwd
/opt/zookeeper
[root@zookeeper1 zookeeper]# ls -ltr | grep conf
drwxr-xr-x 2 root root 4096 Aug 19 01:12 conf
[root@zookeeper1 zookeeper]#
[root@zookeeper1 zookeeper]# cd conf/
[root@zookeeper1 conf]# ls
configuration.xsl log4j.properties zoo_sample.cfg
[root@zookeeper1 conf]#
[root@zookeeper1 conf]# vi zoo.cfg
[root@zookeeper1 conf]# cat zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper/zk1/data
dataLogDir=/data/zookeeper/zk1/log
clientPort=2181
server.1=zookeeper1:2287:3387
server.2=zookeeper2:2288:3388
[root@zookeeper1 conf]#
On node two, zookeeper2, the same file reads:
[root@zookeeper2 conf]# cat zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper/zk2/data
dataLogDir=/data/zookeeper/zk2/log
clientPort=2182
server.1=zookeeper1:2287:3387
server.2=zookeeper2:2288:3388
[root@zookeeper2 conf]#
Create the directory structure the configuration above expects:
mkdir -p /data/zookeeper/zk1/{data,log}
Create the myid file (its value must match the server.N entry that names this host):
[zookeeper1]
[root@zookeeper1 conf]# echo "1" > /data/zookeeper/zk1/data/myid
[root@zookeeper1 conf]# cat /data/zookeeper/zk1/data/myid
1
[root@zookeeper1 conf]#
[zookeeper2]
[root@zookeeper2 conf]# echo "2" > /data/zookeeper/zk2/data/myid
[root@zookeeper2 conf]# cat /data/zookeeper/zk2/data/myid
2
[root@zookeeper2 conf]#
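The per-node configuration and myid steps above can be sketched as one loop. The /tmp paths here are illustrative stand-ins; the article itself writes under /data/zookeeper/zkN:

```shell
# Generate zoo.cfg and myid for both nodes in one pass (paths are illustrative).
for id in 1 2; do
  mkdir -p /tmp/zookeeper/zk${id}/data /tmp/zookeeper/zk${id}/log
  cat > /tmp/zookeeper/zk${id}/zoo.cfg <<EOF
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper/zk${id}/data
dataLogDir=/data/zookeeper/zk${id}/log
clientPort=218${id}
server.1=zookeeper1:2287:3387
server.2=zookeeper2:2288:3388
EOF
  # myid must agree with the server.N line that names this host.
  echo "${id}" > /tmp/zookeeper/zk${id}/data/myid
done
# Each node gets its own client port (2181 / 2182).
grep clientPort /tmp/zookeeper/zk1/zoo.cfg /tmp/zookeeper/zk2/zoo.cfg
```

The key invariant is that myid, the dataDir paths, and the clientPort are per-node, while the server.N quorum list is identical on every node.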
Start ZooKeeper:
[root@zookeeper1 zookeeper]# pwd
/opt/zookeeper
[root@zookeeper1 zookeeper]#
[root@zookeeper1 zookeeper]# ls
bin contrib ivy.xml README_packaging.txt zookeeper-3.5.2-alpha.jar
build.xml dist-maven lib README.txt zookeeper-3.5.2-alpha.jar.asc
CHANGES.txt docs LICENSE.txt recipes zookeeper-3.5.2-alpha.jar.md5
conf ivysettings.xml NOTICE.txt src zookeeper-3.5.2-alpha.jar.sha1
[root@zookeeper1 zookeeper]#
[root@zookeeper1 zookeeper]# ls bin/ -ltr
total 48
-rwxr-xr-x 1 root root 9035 Aug 19 01:12 zkServer.sh
-rwxr-xr-x 1 root root 4573 Aug 19 01:12 zkServer-initialize.sh
-rwxr-xr-x 1 root root 1260 Aug 19 01:12 zkServer.cmd
-rwxr-xr-x 1 root root 3460 Aug 19 01:12 zkEnv.sh
-rwxr-xr-x 1 root root 1585 Aug 19 01:12 zkEnv.cmd
-rwxr-xr-x 1 root root 1621 Aug 19 01:12 zkCli.sh
-rwxr-xr-x 1 root root 1128 Aug 19 01:12 zkCli.cmd
-rwxr-xr-x 1 root root 2067 Aug 19 01:12 zkCleanup.sh
-rwxr-xr-x 1 root root 232 Aug 19 01:12 README.txt
[root@zookeeper1 zookeeper]#
[root@zookeeper1 zookeeper]# bin/zkServer.sh start
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@zookeeper1 zookeeper]#
[root@zookeeper1 zookeeper]# ps -ef | grep --color zookeeper
avahi 1387 1 0 00:16 ? 00:00:00 avahi-daemon: running [zookeeper1.local]
root 2680 1 6 01:30 pts/0 00:00:01 java -Dzookeeper.log.dir=/opt/zookeeper/bin/../logs -Dzookeeper.log.file=zookeeper-root-server-zookeeper1.log -Dzookeeper.root.logger=INFO,CONSOLE -XX:+HeapDumpOnOutOfMemoryError -XX:OnOutOfMemoryError=kill -9 %p -cp /opt/zookeeper/bin/../build/classes:/opt/zookeeper/bin/../build/lib/*.jar:/opt/zookeeper/bin/../lib/slf4j-log4j12-1.7.5.jar:/opt/zookeeper/bin/../lib/slf4j-api-1.7.5.jar:/opt/zookeeper/bin/../lib/servlet-api-2.5-20081211.jar:/opt/zookeeper/bin/../lib/netty-3.10.5.Final.jar:/opt/zookeeper/bin/../lib/log4j-1.2.17.jar:/opt/zookeeper/bin/../lib/jline-2.11.jar:/opt/zookeeper/bin/../lib/jetty-util-6.1.26.jar:/opt/zookeeper/bin/../lib/jetty-6.1.26.jar:/opt/zookeeper/bin/../lib/javacc.jar:/opt/zookeeper/bin/../lib/jackson-mapper-asl-1.9.11.jar:/opt/zookeeper/bin/../lib/jackson-core-asl-1.9.11.jar:/opt/zookeeper/bin/../lib/commons-cli-1.2.jar:/opt/zookeeper/bin/../zookeeper-3.5.2-alpha.jar:/opt/zookeeper/bin/../src/java/lib/*.jar:/opt/zookeeper/bin/../conf: -Xmx1000m -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain /opt/zookeeper/bin/../conf/zoo.cfg
root 2710 1924 0 01:31 pts/0 00:00:00 grep --color zookeeper
[root@zookeeper1 zookeeper]#
[root@zookeeper1 zookeeper]# netstat -tupln | grep java
tcp 0 0 :::2181 :::* LISTEN 2680/java
tcp 0 0 :::8080 :::* LISTEN 2680/java
tcp 0 0 :::50714 :::* LISTEN 2680/java
tcp 0 0 ::ffff:10.158.1.98:3387 :::* LISTEN 2680/java
[root@zookeeper1 zookeeper]#
As you can see, the startup succeeded.
Test the ZooKeeper client program:
[zookeeper1]
[root@zookeeper1 zookeeper]# bin/zkCli.sh -server zookeeper1:2181
/usr/bin/java
Connecting to zookeeper1:2181
2016-08-19 01:34:35,967 [myid:] - INFO [main:Environment@109] - Client environment:zookeeper.version=3.5.2-alpha-1750793, built on 06/30/2016 13:15 GMT
2016-08-19 01:34:35,974 [myid:] - INFO [main:Environment@109] - Client environment:host.name=zookeeper1
2016-08-19 01:34:35,975 [myid:] - INFO [main:Environment@109] - Client environment:java.version=1.8.0_101
2016-08-19 01:34:35,979 [myid:] - INFO [main:Environment@109] - Client environment:java.vendor=Oracle Corporation
2016-08-19 01:34:35,979 [myid:] - INFO [main:Environment@109] - Client environment:java.home=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.101-3.b13.el6_8.x86_64/jre
2016-08-19 01:34:35,979 [myid:] - INFO [main:Environment@109] - Client environment:java.class.path=/opt/zookeeper/bin/../build/classes:/opt/zookeeper/bin/../build/lib/*.jar:/opt/zookeeper/bin/../lib/slf4j-log4j12-1.7.5.jar:/opt/zookeeper/bin/../lib/slf4j-api-1.7.5.jar:/opt/zookeeper/bin/../lib/servlet-api-2.5-20081211.jar:/opt/zookeeper/bin/../lib/netty-3.10.5.Final.jar:/opt/zookeeper/bin/../lib/log4j-1.2.17.jar:/opt/zookeeper/bin/../lib/jline-2.11.jar:/opt/zookeeper/bin/../lib/jetty-util-6.1.26.jar:/opt/zookeeper/bin/../lib/jetty-6.1.26.jar:/opt/zookeeper/bin/../lib/javacc.jar:/opt/zookeeper/bin/../lib/jackson-mapper-asl-1.9.11.jar:/opt/zookeeper/bin/../lib/jackson-core-asl-1.9.11.jar:/opt/zookeeper/bin/../lib/commons-cli-1.2.jar:/opt/zookeeper/bin/../zookeeper-3.5.2-alpha.jar:/opt/zookeeper/bin/../src/java/lib/*.jar:/opt/zookeeper/bin/../conf:
2016-08-19 01:34:35,979 [myid:] - INFO [main:Environment@109] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2016-08-19 01:34:35,979 [myid:] - INFO [main:Environment@109] - Client environment:java.io.tmpdir=/tmp
2016-08-19 01:34:35,979 [myid:] - INFO [main:Environment@109] - Client environment:java.compiler=<NA>
2016-08-19 01:34:35,980 [myid:] - INFO [main:Environment@109] - Client environment:os.name=Linux
2016-08-19 01:34:35,980 [myid:] - INFO [main:Environment@109] - Client environment:os.arch=amd64
2016-08-19 01:34:35,980 [myid:] - INFO [main:Environment@109] - Client environment:os.version=2.6.32-279.el6.x86_64
2016-08-19 01:34:35,980 [myid:] - INFO [main:Environment@109] - Client environment:user.name=root
2016-08-19 01:34:35,980 [myid:] - INFO [main:Environment@109] - Client environment:user.home=/root
2016-08-19 01:34:35,981 [myid:] - INFO [main:Environment@109] - Client environment:user.dir=/opt/zookeeper
2016-08-19 01:34:35,981 [myid:] - INFO [main:Environment@109] - Client environment:os.memory.free=236MB
2016-08-19 01:34:35,986 [myid:] - INFO [main:Environment@109] - Client environment:os.memory.max=247MB
2016-08-19 01:34:35,986 [myid:] - INFO [main:Environment@109] - Client environment:os.memory.total=241MB
2016-08-19 01:34:35,990 [myid:] - INFO [main:ZooKeeper@855] - Initiating client connection, connectString=zookeeper1:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@51521cc1
Welcome to ZooKeeper!
2016-08-19 01:34:36,052 [myid:zookeeper1:2181] - INFO [main-SendThread(zookeeper1:2181):ClientCnxn$SendThread@1113] - Opening socket connection to server zookeeper1/10.158.1.98:2181. Will not attempt to authenticate using SASL (unknown error)
JLine support is enabled
2016-08-19 01:34:36,268 [myid:zookeeper1:2181] - INFO [main-SendThread(zookeeper1:2181):ClientCnxn$SendThread@948] - Socket connection established, initiating session, client: /10.158.1.98:53137, server: zookeeper1/10.158.1.98:2181
[zk: zookeeper1:2181(CONNECTING) 0] 2016-08-19 01:34:36,687 [myid:zookeeper1:2181] - INFO [main-SendThread(zookeeper1:2181):ClientCnxn$SendThread@1381] - Session establishment complete on server zookeeper1/10.158.1.98:2181, sessionid = 0x100004492e80000, negotiated timeout = 30000
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
[zk: zookeeper1:2181(CONNECTED) 0]
[zk: zookeeper1:2181(CONNECTED) 0] quit
2016-08-19 01:35:50,836 [myid:] - INFO [main-EventThread:ClientCnxn$EventThread@513] - EventThread shut down for session: 0x100004492e80000
2016-08-19 01:35:50,837 [myid:] - INFO [main:ZooKeeper@1313] - Session: 0x100004492e80000 closed
[root@zookeeper1 zookeeper]#
[zookeeper2]
[root@zookeeper1 zookeeper]# bin/zkCli.sh -server zookeeper2:2182
/usr/bin/java
Connecting to zookeeper2:2182
2016-08-19 01:36:23,110 [myid:] - INFO [main:Environment@109] - Client environment:zookeeper.version=3.5.2-alpha-1750793, built on 06/30/2016 13:15 GMT
2016-08-19 01:36:23,114 [myid:] - INFO [main:Environment@109] - Client environment:host.name=zookeeper1
2016-08-19 01:36:23,114 [myid:] - INFO [main:Environment@109] - Client environment:java.version=1.8.0_101
2016-08-19 01:36:23,118 [myid:] - INFO [main:Environment@109] - Client environment:java.vendor=Oracle Corporation
2016-08-19 01:36:23,118 [myid:] - INFO [main:Environment@109] - Client environment:java.home=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.101-3.b13.el6_8.x86_64/jre
2016-08-19 01:36:23,119 [myid:] - INFO [main:Environment@109] - Client environment:java.class.path=/opt/zookeeper/bin/../build/classes:/opt/zookeeper/bin/../build/lib/*.jar:/opt/zookeeper/bin/../lib/slf4j-log4j12-1.7.5.jar:/opt/zookeeper/bin/../lib/slf4j-api-1.7.5.jar:/opt/zookeeper/bin/../lib/servlet-api-2.5-20081211.jar:/opt/zookeeper/bin/../lib/netty-3.10.5.Final.jar:/opt/zookeeper/bin/../lib/log4j-1.2.17.jar:/opt/zookeeper/bin/../lib/jline-2.11.jar:/opt/zookeeper/bin/../lib/jetty-util-6.1.26.jar:/opt/zookeeper/bin/../lib/jetty-6.1.26.jar:/opt/zookeeper/bin/../lib/javacc.jar:/opt/zookeeper/bin/../lib/jackson-mapper-asl-1.9.11.jar:/opt/zookeeper/bin/../lib/jackson-core-asl-1.9.11.jar:/opt/zookeeper/bin/../lib/commons-cli-1.2.jar:/opt/zookeeper/bin/../zookeeper-3.5.2-alpha.jar:/opt/zookeeper/bin/../src/java/lib/*.jar:/opt/zookeeper/bin/../conf:
2016-08-19 01:36:23,119 [myid:] - INFO [main:Environment@109] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2016-08-19 01:36:23,119 [myid:] - INFO [main:Environment@109] - Client environment:java.io.tmpdir=/tmp
2016-08-19 01:36:23,119 [myid:] - INFO [main:Environment@109] - Client environment:java.compiler=<NA>
2016-08-19 01:36:23,119 [myid:] - INFO [main:Environment@109] - Client environment:os.name=Linux
2016-08-19 01:36:23,119 [myid:] - INFO [main:Environment@109] - Client environment:os.arch=amd64
2016-08-19 01:36:23,120 [myid:] - INFO [main:Environment@109] - Client environment:os.version=2.6.32-279.el6.x86_64
2016-08-19 01:36:23,120 [myid:] - INFO [main:Environment@109] - Client environment:user.name=root
2016-08-19 01:36:23,120 [myid:] - INFO [main:Environment@109] - Client environment:user.home=/root
2016-08-19 01:36:23,121 [myid:] - INFO [main:Environment@109] - Client environment:user.dir=/opt/zookeeper
2016-08-19 01:36:23,121 [myid:] - INFO [main:Environment@109] - Client environment:os.memory.free=236MB
2016-08-19 01:36:23,126 [myid:] - INFO [main:Environment@109] - Client environment:os.memory.max=247MB
2016-08-19 01:36:23,126 [myid:] - INFO [main:Environment@109] - Client environment:os.memory.total=241MB
2016-08-19 01:36:23,130 [myid:] - INFO [main:ZooKeeper@855] - Initiating client connection, connectString=zookeeper2:2182 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@51521cc1
Welcome to ZooKeeper!
2016-08-19 01:36:23,185 [myid:zookeeper2:2182] - INFO [main-SendThread(zookeeper2:2182):ClientCnxn$SendThread@1113] - Opening socket connection to server zookeeper2/10.158.1.99:2182. Will not attempt to authenticate using SASL (unknown error)
JLine support is enabled
2016-08-19 01:36:23,402 [myid:zookeeper2:2182] - INFO [main-SendThread(zookeeper2:2182):ClientCnxn$SendThread@948] - Socket connection established, initiating session, client: /10.158.1.98:38929, server: zookeeper2/10.158.1.99:2182
[zk: zookeeper2:2182(CONNECTING) 0] 2016-08-19 01:36:23,600 [myid:zookeeper2:2182] - INFO [main-SendThread(zookeeper2:2182):ClientCnxn$SendThread@1381] - Session establishment complete on server zookeeper2/10.158.1.99:2182, sessionid = 0x20000448b090000, negotiated timeout = 30000
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
[zk: zookeeper2:2182(CONNECTED) 0]
[zk: zookeeper2:2182(CONNECTED) 0] quit
2016-08-19 01:36:29,061 [myid:] - INFO [main-EventThread:ClientCnxn$EventThread@513] - EventThread shut down for session: 0x20000448b090000
2016-08-19 01:36:29,062 [myid:] - INFO [main:ZooKeeper@1313] - Session: 0x20000448b090000 closed
[root@zookeeper1 zookeeper]#
With that, ZooKeeper is deployed.
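Besides connecting with zkCli.sh, a running ensemble can be probed with ZooKeeper's four-letter commands; a healthy server answers `ruok` with `imok`. A minimal sketch of such a check (the reply is canned here for illustration; on a live system you would capture it with `echo ruok | nc zookeeper1 2181`):

```shell
# Interpret the reply to ZooKeeper's "ruok" probe.
check_ruok() {
  if [ "$1" = "imok" ]; then
    echo "healthy"
  else
    echo "unhealthy"
  fi
}
# Live ensemble: reply=$(echo ruok | nc zookeeper1 2181)
check_ruok "imok"
```

Running bin/zkServer.sh status on each node should likewise report one leader and one follower once both members are up.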
Part II: deploying Codis.
(As above, only the steps on one node are shown in detail; the other node is configured the same way. In my environment there are two Codis servers in total.)
1. Go language support.
Official website: https://golang.org/
Latest version at the time of writing: 1.7.
Official download links:
Linux: https://storage.googleapis.com/golang/go1.7.linux-amd64.tar.gz
Source: https://storage.googleapis.com/golang/go1.7.src.tar.gz
MS Windows: https://storage.googleapis.com/golang/go1.7.windows-amd64.msi
Upload the package to the server:
[root@codis-server1 ~]# mkdir /software
[root@codis-server1 ~]# cd /software/
[root@codis-server1 software]# ls
go1.7.linux-amd64.tar.gz
[root@codis-server1 software]# ls -ltr *
-rw-r--r-- 1 root root 81573766 Aug 19 00:40 go1.7.linux-amd64.tar.gz
[root@codis-server1 software]#
[root@codis-server1 software]# du -sh *
78M go1.7.linux-amd64.tar.gz
[root@codis-server1 software]#
Install Go under /opt/golang.
Create the directory:
[root@codis-server1 software]# ls /opt
rh
[root@codis-server1 software]# mkdir /opt/golang
[root@codis-server1 software]#
[root@codis-server1 software]# ls -ltr /opt
total 8
drwxr-xr-x. 2 root root 4096 Jun 22 2012 rh
drwxr-xr-x 2 root root 4096 Aug 19 09:28 golang
[root@codis-server1 software]#
Extract the package into the installation directory:
[root@codis-server1 software]# pwd
/software
[root@codis-server1 software]#
[root@codis-server1 software]# ls /opt/golang/
[root@codis-server1 software]#
[root@codis-server1 software]# ls
go1.7.linux-amd64.tar.gz
[root@codis-server1 software]#
[root@codis-server1 software]# tar -C /opt/golang/ -xzf go1.7.linux-amd64.tar.gz
[root@codis-server1 software]#
[root@codis-server1 software]# ls -ltr /opt/golang/
total 4
drwxr-xr-x 11 root root 4096 Aug 16 06:51 go
[root@codis-server1 software]# ls -ltr /opt/golang/go/
total 144
-rw-r--r-- 1 root root 1638 Aug 16 06:47 README.md
-rw-r--r-- 1 root root 1303 Aug 16 06:47 PATENTS
-rw-r--r-- 1 root root 1479 Aug 16 06:47 LICENSE
-rw-r--r-- 1 root root 40192 Aug 16 06:47 CONTRIBUTORS
-rw-r--r-- 1 root root 1168 Aug 16 06:47 CONTRIBUTING.md
-rw-r--r-- 1 root root 29041 Aug 16 06:47 AUTHORS
drwxr-xr-x 2 root root 4096 Aug 16 06:47 api
-rw-r--r-- 1 root root 26 Aug 16 06:47 robots.txt
drwxr-xr-x 3 root root 4096 Aug 16 06:47 lib
-rw-r--r-- 1 root root 1150 Aug 16 06:47 favicon.ico
drwxr-xr-x 8 root root 4096 Aug 16 06:47 doc
drwxr-xr-x 45 root root 4096 Aug 16 06:48 src
-rw-r--r-- 1 root root 5 Aug 16 06:48 VERSION
drwxr-xr-x 18 root root 12288 Aug 16 06:51 test
drwxr-xr-x 7 root root 4096 Aug 16 06:51 pkg
drwxr-xr-x 14 root root 4096 Aug 16 06:51 misc
drwxr-xr-x 4 root root 4096 Aug 16 06:51 blog
drwxr-xr-x 2 root root 4096 Aug 16 06:51 bin
[root@codis-server1 software]#
Configure the Go environment variables:
[root@codis-server1 software]# cat ~/.bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
# Golang
export GOROOT=/opt/golang/go
export GOPATH=/data/go_me
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin
[root@codis-server1 software]#
[root@codis-server1 software]# source ~/.bash_profile
[root@codis-server1 software]#
[root@codis-server1 software]# env | grep --color GO
GOROOT=/opt/golang/go
GOPATH=/data/go_me
[root@codis-server1 software]# env | grep --color PATH | grep --color go
PATH=/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/root/bin:/opt/golang/go/bin:/data/go_me/bin
GOPATH=/data/go_me
[root@codis-server1 software]#
Where:
GOROOT is the root directory of the Go toolchain itself;
GOPATH is the root directory of your Go workspace (projects).
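A quick way to sanity-check this layout is to confirm that both variables point at real directories and that their bin/ subdirectories landed on PATH. A sketch, with /tmp values as hypothetical stand-ins for /opt/golang/go and /data/go_me:

```shell
# Sanity-check a GOROOT/GOPATH layout (hypothetical /tmp paths).
export GOROOT=/tmp/golang/go
export GOPATH=/tmp/go_me
mkdir -p "$GOROOT/bin" "$GOPATH/bin"
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin
for d in "$GOROOT" "$GOPATH"; do
  [ -d "$d" ] && echo "ok: $d"
done
# PATH must contain GOROOT/bin so the go binary resolves.
case ":$PATH:" in
  *":$GOROOT/bin:"*) echo "ok: GOROOT/bin on PATH" ;;
esac
```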
Create the Go workspace directory:
[root@codis-server1 software]# ls -ltr / | grep data
[root@codis-server1 software]# mkdir -p /data/go_me
[root@codis-server1 software]#
[root@codis-server1 software]# ls -ltr /data/
total 4
drwxr-xr-x 2 root root 4096 Aug 19 09:36 go_me
[root@codis-server1 software]#
Test that Go works:
[root@codis-server1 software]# go version
go version go1.7 linux/amd64
[root@codis-server1 software]#
[root@codis-server1 software]# go env
GOARCH="amd64"
GOBIN=""
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/data/go_me"
GORACE=""
GOROOT="/opt/golang/go"
GOTOOLDIR="/opt/golang/go/pkg/tool/linux_amd64"
CC="gcc"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0"
CXX="g++"
CGO_ENABLED="1"
[root@codis-server1 software]#
2. Installing Codis.
Codis project page:
https://github.com/CodisLabs/codis
Introduction on OSChina:
http://www.oschina.net/p/codis/?fromerr=PIDoyfcY
Git support:
[root@codis-server1 ~]# yum list | grep --color ^git
git.x86_64 1.7.1-4.el6_7.1 base
git-all.noarch 1.7.1-4.el6_7.1 base
git-cvs.noarch 1.7.1-4.el6_7.1 base
git-daemon.x86_64 1.7.1-4.el6_7.1 base
git-email.noarch 1.7.1-4.el6_7.1 base
git-gui.noarch 1.7.1-4.el6_7.1 base
git-svn.noarch 1.7.1-4.el6_7.1 base
gitk.noarch 1.7.1-4.el6_7.1 base
gitweb.noarch 1.7.1-4.el6_7.1 base
[root@codis-server1 ~]#
[root@codis-server1 ~]# yum install -y git
Loaded plugins: fastestmirror, refresh-packagekit, security
Loading mirror speeds from cached hostfile
* base: mirrors.neusoft.edu.cn
* extras: mirrors.neusoft.edu.cn
* updates: centos.ustc.edu.cn
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package git.x86_64 0:1.7.1-4.el6_7.1 will be installed
--> Processing Dependency: perl-Git = 1.7.1-4.el6_7.1 for package: git-1.7.1-4.el6_7.1.x86_64
--> Processing Dependency: perl(Git) for package: git-1.7.1-4.el6_7.1.x86_64
--> Processing Dependency: perl(Error) for package: git-1.7.1-4.el6_7.1.x86_64
--> Processing Dependency: libz.so.1(ZLIB_1.2.0)(64bit) for package: git-1.7.1-4.el6_7.1.x86_64
--> Processing Dependency: libssl.so.10(libssl.so.10)(64bit) for package: git-1.7.1-4.el6_7.1.x86_64
--> Processing Dependency: libcrypto.so.10(libcrypto.so.10)(64bit) for package: git-1.7.1-4.el6_7.1.x86_64
--> Running transaction check
---> Package openssl.x86_64 0:1.0.0-20.el6_2.5 will be updated
---> Package openssl.x86_64 0:1.0.1e-48.el6_8.1 will be an update
---> Package perl-Error.noarch 1:0.17015-4.el6 will be installed
---> Package perl-Git.noarch 0:1.7.1-4.el6_7.1 will be installed
---> Package zlib.x86_64 0:1.2.3-27.el6 will be updated
---> Package zlib.x86_64 0:1.2.3-29.el6 will be an update
--> Finished Dependency Resolution
Dependencies Resolved
=====================================================================================================================
Package Arch Version Repository Size
=====================================================================================================================
Installing:
git x86_64 1.7.1-4.el6_7.1 base 4.6 M
Installing for dependencies:
perl-Error noarch 1:0.17015-4.el6 base 29 k
perl-Git noarch 1.7.1-4.el6_7.1 base 28 k
Updating for dependencies:
openssl x86_64 1.0.1e-48.el6_8.1 updates 1.5 M
zlib x86_64 1.2.3-29.el6 base 73 k
Transaction Summary
=====================================================================================================================
Install 3 Package(s)
Upgrade 2 Package(s)
Total download size: 6.3 M
Downloading Packages:
(1/5): git-1.7.1-4.el6_7.1.x86_64.rpm | 4.6 MB 00:23
(2/5): openssl-1.0.1e-48.el6_8.1.x86_64.rpm | 1.5 MB 00:04
(3/5): perl-Error-0.17015-4.el6.noarch.rpm | 29 kB 00:00
(4/5): perl-Git-1.7.1-4.el6_7.1.noarch.rpm | 28 kB 00:00
(5/5): zlib-1.2.3-29.el6.x86_64.rpm | 73 kB 00:00
---------------------------------------------------------------------------------------------------------------------
Total 218 kB/s | 6.3 MB 00:29
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Updating : zlib-1.2.3-29.el6.x86_64 1/7
Installing : 1:perl-Error-0.17015-4.el6.noarch 2/7
Updating : openssl-1.0.1e-48.el6_8.1.x86_64 3/7
Installing : git-1.7.1-4.el6_7.1.x86_64 4/7
Installing : perl-Git-1.7.1-4.el6_7.1.noarch 5/7
Cleanup : openssl-1.0.0-20.el6_2.5.x86_64 6/7
Cleanup : zlib-1.2.3-27.el6.x86_64 7/7
Verifying : 1:perl-Error-0.17015-4.el6.noarch 1/7
Verifying : git-1.7.1-4.el6_7.1.x86_64 2/7
Verifying : zlib-1.2.3-29.el6.x86_64 3/7
Verifying : perl-Git-1.7.1-4.el6_7.1.noarch 4/7
Verifying : openssl-1.0.1e-48.el6_8.1.x86_64 5/7
Verifying : zlib-1.2.3-27.el6.x86_64 6/7
Verifying : openssl-1.0.0-20.el6_2.5.x86_64 7/7
Installed:
git.x86_64 0:1.7.1-4.el6_7.1
Dependency Installed:
perl-Error.noarch 1:0.17015-4.el6 perl-Git.noarch 0:1.7.1-4.el6_7.1
Dependency Updated:
openssl.x86_64 0:1.0.1e-48.el6_8.1 zlib.x86_64 0:1.2.3-29.el6
Complete!
[root@codis-server1 ~]#
[root@codis-server1 ~]# git --version
git version 1.7.1
[root@codis-server1 ~]#
Fetch the Codis source code (git must be installed on the server):
[root@codis-server1 ~]# cd $GOPATH
[root@codis-server1 go_me]# pwd
/data/go_me
[root@codis-server1 go_me]# ls -ltr
total 0
[root@codis-server1 go_me]# go get -u -d github.com/CodisLabs/codis
(... ... this takes a while.)
package github.com/CodisLabs/codis: no buildable Go source files in /data/go_me/src/github.com/CodisLabs/codis
[root@codis-server1 go_me]#
The official recommendation is to fetch Codis via `go get`, which downloads the latest code on the master branch (the branch maintained by the Codis authors).
This step can take quite a while, and the command hangs with no visible progress.
You can open another session and watch the GOPATH directory grow:
watch -n .1 -d "du -sh /data/go_me"
That makes the progress easy to see; the download is roughly 25 MB.
If git is not installed, this step fails with the following error:
[root@codis-server1 go_me]# go get -u -d github.com/CodisLabs/codis
go: missing Git command. See https://golang.org/s/gogetcmd
package github.com/CodisLabs/codis: exec: "git": executable file not found in $PATH
[root@codis-server1 go_me]#
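The error above can be caught earlier with a pre-flight check before running `go get`; `require` is a hypothetical helper, not part of Go or Codis:

```shell
# Verify that the tools `go get` shells out to actually exist on PATH.
# `require` is a hypothetical helper for this sketch.
require() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "ok: $1"
  else
    echo "missing: $1"
  fi
}
require git
require go
```

Running this before the download saves waiting for `go get` to fail halfway through.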
Once the download finally succeeds:
[root@codis-server1 go_me]# du -sh src/
32M src/
[root@codis-server1 go_me]#
[root@codis-server1 go_me]# ls -ltr
total 4
drwxr-xr-x 3 root root 4096 Aug 19 09:52 src
[root@codis-server1 go_me]#
[root@codis-server1 go_me]# ls -ltr src/
total 4
drwxr-xr-x 3 root root 4096 Aug 19 09:52 github.com
[root@codis-server1 go_me]#
[root@codis-server1 go_me]# ls -ltr src/github.com/
total 4
drwxr-xr-x 3 root root 4096 Aug 19 09:52 CodisLabs
[root@codis-server1 go_me]#
[root@codis-server1 go_me]# ls -ltr src/github.com/CodisLabs/
total 4
drwxr-xr-x 11 root root 4096 Aug 19 10:07 codis
[root@codis-server1 go_me]#
[root@codis-server1 go_me]# ls -ltr src/github.com/CodisLabs/codis/
total 64
-rw-r--r-- 1 root root 1076 Aug 19 10:07 MIT-LICENSE.txt
drwxr-xr-x 2 root root 4096 Aug 19 10:07 Godeps
-rw-r--r-- 1 root root 320 Aug 19 10:07 Dockerfile
-rw-r--r-- 1 root root 2949 Aug 19 10:07 README.md
-rw-r--r-- 1 root root 775 Aug 19 10:07 Makefile
-rw-r--r-- 1 root root 1817 Aug 19 10:07 config.ini
drwxr-xr-x 4 root root 4096 Aug 19 10:07 cmd
drwxr-xr-x 2 root root 4096 Aug 19 10:07 docker
drwxr-xr-x 5 root root 4096 Aug 19 10:07 doc
drwxr-xr-x 5 root root 4096 Aug 19 10:07 extern
-rwxr-xr-x 1 root root 293 Aug 19 10:07 genver.sh
drwxr-xr-x 5 root root 4096 Aug 19 10:07 pkg
drwxr-xr-x 2 root root 4096 Aug 19 10:07 test
drwxr-xr-x 3 root root 4096 Aug 19 10:07 vendor
-rw-r--r-- 1 root root 1081 Aug 19 10:07 wandoujia_licese.txt
-rw-r--r-- 1 root root 1475 Aug 19 10:07 vitess_license
[root@codis-server1 go_me]#
Packages required before compiling:
yum install -y gcc
yum groupinstall "Development Tools"  # important!
Install jemalloc support. Download jemalloc:
http://www.canonware.com/download/jemalloc/jemalloc-4.2.1.tar.bz2
Upload it to the server and install it:
[root@codis-server1 codis]# cd /software
[root@codis-server1 software]# ls
go1.7.linux-amd64.tar.gz jemalloc-4.2.1.tar.bz2
[root@codis-server1 software]#
[root@codis-server1 software]# du -sh *
78M go1.7.linux-amd64.tar.gz
424K jemalloc-4.2.1.tar.bz2
[root@codis-server1 software]#
[root@codis-server1 software]# tar -jxf jemalloc-4.2.1.tar.bz2
[root@codis-server1 software]# ls -ltr
total 80092
drwxr-xr-x 9 1000 1000 4096 Jun 9 02:44 jemalloc-4.2.1
-rw-r--r-- 1 root root 81573766 Aug 19 00:40 go1.7.linux-amd64.tar.gz
-rw-r--r-- 1 root root 431132 Aug 19 10:23 jemalloc-4.2.1.tar.bz2
[root@codis-server1 software]# cd jemalloc-4.2.1
[root@codis-server1 jemalloc-4.2.1]# ls
autogen.sh build-aux config.stamp.in configure.ac coverage.sh include jemalloc.pc.in msvc src VERSION
bin ChangeLog configure COPYING doc INSTALL Makefile.in README test
[root@codis-server1 jemalloc-4.2.1]#
[root@codis-server1 jemalloc-4.2.1]# mkdir /opt/jemalloc
[root@codis-server1 jemalloc-4.2.1]#
[root@codis-server1 jemalloc-4.2.1]# ./configure --prefix=/opt/jemalloc
(... ... output trimmed.)
checking whether to force 64-bit __sync_{add,sub}_and_fetch()... no
checking for __builtin_clz... yes
checking whether Darwin OSSpin*() is compilable... no
checking whether glibc malloc hook is compilable... yes
checking whether glibc memalign hook is compilable... yes
checking whether pthreads adaptive mutexes is compilable... yes
checking for stdbool.h that conforms to C99... yes
checking for _Bool... yes
configure: creating ./config.status
config.status: creating Makefile
config.status: creating jemalloc.pc
config.status: creating doc/html.xsl
config.status: creating doc/manpages.xsl
config.status: creating doc/jemalloc.xml
config.status: creating include/jemalloc/jemalloc_macros.h
config.status: creating include/jemalloc/jemalloc_protos.h
config.status: creating include/jemalloc/jemalloc_typedefs.h
config.status: creating include/jemalloc/internal/jemalloc_internal.h
config.status: creating test/test.sh
config.status: creating test/include/test/jemalloc_test.h
config.status: creating config.stamp
config.status: creating bin/jemalloc-config
config.status: creating bin/jemalloc.sh
config.status: creating bin/jeprof
config.status: creating include/jemalloc/jemalloc_defs.h
config.status: creating include/jemalloc/internal/jemalloc_internal_defs.h
config.status: creating test/include/test/jemalloc_test_defs.h
config.status: executing include/jemalloc/internal/private_namespace.h commands
config.status: executing include/jemalloc/internal/private_unnamespace.h commands
config.status: executing include/jemalloc/internal/public_symbols.txt commands
config.status: executing include/jemalloc/internal/public_namespace.h commands
config.status: executing include/jemalloc/internal/public_unnamespace.h commands
config.status: executing include/jemalloc/internal/size_classes.h commands
config.status: executing include/jemalloc/jemalloc_protos_jet.h commands
config.status: executing include/jemalloc/jemalloc_rename.h commands
config.status: executing include/jemalloc/jemalloc_mangle.h commands
config.status: executing include/jemalloc/jemalloc_mangle_jet.h commands
config.status: executing include/jemalloc/jemalloc.h commands
===============================================================================
jemalloc version : 4.2.1-0-g3de035335255d553bdb344c32ffdb603816195d8
library revision : 2
CONFIG : --prefix=/opt/jemalloc
CC : gcc
CFLAGS : -std=gnu99 -Wall -Werror=declaration-after-statement -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops
CPPFLAGS : -D_GNU_SOURCE -D_REENTRANT
LDFLAGS :
EXTRA_LDFLAGS :
LIBS : -lrt -lpthread
RPATH_EXTRA :
XSLTPROC : /usr/bin/xsltproc
XSLROOT :
PREFIX : /opt/jemalloc
BINDIR : /opt/jemalloc/bin
DATADIR : /opt/jemalloc/share
INCLUDEDIR : /opt/jemalloc/include
LIBDIR : /opt/jemalloc/lib
MANDIR : /opt/jemalloc/share/man
srcroot :
abs_srcroot : /software/jemalloc-4.2.1/
objroot :
abs_objroot : /software/jemalloc-4.2.1/
JEMALLOC_PREFIX :
JEMALLOC_PRIVATE_NAMESPACE
: je_
install_suffix :
malloc_conf :
autogen : 0
cc-silence : 1
debug : 0
code-coverage : 0
stats : 1
prof : 0
prof-libunwind : 0
prof-libgcc : 0
prof-gcc : 0
tcache : 1
fill : 1
utrace : 0
valgrind : 0
xmalloc : 0
munmap : 0
lazy_lock : 0
tls : 1
cache-oblivious : 1
===============================================================================
[root@codis-server1 jemalloc-4.2.1]#
[root@codis-server1 jemalloc-4.2.1]# make
(... ... output trimmed.)
gcc -std=gnu99 -Wall -Werror=declaration-after-statement -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/tcache.o src/tcache.c
gcc -std=gnu99 -Wall -Werror=declaration-after-statement -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/ticker.o src/ticker.c
gcc -std=gnu99 -Wall -Werror=declaration-after-statement -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/tsd.o src/tsd.c
gcc -std=gnu99 -Wall -Werror=declaration-after-statement -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/util.o src/util.c
gcc -std=gnu99 -Wall -Werror=declaration-after-statement -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/witness.o src/witness.c
ar crus lib/libjemalloc.a src/jemalloc.o src/arena.o src/atomic.o src/base.o src/bitmap.o src/chunk.o src/chunk_dss.o src/chunk_mmap.o src/ckh.o src/ctl.o src/extent.o src/hash.o src/huge.o src/mb.o src/mutex.o src/nstime.o src/pages.o src/prng.o src/prof.o src/quarantine.o src/rtree.o src/stats.o src/tcache.o src/ticker.o src/tsd.o src/util.o src/witness.o
ar crus lib/libjemalloc_pic.a src/jemalloc.pic.o src/arena.pic.o src/atomic.pic.o src/base.pic.o src/bitmap.pic.o src/chunk.pic.o src/chunk_dss.pic.o src/chunk_mmap.pic.o src/ckh.pic.o src/ctl.pic.o src/extent.pic.o src/hash.pic.o src/huge.pic.o src/mb.pic.o src/mutex.pic.o src/nstime.pic.o src/pages.pic.o src/prng.pic.o src/prof.pic.o src/quarantine.pic.o src/rtree.pic.o src/stats.pic.o src/tcache.pic.o src/ticker.pic.o src/tsd.pic.o src/util.pic.o src/witness.pic.o
[root@codis-server1 jemalloc-4.2.1]#
[root@codis-server1 jemalloc-4.2.1]# make install
install -d /opt/jemalloc/bin
install -m 755 bin/jemalloc-config /opt/jemalloc/bin
install -m 755 bin/jemalloc.sh /opt/jemalloc/bin
install -m 755 bin/jeprof /opt/jemalloc/bin
install -d /opt/jemalloc/include/jemalloc
install -m 644 include/jemalloc/jemalloc.h /opt/jemalloc/include/jemalloc
install -d /opt/jemalloc/lib
install -m 755 lib/libjemalloc.so.2 /opt/jemalloc/lib
ln -sf libjemalloc.so.2 /opt/jemalloc/lib/libjemalloc.so
install -d /opt/jemalloc/lib
install -m 755 lib/libjemalloc.a /opt/jemalloc/lib
install -m 755 lib/libjemalloc_pic.a /opt/jemalloc/lib
install -d /opt/jemalloc/lib/pkgconfig
install -m 644 jemalloc.pc /opt/jemalloc/lib/pkgconfig
install -d /opt/jemalloc/share/doc/jemalloc
install -m 644 doc/jemalloc.html /opt/jemalloc/share/doc/jemalloc
install -d /opt/jemalloc/share/man/man3
install -m 644 doc/jemalloc.3 /opt/jemalloc/share/man/man3
[root@codis-server1 jemalloc-4.2.1]#
[root@codis-server1 jemalloc-4.2.1]# ls -ltr /opt/jemalloc/
total 16
drwxr-xr-x 2 root root 4096 Aug 19 11:41 bin
drwxr-xr-x 3 root root 4096 Aug 19 11:41 include
drwxr-xr-x 3 root root 4096 Aug 19 11:41 lib
drwxr-xr-x 4 root root 4096 Aug 19 11:41 share
[root@codis-server1 jemalloc-4.2.1]#
[root@codis-server1 jemalloc-4.2.1]# cat ~/.bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
# Golang
export GOROOT=/opt/golang/go
export GOPATH=/data/go_me
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin
# Jemalloc
export PATH=$PATH:/opt/jemalloc/bin
[root@codis-server1 jemalloc-4.2.1]#
[root@codis-server1 jemalloc-4.2.1]# source ~/.bash_profile
[root@codis-server1 jemalloc-4.2.1]#
[root@codis-server1 jemalloc-4.2.1]# env | grep --color PATH | grep --color jemalloc
PATH=/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/root/bin:/opt/golang/go/bin:/data/go_me/bin:/root/bin:/opt/golang/go/bin:/data/go_me/bin:/opt/jemalloc/bin
[root@codis-server1 jemalloc-4.2.1]#
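Incidentally, the PATH printed above has accumulated duplicate entries (note the repeated /root/bin and Go directories) because ~/.bash_profile appends on every source. A small guard avoids that; `path_append` is a hypothetical helper:

```shell
# Append a directory to PATH only if it is not already present.
# `path_append` is a hypothetical helper for this sketch.
path_append() {
  case ":$PATH:" in
    *":$1:"*) ;;                       # already on PATH, do nothing
    *) PATH="$PATH:$1"; export PATH ;;
  esac
}
path_append /opt/jemalloc/bin
path_append /opt/jemalloc/bin          # second call is a no-op
```

With this in ~/.bash_profile, re-sourcing the file no longer grows PATH.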
Without jemalloc installed, the build fails like this:
CC adlist.o
CC ae.o
CC anet.o
CC dict.o
In file included from adlist.c:34:
zmalloc.h:50:31: error: jemalloc/jemalloc.h: No such file or directory
zmalloc.h:55:2: error: #error "Newer version of jemalloc required"
make[2]: *** [adlist.o] Error 1
make[2]: *** Waiting for unfinished jobs....
In file included from ae.c:44:
zmalloc.h:50:31: error: jemalloc/jemalloc.h: No such file or directory
zmalloc.h:55:2: error: #error "Newer version of jemalloc required"
In file included from dict.c:47:
zmalloc.h:50:31: error: jemalloc/jemalloc.h: No such file or directory
zmalloc.h:55:2: error: #error "Newer version of jemalloc required"
make[2]: *** [ae.o] Error 1
make[2]: *** [dict.o] Error 1
Note: if the build still fails, compile with:
make MALLOC=libc
Build Codis with `make MALLOC=libc`:
[root@codis-server1 go_me]# cd src/github.com/CodisLabs/codis/
[root@codis-server1 codis]# pwd
/data/go_me/src/github.com/CodisLabs/codis
[root@codis-server1 codis]#
[root@codis-server1 codis]# ls
cmd doc Dockerfile genver.sh Makefile pkg test vitess_license
config.ini docker extern Godeps MIT-LICENSE.txt README.md vendor wandoujia_licese.txt
[root@codis-server1 codis]#
[root@codis-server1 codis]# make MALLOC=libc
(... ... output trimmed.)
CC t_list.o
CC t_set.o
CC t_zset.o
CC t_hash.o
CC config.o
CC aof.o
CC pubsub.o
CC multi.o
CC debug.o
CC sort.o
CC intset.o
CC syncio.o
CC migrate.o
CC endianconv.o
CC slowlog.o
CC scripting.o
CC bio.o
CC rio.o
CC rand.o
CC memtest.o
CC crc64.o
CC crc32.o
CC bitops.o
CC sentinel.o
CC notify.o
CC setproctitle.o
CC hyperloglog.o
CC latency.o
CC sparkline.o
CC slots.o
CC redis-cli.o
CC redis-benchmark.o
CC redis-check-dump.o
CC redis-check-aof.o
LINK redis-benchmark
LINK redis-check-dump
LINK redis-check-aof
LINK redis-server
INSTALL redis-sentinel
LINK redis-cli
Hint: It's a good idea to run 'make test' ;)
make[2]: Leaving directory `/data/go_me/src/github.com/CodisLabs/codis/extern/redis-2.8.21/src'
make[1]: Leaving directory `/data/go_me/src/github.com/CodisLabs/codis/extern/redis-2.8.21'
[root@codis-server1 codis]#
[root@codis-server1 codis]# make gotest
go test ./pkg/... ./cmd/...
ok github.com/CodisLabs/codis/pkg/models 17.117s
ok github.com/CodisLabs/codis/pkg/proxy 9.351s
ok github.com/CodisLabs/codis/pkg/proxy/redis 2.158s
ok github.com/CodisLabs/codis/pkg/proxy/router 0.656s
? github.com/CodisLabs/codis/pkg/utils [no test files]
? github.com/CodisLabs/codis/pkg/utils/assert [no test files]
? github.com/CodisLabs/codis/pkg/utils/atomic2 [no test files]
ok github.com/CodisLabs/codis/pkg/utils/bytesize 0.003s
? github.com/CodisLabs/codis/pkg/utils/errors [no test files]
? github.com/CodisLabs/codis/pkg/utils/log [no test files]
? github.com/CodisLabs/codis/pkg/utils/trace [no test files]
? github.com/CodisLabs/codis/cmd/cconfig [no test files]
? github.com/CodisLabs/codis/cmd/proxy [no test files]
[root@codis-server1 codis]#
On success, the build generates the executables in the bin directory under the current path, as follows:
[root@codis-server1 codis]# pwd
/data/go_me/src/github.com/CodisLabs/codis
[root@codis-server1 codis]#
[root@codis-server1 codis]# ll bin
total 30104
drwxr-xr-x 4 root root 4096 Aug 19 13:04 assets
-rwxr-xr-x 1 root root 14554724 Aug 19 13:04 codis-config
-rwxr-xr-x 1 root root 14301446 Aug 19 13:04 codis-proxy
-rwxr-xr-x 1 root root 1960921 Aug 19 13:04 codis-server
[root@codis-server1 codis]#
[root@codis-server1 codis]# file bin/*
bin/assets: directory
bin/codis-config: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), not stripped
bin/codis-proxy: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), not stripped
bin/codis-server: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.18, not stripped
[root@codis-server1 codis]#
The generated executables are:
codis-config
codis-proxy
codis-server
assets holds the resources needed by codis-config's dashboard HTTP server, and must be kept in the same directory as codis-config.
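Before moving on, it is worth verifying the build output is complete. A minimal sketch, assuming the hypothetical helper `check_build` and a stub directory for demonstration (in practice, point `bindir` at the codis/bin directory):

```shell
# Check that the build produced all three binaries plus the assets directory.
check_build() {
  bindir=$1
  for f in codis-config codis-proxy codis-server; do
    [ -x "$bindir/$f" ] || { echo "missing binary: $f"; return 1; }
  done
  [ -d "$bindir/assets" ] || { echo "missing assets directory"; return 1; }
  echo "build looks complete"
}

# Demo against a stub layout (stand-in for the real bin directory).
mkdir -p /tmp/codis-bin/assets
for f in codis-config codis-proxy codis-server; do
  : > "/tmp/codis-bin/$f" && chmod +x "/tmp/codis-bin/$f"
done
check_build /tmp/codis-bin   # prints "build looks complete"
```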
二、Deploying the Codis cluster.
(This is actually the configuration of the Codis proxy.)
Confirm that ZooKeeper is running:
【98】
[root@zookeeper1 ~]# netstat -tupln | grep java
tcp 0 0 :::2181 :::* LISTEN 2680/java
tcp 0 0 :::8080 :::* LISTEN 2680/java
tcp 0 0 :::50714 :::* LISTEN 2680/java
tcp 0 0 ::ffff:10.158.1.98:3387 :::* LISTEN 2680/java
[root@zookeeper1 ~]#
[root@zookeeper1 ~]# ps faux | grep zookeeper
avahi 1387 0.0 0.0 27772 1672 ? S 00:16 0:01 avahi-daemon: running [zookeeper1.local]
root 27861 0.0 0.0 103240 840 pts/0 S+ 13:48 0:00 \_ grep zookeeper
root 2680 0.2 0.6 3043276 103012 ? Sl 01:30 1:33 java -Dzookeeper.log.dir=/opt/zookeeper/bin/../logs -Dzookeeper.log.file=zookeeper-root-server-zookeeper1.log -Dzookeeper.root.logger=INFO,CONSOLE -XX:+HeapDumpOnOutOfMemoryError -XX:OnOutOfMemoryError=kill -9 %p -cp /opt/zookeeper/bin/../build/classes:/opt/zookeeper/bin/../build/lib/*.jar:/opt/zookeeper/bin/../lib/slf4j-log4j12-1.7.5.jar:/opt/zookeeper/bin/../lib/slf4j-api-1.7.5.jar:/opt/zookeeper/bin/../lib/servlet-api-2.5-20081211.jar:/opt/zookeeper/bin/../lib/netty-3.10.5.Final.jar:/opt/zookeeper/bin/../lib/log4j-1.2.17.jar:/opt/zookeeper/bin/../lib/jline-2.11.jar:/opt/zookeeper/bin/../lib/jetty-util-6.1.26.jar:/opt/zookeeper/bin/../lib/jetty-6.1.26.jar:/opt/zookeeper/bin/../lib/javacc.jar:/opt/zookeeper/bin/../lib/jackson-mapper-asl-1.9.11.jar:/opt/zookeeper/bin/../lib/jackson-core-asl-1.9.11.jar:/opt/zookeeper/bin/../lib/commons-cli-1.2.jar:/opt/zookeeper/bin/../zookeeper-3.5.2-alpha.jar:/opt/zookeeper/bin/../src/java/lib/*.jar:/opt/zookeeper/bin/../conf: -Xmx1000m -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain /opt/zookeeper/bin/../conf/zoo.cfg
[root@zookeeper1 ~]#
【99】
[root@zookeeper2 ~]# ps faux | grep zookeeper
avahi 1377 0.0 0.0 27776 1680 ? S 00:16 0:02 avahi-daemon: running [zookeeper2.local]
root 27595 0.0 0.0 103240 840 pts/0 S+ 13:49 0:00 \_ grep zookeeper
root 2505 0.1 0.7 3048416 129460 ? Sl 01:31 1:28 java -Dzookeeper.log.dir=/opt/zookeeper/bin/../logs -Dzookeeper.log.file=zookeeper-root-server-zookeeper2.log -Dzookeeper.root.logger=INFO,CONSOLE -XX:+HeapDumpOnOutOfMemoryError -XX:OnOutOfMemoryError=kill -9 %p -cp /opt/zookeeper/bin/../build/classes:/opt/zookeeper/bin/../build/lib/*.jar:/opt/zookeeper/bin/../lib/slf4j-log4j12-1.7.5.jar:/opt/zookeeper/bin/../lib/slf4j-api-1.7.5.jar:/opt/zookeeper/bin/../lib/servlet-api-2.5-20081211.jar:/opt/zookeeper/bin/../lib/netty-3.10.5.Final.jar:/opt/zookeeper/bin/../lib/log4j-1.2.17.jar:/opt/zookeeper/bin/../lib/jline-2.11.jar:/opt/zookeeper/bin/../lib/jetty-util-6.1.26.jar:/opt/zookeeper/bin/../lib/jetty-6.1.26.jar:/opt/zookeeper/bin/../lib/javacc.jar:/opt/zookeeper/bin/../lib/jackson-mapper-asl-1.9.11.jar:/opt/zookeeper/bin/../lib/jackson-core-asl-1.9.11.jar:/opt/zookeeper/bin/../lib/commons-cli-1.2.jar:/opt/zookeeper/bin/../zookeeper-3.5.2-alpha.jar:/opt/zookeeper/bin/../src/java/lib/*.jar:/opt/zookeeper/bin/../conf: -Xmx1000m -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain /opt/zookeeper/bin/../conf/zoo.cfg
[root@zookeeper2 ~]#
[root@zookeeper2 ~]# netstat -tupln | grep java
tcp 0 0 :::2182 :::* LISTEN 2505/java
tcp 0 0 :::33582 :::* LISTEN 2505/java
tcp 0 0 ::ffff:10.158.1.99:2288 :::* LISTEN 2505/java
tcp 0 0 :::8080 :::* LISTEN 2505/java
tcp 0 0 ::ffff:10.158.1.99:3388 :::* LISTEN 2505/java
[root@zookeeper2 ~]#
Configure the Codis dashboard's configuration file:
【94】
[root@codis-server1 codis]# pwd
/data/go_me/src/github.com/CodisLabs/codis
[root@codis-server1 codis]#
[root@codis-server1 codis]# ls -ltr | grep config
-rw-r--r-- 1 root root 1894 Aug 19 13:54 config.ini
[root@codis-server1 codis]#
[root@codis-server1 codis]# cat config.ini | grep -v '#' | strings
coordinator=zookeeper
zk=10.158.1.98:2181,10.158.1.99:2182
product=test
dashboard_addr=10.158.1.94:18087
password=coids
backend_ping_period=5
session_max_timeout=1800
session_max_bufsize=131072
session_max_pipeline=1024
zk_session_timeout=30000
proxy_id=proxy_1
[root@codis-server1 codis]#
Configuration on 【95】 (note: both hosts here use proxy_id=proxy_1, but each Codis proxy needs a unique proxy_id; the password fields also differ between the two files):
[root@codis-server2 codis]# cat config.ini | grep -v '#' | strings
coordinator=zookeeper
zk=10.158.1.98:2181,10.158.1.99:2182
product=test
dashboard_addr=10.158.1.95:18087
password=
backend_ping_period=5
session_max_timeout=1800
session_max_bufsize=131072
session_max_pipeline=1024
zk_session_timeout=30000
proxy_id=proxy_1
[root@codis-server2 codis]#
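Since the two hosts' config.ini files should differ only in a few fields, a quick diff of the normalized files catches accidental drift. A sketch with illustrative file paths and a hypothetical `norm` helper:

```shell
# Normalize a config (drop comments and blank lines, sort) for comparison.
norm() { grep -v '^#' "$1" | sed '/^$/d' | sort; }

# Illustrative stand-ins for each host's config.ini.
cat > /tmp/proxy94.ini <<'EOF'
proxy_id=proxy_1
dashboard_addr=10.158.1.94:18087
zk=10.158.1.98:2181,10.158.1.99:2182
EOF
cat > /tmp/proxy95.ini <<'EOF'
proxy_id=proxy_1
dashboard_addr=10.158.1.95:18087
zk=10.158.1.98:2181,10.158.1.99:2182
EOF

norm /tmp/proxy94.ini > /tmp/proxy94.norm
norm /tmp/proxy95.ini > /tmp/proxy95.norm
# Any line printed here is a setting that differs between the two hosts.
diff /tmp/proxy94.norm /tmp/proxy95.norm || true
```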
Start the Codis dashboard:
[root@codis-server1 codis]# bin/codis-config dashboard
2016/08/19 14:00:22 dashboard.go:160: [INFO] dashboard listening on addr: :18087
2016/08/19 14:00:23 dashboard.go:143: [INFO] dashboard node created: /zk/codis/db_test/dashboard, {"addr": "10.158.1.94:18087", "pid": 23586}
2016/08/19 14:00:23 dashboard.go:144: [WARN] ********** Attention **********
2016/08/19 14:00:23 dashboard.go:145: [WARN] You should use `kill {pid}` rather than `kill -9 {pid}` to stop me,
2016/08/19 14:00:23 dashboard.go:146: [WARN] or the node resisted on zk will not be cleaned when I'm quiting and you must remove it manually
2016/08/19 14:00:23 dashboard.go:147: [WARN] *******************************
Open another session and you can check:
[root@codis-server1 ~]# netstat -tupln | grep codis
tcp 0 0 :::10086 :::* LISTEN 23586/bin/codis-con
tcp 0 0 :::18087 :::* LISTEN 23586/bin/codis-con
[root@codis-server1 ~]#
Open the dashboard:
http://10.158.1.94:18087/admin/
Initialize the slots:
[root@codis-server1 ~]# cd /data/go_me/src/github.com/CodisLabs/codis/
[root@codis-server1 codis]# bin/codis-config slot help
usage:
codis-config slot init [-f]
codis-config slot info <slot_id>
codis-config slot set <slot_id> <group_id> <status>
codis-config slot range-set <slot_from> <slot_to> <group_id> <status>
codis-config slot migrate <slot_from> <slot_to> <group_id> [--delay=<delay_time_in_ms>]
codis-config slot rebalance [--delay=<delay_time_in_ms>]
[root@codis-server1 codis]#
[root@codis-server1 codis]# bin/codis-config slot init
{
"msg": "OK",
"ret": 0
}
[root@codis-server1 codis]#
Check the slot information through the admin web page:
http://10.158.1.94:18087/slots
You can see that all slots are currently in the OFF state.
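When slots are later assigned with `slot range-set`, the ranges have to be worked out by hand. This sketch computes an even split of the 1024 slots across three groups (the group count is an assumption for illustration):

```shell
# Even split of Codis's 1024 slots across 3 server groups; the printed
# ranges are what you would feed to `codis-config slot range-set`.
groups=3
total=1024
start=0
g=1
while [ "$g" -le "$groups" ]; do
  end=$(( g * total / groups - 1 ))
  echo "group $g: slots $start-$end"
  start=$(( end + 1 ))
  g=$(( g + 1 ))
done
# output:
# group 1: slots 0-340
# group 2: slots 341-681
# group 3: slots 682-1023
```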
Start the Codis server.
Configure the Redis configuration file:
[root@codis-server1 codis]# pwd
/data/go_me/src/github.com/CodisLabs/codis
[root@codis-server1 codis]#
[root@codis-server1 codis]# ls -ltr extern/redis-2.8.21/ | grep --color redis
-rw-r--r-- 1 root root 36142 Aug 19 12:10 redis.conf
[root@codis-server1 codis]#
[root@codis-server1 codis]# cat extern/redis-2.8.21/redis.conf | grep -v '#' | strings
daemonize yes
pidfile /var/run/redis.pid
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 0
loglevel notice
logfile ""
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir ./
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
[root@codis-server1 codis]#
(The configuration above is the default; nothing has been changed.)
Start Redis:
[root@codis-server1 codis]# ps faux | grep -v grep | grep --color codis
avahi 1365 0.0 0.0 27772 1680 ? S 00:16 0:00 avahi-daemon: running [codis-server1.local]
root 23586 0.9 0.1 320608 23172 pts/0 Sl+ 14:00 0:17 \_ bin/codis-config dashboard
[root@codis-server1 codis]#
[root@codis-server1 codis]# bin/codis-server extern/redis-2.8.21/redis.conf
[root@codis-server1 codis]#
[root@codis-server1 codis]# ps faux | grep -v grep | grep --color codis
avahi 1365 0.0 0.0 27772 1680 ? S 00:16 0:00 avahi-daemon: running [codis-server1.local]
root 23586 0.9 0.1 320608 23180 pts/0 Sl+ 14:00 0:17 \_ bin/codis-config dashboard
root 24552 7.3 0.0 129784 3868 ? Ssl 14:32 0:00 bin/codis-server *:6379
[root@codis-server1 codis]#
[root@codis-server1 codis]# cat /var/run/redis.pid
24552
[root@codis-server1 codis]#
[root@codis-server1 codis]# netstat -tupln | grep codis-se
tcp 0 0 0.0.0.0:6379 0.0.0.0:* LISTEN 24552/bin/codis-ser
tcp 0 0 :::6379 :::* LISTEN 24552/bin/codis-ser
[root@codis-server1 codis]#
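A pidfile like /var/run/redis.pid above makes a minimal liveness probe easy to script; `is_running` is a hypothetical helper, demonstrated here against the current shell's own pid:

```shell
# Return success if the pidfile exists and its process is alive.
# `kill -0` sends no signal; it only checks that the pid can be signaled.
is_running() {
  [ -f "$1" ] && kill -0 "$(cat "$1")" 2>/dev/null
}

# Demo: the current shell's pid stands in for a codis-server pid.
echo $$ > /tmp/demo.pid
is_running /tmp/demo.pid && echo "process is up"   # prints "process is up"
```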
Check the status:
[root@codis-server1 codis]# extern/redis-2.8.21/src/redis-cli -p 6379 info
# Server
redis_version:2.8.21
redis_git_sha1:1b40d080
redis_git_dirty:0
redis_build_id:bbb713f7faaaae10
redis_mode:standalone
os:Linux 2.6.32-279.el6.x86_64 x86_64
arch_bits:64
multiplexing_api:epoll
gcc_version:4.4.7
process_id:24552
run_id:774a1cfc80934036409d0a4705f2adb384b2a14a
tcp_port:6379
uptime_in_seconds:157
uptime_in_days:0
hz:10
lru_clock:11970711
config_file:/data/go_me/src/github.com/CodisLabs/codis/extern/redis-2.8.21/redis.conf
# Clients
connected_clients:1
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
# Memory
used_memory:2723448
used_memory_human:2.60M
used_memory_rss:3993600
used_memory_peak:2723448
used_memory_peak_human:2.60M
used_memory_lua:36864
mem_fragmentation_ratio:1.47
mem_allocator:libc
# Persistence
loading:0
rdb_changes_since_last_save:0
rdb_bgsave_in_progress:0
rdb_last_save_time:1471588346
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:-1
rdb_current_bgsave_time_sec:-1
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
# Stats
total_connections_received:1
total_commands_processed:0
instantaneous_ops_per_sec:0
total_net_input_bytes:14
total_net_output_bytes:0
instantaneous_input_kbps:0.00
instantaneous_output_kbps:0.00
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:0
evicted_keys:0
keyspace_hits:0
keyspace_misses:0
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:0
# Replication
role:master
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
# CPU
used_cpu_sys:0.22
used_cpu_user:0.17
used_cpu_sys_children:0.00
used_cpu_user_children:0.00
# Keyspace
[root@codis-server1 codis]#
Add the Redis server groups.
Edit the configuration files:
[root@codis-server1 codis]# ls -ltr /config/codis/
total 12
-rw-r--r-- 1 root root 1175 Aug 20 16:02 redis_server_group_1.conf
-rw-r--r-- 1 root root 1175 Aug 20 16:03 redis_server_group_3.conf
-rw-r--r-- 1 root root 1175 Aug 20 16:03 redis_server_group_2.conf
[root@codis-server1 codis]#
[root@codis-server1 codis]# cat /config/codis/redis_server_group_1.conf
daemonize yes
pidfile /var/run/redis_server_group_1.pid
port 6381
tcp-backlog 511
timeout 0
tcp-keepalive 0
loglevel notice
logfile ""
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump_server_group_1.rdb
dir ./
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
appendonly no
appendfilename "appendonly_server_group_1.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
[root@codis-server1 codis]#
[root@codis-server1 codis]# cat /config/codis/redis_server_group_2.conf
daemonize yes
pidfile /var/run/redis_server_group_2.pid
port 6382
tcp-backlog 511
timeout 0
tcp-keepalive 0
loglevel notice
logfile ""
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump_server_group_2.rdb
dir ./
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
appendonly no
appendfilename "appendonly_server_group_2.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
[root@codis-server1 codis]#
[root@codis-server1 codis]# cat /config/codis/redis_server_group_3.conf
daemonize yes
pidfile /var/run/redis_server_group_3.pid
port 6383
tcp-backlog 511
timeout 0
tcp-keepalive 0
loglevel notice
logfile ""
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump_server_group_3.rdb
dir ./
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
appendonly no
appendfilename "appendonly_server_group_3.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
[root@codis-server1 codis]#
|
Start the instances:
/data/go_me/src/github.com/CodisLabs/codis/bin/codis-server /config/codis/redis_server_group_1.conf
/data/go_me/src/github.com/CodisLabs/codis/bin/codis-server /config/codis/redis_server_group_2.conf
/data/go_me/src/github.com/CodisLabs/codis/bin/codis-server /config/codis/redis_server_group_3.conf
Once the services are running, check their status:
|
[root@codis-server1 codis]# ps faux | grep -v grep | grep --color codis
avahi 1365 0.0 0.0 27772 1680 ? S Aug19 0:01 avahi-daemon: running [codis-server1.local]
root 5751 0.5 0.1 394572 32288 pts/1 Sl+ 19:48 0:09 | \_ bin/codis-proxy -c config.ini -L /var/log/codis-proxy.log --cpu=2 --addr=0.0.0.0:19000 --http-addr=0.0.0.0:11000
root 5369 0.6 0.2 322180 40708 pts/0 Sl 18:49 0:33 | \_ bin/codis-config -c config.ini dashboard
root 24552 0.1 0.0 130980 4092 ? Ssl Aug19 2:22 bin/codis-server *:6379
root 5149 0.1 0.0 131944 4184 ? Ssl 17:34 0:13 bin/codis-server *:6381
root 5153 0.0 0.0 131888 4152 ? Ssl 17:34 0:07 bin/codis-server *:6382
root 5157 0.1 0.0 131944 6020 ? Ssl 17:35 0:15 bin/codis-server *:6383
[root@codis-server1 codis]#
|
Both codis-server1 and codis-server2 need the configuration above.
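Before grouping the instances, it can be worth confirming that each one actually answers. A minimal sketch, assuming the redis-cli binary built earlier in this article (adjust the path to your layout):

```shell
# Sketch: verify each local codis-server instance answers PING.
# The CLI path matches the source tree used in this article; adjust as needed.
CLI=extern/redis-2.8.21/src/redis-cli
for port in 6379 6381 6382 6383; do
  if "$CLI" -p "$port" ping >/dev/null 2>&1; then
    echo "port $port: OK"
  else
    echo "port $port: DOWN"
  fi
done
```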
Add the instances to server groups:
Group 1:
master - codis-server1:6379
slave - codis-server2:6381
Group 2:
master - codis-server2:6379
slaves - codis-server2:6382 / codis-server1:6382
Group 3:
master - codis-server1:6383
slaves - codis-server2:6383 / codis-server1:6381
Start configuring.
Before configuration:
|
[root@codis-server1 codis]# bin/codis-config server list
null
[root@codis-server1 codis]#
|
Group 1:
|
[root@codis-server1 codis]# echo "Group 1"
Group 1
[root@codis-server1 codis]#
[root@codis-server1 codis]# bin/codis-config server add 1 codis-server1:6379 master
{
"msg": "OK",
"ret": 0
}
[root@codis-server1 codis]# bin/codis-config server add 1 codis-server2:6381 slave
{
"msg": "OK",
"ret": 0
}
[root@codis-server1 codis]#
[root@codis-server1 codis]# bin/codis-config server list
[
{
"id": 1,
"product_name": "test",
"servers": [
{
"addr": "codis-server2:6381",
"group_id": 1,
"type": "slave"
},
{
"addr": "codis-server1:6379",
"group_id": 1,
"type": "master"
}
]
}
]
[root@codis-server1 codis]#
|
Group 2:
|
[root@codis-server1 codis]# bin/codis-config server add 2 codis-server2:6379 master
{
"msg": "OK",
"ret": 0
}
[root@codis-server1 codis]# bin/codis-config server add 2 codis-server1:6382 slave
{
"msg": "OK",
"ret": 0
}
[root@codis-server1 codis]# bin/codis-config server add 2 codis-server2:6382 slave
{
"msg": "OK",
"ret": 0
}
[root@codis-server1 codis]#
[root@codis-server1 codis]# bin/codis-config server list
[
{
"id": 1,
"product_name": "test",
"servers": [
{
"addr": "codis-server2:6381",
"group_id": 1,
"type": "slave"
},
{
"addr": "codis-server1:6379",
"group_id": 1,
"type": "master"
}
]
},
{
"id": 2,
"product_name": "test",
"servers": [
{
"addr": "codis-server2:6379",
"group_id": 2,
"type": "master"
},
{
"addr": "codis-server2:6382",
"group_id": 2,
"type": "slave"
},
{
"addr": "codis-server1:6382",
"group_id": 2,
"type": "slave"
}
]
}
]
[root@codis-server1 codis]#
|
Group 3:
|
[root@codis-server1 codis]# bin/codis-config server add 3 codis-server1:6383 master
{
"msg": "OK",
"ret": 0
}
[root@codis-server1 codis]# bin/codis-config server add 3 codis-server1:6381 slave
{
"msg": "OK",
"ret": 0
}
[root@codis-server1 codis]# bin/codis-config server add 3 codis-server2:6383 slave
{
"msg": "OK",
"ret": 0
}
[root@codis-server1 codis]#
[root@codis-server1 codis]# bin/codis-config server list
[
{
"id": 1,
"product_name": "test",
"servers": [
{
"addr": "codis-server2:6381",
"group_id": 1,
"type": "slave"
},
{
"addr": "codis-server1:6379",
"group_id": 1,
"type": "master"
}
]
},
{
"id": 3,
"product_name": "test",
"servers": [
{
"addr": "codis-server2:6383",
"group_id": 3,
"type": "slave"
},
{
"addr": "codis-server1:6381",
"group_id": 3,
"type": "slave"
},
{
"addr": "codis-server1:6383",
"group_id": 3,
"type": "master"
}
]
},
{
"id": 2,
"product_name": "test",
"servers": [
{
"addr": "codis-server2:6379",
"group_id": 2,
"type": "master"
},
{
"addr": "codis-server2:6382",
"group_id": 2,
"type": "slave"
},
{
"addr": "codis-server1:6382",
"group_id": 2,
"type": "slave"
}
]
}
]
[root@codis-server1 codis]#
|
After configuration, check the replication status.
On codis-server1:
|
[root@codis-server1 codis]# extern/redis-2.8.21/src/redis-cli -p 6379 info replication
# Replication
role:master
connected_slaves:2
slave0:ip=10.158.1.95,port=6379,state=online,offset=1373,lag=0
slave1:ip=10.158.1.95,port=6381,state=online,offset=1373,lag=0
master_repl_offset:1373
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:1372
[root@codis-server1 codis]#
[root@codis-server1 codis]#
[root@codis-server1 codis]# extern/redis-2.8.21/src/redis-cli -p 6381 info replication
# Replication
role:slave
master_host:codis-server1
master_port:6383
master_link_status:down
master_last_io_seconds_ago:-1
master_sync_in_progress:0
slave_repl_offset:1
master_link_down_since_seconds:1471691816
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:1834
[root@codis-server1 codis]#
[root@codis-server1 codis]# extern/redis-2.8.21/src/redis-cli -p 6382 info replication
# Replication
role:slave
master_host:codis-server2
master_port:6379
master_link_status:up
master_last_io_seconds_ago:7
master_sync_in_progress:0
slave_repl_offset:393
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
[root@codis-server1 codis]#
[root@codis-server1 codis]# extern/redis-2.8.21/src/redis-cli -p 6383 info replication
# Replication
role:slave
master_host:localhost
master_port:6381
master_link_status:down
master_last_io_seconds_ago:-1
master_sync_in_progress:0
slave_repl_offset:1
master_link_down_since_seconds:197
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
[root@codis-server1 codis]#
[root@codis-server1 codis]# extern/redis-2.8.21/src/redis-cli -p 6384 info replication
Could not connect to Redis at 127.0.0.1:6384: Connection refused
[root@codis-server1 codis]#
|
On codis-server2:
|
[root@codis-server2 codis]# extern/redis-2.8.21/src/redis-cli -p 6379 info replication
# Replication
role:slave
master_host:10.158.1.94
master_port:6379
master_link_status:up
master_last_io_seconds_ago:4
master_sync_in_progress:0
slave_repl_offset:1499
slave_priority:100
slave_read_only:1
connected_slaves:2
slave0:ip=10.158.1.94,port=6382,state=online,offset=505,lag=1
slave1:ip=10.158.1.95,port=6382,state=online,offset=505,lag=0
master_repl_offset:505
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:504
[root@codis-server2 codis]#
[root@codis-server2 codis]# extern/redis-2.8.21/src/redis-cli -p 6381 info replication
# Replication
role:slave
master_host:codis-server1
master_port:6379
master_link_status:up
master_last_io_seconds_ago:0
master_sync_in_progress:0
slave_repl_offset:1513
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
[root@codis-server2 codis]#
[root@codis-server2 codis]# extern/redis-2.8.21/src/redis-cli -p 6382 info replication
# Replication
role:slave
master_host:codis-server2
master_port:6379
master_link_status:up
master_last_io_seconds_ago:3
master_sync_in_progress:0
slave_repl_offset:519
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
[root@codis-server2 codis]#
[root@codis-server2 codis]# extern/redis-2.8.21/src/redis-cli -p 6383 info replication
# Replication
role:slave
master_host:codis-server1
master_port:6383
master_link_status:down
master_last_io_seconds_ago:-1
master_sync_in_progress:0
slave_repl_offset:1
master_link_down_since_seconds:1471692056
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
[root@codis-server2 codis]#
|
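The per-port checks above can be scripted. A sketch that prints `master_link_status` for every local instance (it assumes the redis-cli path used in this article; masters simply have no such field):

```shell
# Print the replication link status of each local codis-server instance.
CLI=extern/redis-2.8.21/src/redis-cli
for port in 6379 6381 6382 6383; do
  status=$("$CLI" -p "$port" info replication 2>/dev/null \
            | tr -d '\r' | awk -F: '/^master_link_status/{print $2}')
  echo "port $port: ${status:-n/a}"   # "n/a" for masters or unreachable ports
done
```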
On the ZooKeeper side, the server group configuration is also recorded:
|
[root@zookeeper1 zookeeper]# bin/zkCli.sh -server 127.0.0.1:2181
/usr/bin/java
Connecting to 127.0.0.1:2181
2016-08-20 19:26:37,650 [myid:] - INFO [main:Environment@109] - Client environment:zookeeper.version=3.5.2-alpha-1750793, built on 06/30/2016 13:15 GMT
2016-08-20 19:26:37,655 [myid:] - INFO [main:Environment@109] - Client environment:host.name=zookeeper1
2016-08-20 19:26:37,655 [myid:] - INFO [main:Environment@109] - Client environment:java.version=1.8.0_101
2016-08-20 19:26:37,659 [myid:] - INFO [main:Environment@109] - Client environment:java.vendor=Oracle Corporation
2016-08-20 19:26:37,659 [myid:] - INFO [main:Environment@109] - Client environment:java.home=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.101-3.b13.el6_8.x86_64/jre
2016-08-20 19:26:37,659 [myid:] - INFO [main:Environment@109] - Client environment:java.class.path=/opt/zookeeper/bin/../build/classes:/opt/zookeeper/bin/../build/lib/*.jar:/opt/zookeeper/bin/../lib/slf4j-log4j12-1.7.5.jar:/opt/zookeeper/bin/../lib/slf4j-api-1.7.5.jar:/opt/zookeeper/bin/../lib/servlet-api-2.5-20081211.jar:/opt/zookeeper/bin/../lib/netty-3.10.5.Final.jar:/opt/zookeeper/bin/../lib/log4j-1.2.17.jar:/opt/zookeeper/bin/../lib/jline-2.11.jar:/opt/zookeeper/bin/../lib/jetty-util-6.1.26.jar:/opt/zookeeper/bin/../lib/jetty-6.1.26.jar:/opt/zookeeper/bin/../lib/javacc.jar:/opt/zookeeper/bin/../lib/jackson-mapper-asl-1.9.11.jar:/opt/zookeeper/bin/../lib/jackson-core-asl-1.9.11.jar:/opt/zookeeper/bin/../lib/commons-cli-1.2.jar:/opt/zookeeper/bin/../zookeeper-3.5.2-alpha.jar:/opt/zookeeper/bin/../src/java/lib/*.jar:/opt/zookeeper/bin/../conf:
2016-08-20 19:26:37,659 [myid:] - INFO [main:Environment@109] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2016-08-20 19:26:37,659 [myid:] - INFO [main:Environment@109] - Client environment:java.io.tmpdir=/tmp
2016-08-20 19:26:37,659 [myid:] - INFO [main:Environment@109] - Client environment:java.compiler=<NA>
2016-08-20 19:26:37,660 [myid:] - INFO [main:Environment@109] - Client environment:os.name=Linux
2016-08-20 19:26:37,660 [myid:] - INFO [main:Environment@109] - Client environment:os.arch=amd64
2016-08-20 19:26:37,660 [myid:] - INFO [main:Environment@109] - Client environment:os.version=2.6.32-279.el6.x86_64
2016-08-20 19:26:37,660 [myid:] - INFO [main:Environment@109] - Client environment:user.name=root
2016-08-20 19:26:37,660 [myid:] - INFO [main:Environment@109] - Client environment:user.home=/root
2016-08-20 19:26:37,661 [myid:] - INFO [main:Environment@109] - Client environment:user.dir=/opt/zookeeper
2016-08-20 19:26:37,661 [myid:] - INFO [main:Environment@109] - Client environment:os.memory.free=236MB
2016-08-20 19:26:37,666 [myid:] - INFO [main:Environment@109] - Client environment:os.memory.max=247MB
2016-08-20 19:26:37,666 [myid:] - INFO [main:Environment@109] - Client environment:os.memory.total=241MB
2016-08-20 19:26:37,670 [myid:] - INFO [main:ZooKeeper@855] - Initiating client connection, connectString=127.0.0.1:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@51521cc1
Welcome to ZooKeeper!
2016-08-20 19:26:37,722 [myid:127.0.0.1:2181] - INFO [main-SendThread(127.0.0.1:2181):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
JLine support is enabled
2016-08-20 19:26:37,921 [myid:127.0.0.1:2181] - INFO [main-SendThread(127.0.0.1:2181):ClientCnxn$SendThread@948] - Socket connection established, initiating session, client: /127.0.0.1:36384, server: 127.0.0.1/127.0.0.1:2181
[zk: 127.0.0.1:2181(CONNECTING) 0] 2016-08-20 19:26:38,022 [myid:127.0.0.1:2181] - INFO [main-SendThread(127.0.0.1:2181):ClientCnxn$SendThread@1381] - Session establishment complete on server 127.0.0.1/127.0.0.1:2181, sessionid = 0x10007afa5ca0011, negotiated timeout = 30000
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
[zk: 127.0.0.1:2181(CONNECTED) 0] ls /zk/codis/db_test/servers
[group_1, group_2, group_3]
[zk: 127.0.0.1:2181(CONNECTED) 1]
|
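zkCli.sh also accepts a command on the invocation line, which is handy for scripting the same check. A sketch, assuming the zkCli.sh from the ZooKeeper 3.5 install above and a reachable server:

```shell
# Non-interactive listing of the Codis server-group znodes.
ZK=127.0.0.1:2181
bin/zkCli.sh -server "$ZK" ls /zk/codis/db_test/servers 2>/dev/null \
  || echo "zookeeper not reachable"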
View the result through the Codis Dashboard.
Next, set the slot range for each server group.
Initialize the slot information first; if it was initialized before, the initialization must be forced:
|
[root@codis-server1 codis]# bin/codis-config slot init -f
{
"msg": "OK",
"ret": 0
}
[root@codis-server1 codis]#
|
Once initialization completes, the slots can be inspected in the dashboard:
http://10.158.1.94:18087/slots
Note the slot numbering highlighted in the dashboard view.
By default 1024 slots are initialized, numbered 0-1023. (This can occasionally differ.)
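For reference, Codis routes each key to a slot by hashing, roughly slot = crc32(key) % 1024. A sketch of that mapping (the modulus matches the default slot count above; the `slot_of` helper name is ours, and this invokes Python for the CRC):

```shell
# Sketch of the Codis key-to-slot mapping: slot = crc32(key) % slot_count.
slot_of() {
  python3 -c 'import sys, zlib; print(zlib.crc32(sys.argv[1].encode()) % 1024)' "$1"
}
slot_of name        # prints a slot number in 0..1023
slot_of user:1001
```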
Now assign the slot ranges:
|
[root@codis-server1 codis]# bin/codis-config slot range-set 0 500 1 online
{
"msg": "OK",
"ret": 0
}
[root@codis-server1 codis]# bin/codis-config slot range-set 501 702 2 online
{
"msg": "OK",
"ret": 0
}
[root@codis-server1 codis]# bin/codis-config slot range-set 703 1023 3 online
{
"msg": "OK",
"ret": 0
}
[root@codis-server1 codis]#
|
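A quick sanity check that the three ranges above cover all 1024 slots contiguously, in pure shell (the range list is hard-coded from the commands just shown):

```shell
# Verify the assigned ranges cover slots 0-1023 with no gaps or overlaps.
ranges="0:500 501:702 703:1023"
expect=0
ok=1
for r in $ranges; do
  lo=${r%%:*}; hi=${r##*:}
  if [ "$lo" -ne "$expect" ]; then
    echo "gap or overlap before slot $lo"; ok=0
  fi
  expect=$((hi + 1))
done
if [ "$ok" -eq 1 ] && [ "$expect" -eq 1024 ]; then
  echo "ranges cover 0-1023"
fi
```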
Once the ranges are assigned, the dashboard display changes accordingly.
Selecting a range shows the details of its corresponding server group at the top of the page.
Start the Codis proxy.
Command:
bin/codis-proxy -c config.ini -L /var/log/codis-proxy.log --cpu=2 --addr=0.0.0.0:19000 --http-addr=0.0.0.0:11000 &
|
[root@codis-server1 codis]# bin/codis-proxy -c config.ini -L /var/log/codis-proxy.log --cpu=2 --addr=0.0.0.0:19000 --http-addr=0.0.0.0:11000
_____ ____ ____/ / (_) _____
/ ___/ / __ \ / __ / / / / ___/
/ /__ / /_/ / / /_/ / / / (__ )
\___/ \____/ \__,_/ /_/ /____/
(... further startup output follows.)
|
Check the proxy status:
|
[root@codis-server1 ~]# cd /data/go_me/src/github.com/CodisLabs/codis/
[root@codis-server1 codis]# bin/codis-config proxy help
usage:
codis-config proxy list
codis-config proxy offline <proxy_name>
codis-config proxy online <proxy_name>
[root@codis-server1 codis]#
[root@codis-server1 codis]# bin/codis-config proxy list
[
{
"addr": "codis-server1:19000",
"debug_var_addr": "codis-server1:11000",
"description": "",
"id": "proxy_1",
"last_event": "",
"last_event_ts": 0,
"pid": 5751,
"start_at": "2016-08-20 19:48:56.526639956 +0800 CST",
"state": "online"
}
]
[root@codis-server1 codis]#
|
The proxy status can now also be seen in the dashboard:
http://10.158.1.94:18087/admin/
Proxy debug endpoint:
http://10.158.1.94:11000/debug/vars
With that, the Codis cluster setup is complete.
三、Testing.
|
[root@codis-server1 codis]# extern/redis-2.8.21/src/redis-cli -h codis-server1 -p 6379
codis-server1:6379> ping
PONG
codis-server1:6379> set name "Angela Baby"
OK
codis-server1:6379> get name
"Angela Baby"
codis-server1:6379>
codis-server1:6379> quit
[root@codis-server1 codis]#
[root@codis-server1 codis]#
[root@codis-server1 codis]# extern/redis-2.8.21/src/redis-cli -h codis-server2 -p 6381
codis-server2:6381> get name
"Angela Baby"
codis-server2:6381>
codis-server2:6381> quit
[root@codis-server1 codis]#
|
As you can see, the data replicates automatically.
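Note that the test above talks to the backend Redis instances directly. Application traffic should instead go through the proxy on port 19000, so that slot routing applies. A sketch, reusing the CLI path and the proxy address configured earlier in this article:

```shell
# Exercise the cluster through the Codis proxy rather than a backend instance.
CLI=extern/redis-2.8.21/src/redis-cli
PROXY_HOST=codis-server1
PROXY_PORT=19000
if "$CLI" -h "$PROXY_HOST" -p "$PROXY_PORT" ping >/dev/null 2>&1; then
  "$CLI" -h "$PROXY_HOST" -p "$PROXY_PORT" set name "Angela Baby"
  "$CLI" -h "$PROXY_HOST" -p "$PROXY_PORT" get name
else
  echo "proxy not reachable"
fi
```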
——————————
Done.