Syncing TiDB to MySQL
Posted by 修行从29开始
The replication path goes through Kafka.
1. Download the TiDB Binlog package.
Re-run ansible-playbook bootstrap.yml.
The pump and drainer binaries end up under resources/bin/ in the download; copy them to the deploy/bin/ directory, as sketched below.
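A sketch of that copy step (run from the tidb-ansible directory; the exact paths are assumptions based on a default layout):
cp resources/bin/pump resources/bin/drainer /home/tidb/deploy/bin/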
2. Modify inventory.ini:
Set enable_binlog = True.
Set the ZooKeeper address, or the run will fail; ZooKeeper is required here.
Comment out the original [pump_servers:children] section.
Add:
[pump_servers]
192.xxx.xxx.205
192.xxx.xxx.218
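For context, the variable changes live in the [all:vars] section of inventory.ini (a sketch; the variable names follow the tidb-ansible conventions of the time, and the ZooKeeper address is the one used later in this post):
[all:vars]
enable_binlog = True
zookeeper_addrs = "192.168.110.22:2181"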
3. Rolling-upgrade the TiDB nodes to deploy pump
Try ansible-playbook rolling_update.yml --tags=tidb.
It fails with:
TASK [stop pump by systemd] *******************************************************************************************************************************************************************
fatal: [192.xxx.xxx.205]: FAILED! => {"changed": false, "msg": "Could not find the requested service pump.service: host"}
As a test, add a custom systemd unit for pump:
[Unit]
Description=pump service
After=syslog.target network.target remote-fs.target nss-lookup.target
[Service]
LimitNOFILE=1000000
User=tidb
ExecStart=/home/tidb/deploy2/bin/pump -config pump.toml
Restart=always
RestartSec=15s
[Install]
WantedBy=multi-user.target
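Wiring the unit in is the standard systemd sequence (the unit file location is an assumption):
cp pump.service /etc/systemd/system/pump.service
systemctl daemon-reload
systemctl start pump.service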
The pump.toml it points at:
# pump Configuration.
# RPC address pump listens on (default "127.0.0.1:8250")
addr = "127.0.0.1:8250"
# RPC address pump advertises to clients (default "127.0.0.1:8250")
advertise-addr = ""
# maximum number of days to retain binlog (default 7); 0 keeps it forever
gc = 3
# path where pump stores its data
data-dir = "data.pump"
# interval in seconds at which pump sends heartbeats to PD
heartbeat-interval = 3
# addresses of the PD cluster nodes (default "http://127.0.0.1:2379")
pd-urls = "http://192.xxx.xxx.218:2379"
# unix socket address the service listens on (default unix:///tmp/pump.sock)
socket = "unix:///tmp/pump.sock"
# log file path
log-file = "/home/tidb/deploy2/log/pump.log"
systemd cannot start it. Check the log at /var/log/messages:
Nov 20 10:34:12 TEST-1807-V002 pump: 2018/11/20 10:34:12 main.go:23: [fatal] verifying flags error, open pump.toml: no such file or directory. See 'pump --help'.
Nov 20 10:34:12 TEST-1807-V002 systemd: pump.service: main process exited, code=exited, status=255/n/a
Nov 20 10:34:12 TEST-1807-V002 systemd: Unit pump.service entered failed state.
Nov 20 10:34:12 TEST-1807-V002 systemd: pump.service failed.
The problem is the config path: pump.toml is given as a relative path. Fix it, then run systemctl daemon-reload.
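The corrected ExecStart uses absolute paths, along these lines (paths taken from the ps output below):
ExecStart=/home/tidb/tidb-binlog/bin/pump -config /home/tidb/deploy/conf/pump.toml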
Start it again:
[root@TEST-1807-V002 ~]# ps -ef|grep pump
tidb 23063 1 0 10:37 ? 00:00:00 /home/tidb/tidb-binlog/bin/pump -config /home/tidb/deploy/conf/pump.toml
root 23092 21306 0 10:38 pts/1 00:00:00 grep --color=auto pump
It starts. Now try stopping it:
[root@TEST-1807-V002 ~]# systemctl stop pump.service
[root@TEST-1807-V002 ~]# ps -ef|grep pump
root 23136 21306 0 10:40 pts/1 00:00:00 grep --color=auto pump
It stops cleanly.
The manual unit above was really just to prove the flow works end to end; the rolling-upgrade playbook generates its own scripts and performs the actual deployment under deploy/.
Run ansible-playbook rolling_update.yml --tags=tidb again:
PLAY RECAP ************************************************************************************************************************************************************************************
192.xxx.xxx.205 : ok=48 changed=6 unreachable=0 failed=0
192.xxx.xxx.218 : ok=49 changed=12 unreachable=0 failed=0
192.xxx.xxx.219 : ok=8 changed=0 unreachable=0 failed=0
192.xxx.xxx.220 : ok=8 changed=0 unreachable=0 failed=0
192.xxx.xxx.221 : ok=9 changed=0 unreachable=0 failed=0
localhost : ok=1 changed=0 unreachable=0 failed=0
Congrats! All goes well. :-)
This confirms the TiDB nodes upgraded successfully and the pump processes were deployed.
4. Deploy drainer
The official documentation states it must be deployed manually.
a. Write the drainer configuration file:
# drainer Configuration.
# address drainer serves on (default "127.0.0.1:8249")
addr = "192.xxx.xxx.218:8249"
# interval in seconds at which to query PD for online pumps (default 10)
detect-interval = 10
# path where drainer stores its data (default "data.drainer")
data-dir = "/home/tidb/deploy2/data.drainer"
#pd-urls = "http://192.xxx.xxx.205:2379,http://192.xxx.xxx.218:2379,http://192.xxx.xxx.219:2379"
# log file path
log-file = "/home/tidb/deploy2/log/drainer.log"
zookeeper-addrs="192.168.110.22:2181"
kafka-addrs="192.xxx.xxx.66:9092,192.xxx.xxx.67:9092"
kafka-version="0.10.2.1"
# Syncer Configuration.
[syncer]
## schema filter list (default "INFORMATION_SCHEMA,PERFORMANCE_SCHEMA,mysql,test");
## rename DDL on tables in ignored schemas is not supported
ignore-schemas = "INFORMATION_SCHEMA,PERFORMANCE_SCHEMA,mysql"
# number of SQL statements per transaction written downstream (default 1)
txn-batch = 1
# downstream sync concurrency; higher values give better throughput (default 1)
worker-count = 1
# whether to disable splitting the SQLs of a single binlog; if true, each binlog is
# restored in order as a single transaction (set to false when the downstream is MySQL)
disable-dispatch = false
# downstream service type for drainer (default "mysql")
# valid values: "mysql", "pb"
db-type = "mysql"
# replicate-do-db takes priority over replicate-do-table when they name the same db;
# regular expressions are supported:
# a value starting with '~' is treated as a regular expression
#replicate-do-db = ["~^b.*","s1"]
#[[syncer.replicate-do-table]]
#db-name ="test"
#tbl-name = "log"
#[[syncer.replicate-do-table]]
#db-name ="test"
#tbl-name = "~^a.*"
# downstream database connection settings when db-type is "mysql"
[syncer.to]
host = "192.168.216.22"
user = "drainer"
password = "123456"
port = 3306
The items that actually need changing are addr, data-dir, log-file, the zookeeper/kafka settings, and the [syncer.to] downstream connection.
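The downstream MySQL needs an account matching [syncer.to]. A minimal sketch of creating one (the exact privilege list is an assumption; drainer needs DDL and DML rights on the replicated schemas):
CREATE USER 'drainer'@'%' IDENTIFIED BY '123456';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, INDEX ON *.* TO 'drainer'@'%';
FLUSH PRIVILEGES;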
b. Configure systemd for drainer:
[Unit]
Description=drainer service
After=syslog.target network.target remote-fs.target nss-lookup.target
[Service]
LimitNOFILE=1000000
User=tidb
ExecStart=/home/tidb/deploy2/bin/drainer -config /home/tidb/deploy2/conf/drainer.toml
Restart=always
RestartSec=15s
[Install]
WantedBy=multi-user.target
systemctl daemon-reload
To have something to verify, create a new database on the TiDB side:
MySQL [antiml]> create database caoyf;
Query OK, 0 rows affected (1.04 sec)
MySQL [caoyf]> create table test(id int);
Query OK, 0 rows affected (2.04 sec)
MySQL [caoyf]> insert into test values(1);
Query OK, 1 row affected (0.14 sec)
MySQL [caoyf]> select * from test;
+------+
| id |
+------+
| 1 |
| 2 |
+------+
2 rows in set (0.01 sec)
c. Generate the drainer savepoint file:
/home/tidb/deploy2/bin/drainer -gen-savepoint --data-dir=/home/tidb/deploy2/data.drainer --pd-urls=http://192.xxx.xxx.205:2379,http://192.xxx.xxx.218:2379,http://192.xxx.xxx.219:2379
This records the starting TSO from which drainer will begin replicating. Inspect the generated file in the data directory:
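For reference, the savepoint is a small file recording the commit TSO (a sketch; the exact layout varies by tidb-binlog version, and the TSO value here is illustrative):
# data.drainer/savePoint
commitTS = 404431999019253762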
For the test, first take a full backup of the data:
./mydumper -h 192.xxx.xxx.205 -P 4000 -u root -p 'xxxx' -t 4 -F 64 -B caoyf --skip-tz-utc -o /tmp/caoyf
Copy the dump to the target MySQL host.
Restore it there:
./loader -u root -p xxxxx -P 3306 -d /tmp/caoyf
Confirm the restore:
mysql> show tables;
+-----------------+
| Tables_in_caoyf |
+-----------------+
| test |
+-----------------+
1 row in set (0.00 sec)
mysql> select * from test;
+------+
| id |
+------+
| 1 |
| 2 |
+------+
2 rows in set (0.00 sec)
Then insert a few more rows on the TiDB side:
MySQL [(none)]> use caoyf
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
MySQL [caoyf]> insert into test values(3);
Query OK, 1 row affected (0.02 sec)
MySQL [caoyf]> insert into test values(4);
Query OK, 1 row affected (0.02 sec)
d. Start drainer (fingers crossed!)
It errors out: it cannot connect to Kafka.
pump itself has to be pointed at Kafka. Note that the flags differ across versions, so check the documentation for your exact version and run the binary with -help to see what it accepts; a quick check is sketched below.
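One way to list the Kafka-related flags this pump build supports (the grep pattern is just illustrative):
/home/tidb/deploy2/bin/pump -help 2>&1 | grep -iE 'kafka|zookeeper'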
Then modify /home/tidb/deploy2/scripts/run_pump.sh:
exec bin/pump \
    --gc="5" \
    --addr="0.0.0.0:8250" \
    --advertise-addr="192.xxx.xxx.218:8250" \
    --pd-urls="http://192.xxx.xxx.205:2379,http://192.xxx.xxx.218:2379,http://192.xxx.xxx.219:2379" \
    --data-dir="/home/tidb/deploy2/data.pump" \
    --socket="/home/tidb/deploy2/status/pump.sock" \
    --log-file="/home/tidb/deploy2/log/pump.log" \
    --kafka-addrs="192.xxx.xxx.67:9092,192.xxx.xxx.66:9092" \
    --zookeeper-addrs="192.168.110.22:2181" \
    --kafka-version="0.10.2.1" \
    --config=conf/pump.toml
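After editing the script, restart pump so the new flags take effect (assuming pump is managed by the systemd unit set up earlier):
systemctl restart pump.service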
Then start drainer:
systemctl start drainer.service
And watch the log:
2018/11/21 15:30:31 config.go:382: [sarama] ClientID is the default of 'sarama', you should consider setting it to something application-specific.
2018/11/21 15:30:31 broker.go:146: [sarama] Connected to broker at 192.xxx.xxx.66:9092 (registered as #1)
2018/11/21 15:30:31 consumer.go:712: [sarama] consumer/broker/1 added subscription to 6626154907404840978_TEST-1807-V003_8250/0
2018/11/21 15:34:11 syncer.go:453: [info] [ddl][start]create database test_caoyf;[commit ts]404431999019253762[pos]{0 908}
2018/11/21 15:34:11 syncer.go:255: [info] [write save point]404431999019253762[positions]map[TEST-1807-V002:8250:{0 908}]
2018/11/21 15:34:11 syncer.go:462: [info] [ddl][end]create database test_caoyf;[commit ts]404431999019253762[pos]{0 908}
2018/11/21 15:35:51 syncer.go:453: [info] [ddl][start]use `test_caoyf`; create table test1(id int);;[commit ts]404432026819624961[pos]{0 944}
2018/11/21 15:35:51 syncer.go:255: [info] [write save point]404432026819624961[positions]map[TEST-1807-V002:8250:{0 944}]
2018/11/21 15:35:51 syncer.go:462: [info] [ddl][end]use `test_caoyf`; create table test1(id int);;[commit ts]404432026819624961[pos]{0 944}
After some more tests, the log shows success; checking the downstream MySQL, database creation, table creation, and simple inserts all replicate without problems.
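A final spot check on the downstream (a sketch of the expected result, assuming replication has caught up; the table should now hold the two restored rows plus the two inserted afterwards):
mysql> select * from caoyf.test;
+------+
| id   |
+------+
|    1 |
|    2 |
|    3 |
|    4 |
+------+
4 rows in set (0.00 sec)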