Setting up and troubleshooting a mysql-group-replication test environment
Posted by 蒋乐兴的技术随笔
MySQL Group Replication is a strongly consistent, highly available clustering solution introduced in MySQL 5.7.17.
1. Environment planning
Host IP           Hostname
172.16.192.201    balm001
172.16.192.202    balm002
172.16.192.203    balm003
2. MySQL configuration files
The configuration for balm001 is as follows:
[mysql]
auto-rehash

[mysqld]
####: for global
user                                   =mysql                   # mysql
basedir                                =/usr/local/mysql        # /usr/local/mysql/
datadir                                =/usr/local/mysql_datas/3306   # /usr/local/mysql/data
server_id                              =1                       # 0
port                                   =3306                    # 3306
character_set_server                   =utf8                    # latin1
log_timestamps                         =system                  # utc
socket                                 =/tmp/mysql.sock         # /tmp/mysql.sock
read_only                              =1                       # off
skip-slave-start                       =1                       #
auto_increment_increment               =1                       # 1
auto_increment_offset                  =1                       # 1
lower_case_table_names                 =1                       # 0
secure_file_priv                       =                        # null

####: for binlog
binlog_format                          =row                     # row
log_bin                                =mysql-bin               # off
binlog_rows_query_log_events           =on                      # off
log_slave_updates                      =on                      # off
expire_logs_days                       =4                       # 0
binlog_cache_size                      =32768                   # 32768(32k)
binlog_checksum                        =none                    # CRC32
sync_binlog                            =1                       # 1

####: for error-log
log_error                              =error.log               # /usr/local/mysql/data/localhost.localdomain.err

####: for slow query log

####: for gtid
gtid_executed_compression_period       =1000                    # 1000
gtid_mode                              =on                      # off
enforce_gtid_consistency               =on                      # off

####: for replication
master_info_repository                 =table                   # file
relay_log_info_repository              =table                   # file

####: for group replication
transaction_write_set_extraction       =XXHASH64                # off
loose-group_replication_group_name     ="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"   #
loose-group_replication_start_on_boot  =off                     # off
loose-group_replication_local_address  ="172.16.192.201:24901"  #
loose-group_replication_group_seeds    ="172.16.192.201:24901,172.16.192.202:24901,172.16.192.203:24901"
loose-group_replication_bootstrap_group =off                    # off

####: for innodb
default_storage_engine                 =innodb                  # innodb
default_tmp_storage_engine             =innodb                  # innodb
innodb_data_file_path                  =ibdata1:12M:autoextend  # ibdata1:12M:autoextend
innodb_temp_data_file_path             =ibtmp1:12M:autoextend   # ibtmp1:12M:autoextend
innodb_buffer_pool_filename            =ib_buffer_pool          # ib_buffer_pool
innodb_log_group_home_dir              =./                      # ./
innodb_log_files_in_group              =2                       # 2
innodb_log_file_size                   =48M                     # 50331648(48M)
innodb_file_format                     =Barracuda               # Barracuda
innodb_file_per_table                  =on                      # on
innodb_page_size                       =16k                     # 16384(16k)
innodb_thread_concurrency              =0                       # 0
innodb_read_io_threads                 =4                       # 4
innodb_write_io_threads                =4                       # 4
innodb_purge_threads                   =4                       # 4
innodb_print_all_deadlocks             =on                      # off
innodb_deadlock_detect                 =on                      # on
innodb_lock_wait_timeout               =50                      # 50
innodb_spin_wait_delay                 =6                       # 6
innodb_autoinc_lock_mode               =2                       # 1
innodb_stats_persistent                =on                      # on
innodb_stats_persistent_sample_pages   =20                      # 20
innodb_buffer_pool_instances           =1                       # 1
innodb_adaptive_hash_index             =on                      # on
innodb_change_buffering                =all                     # all
innodb_change_buffer_max_size          =25                      # 25
innodb_flush_neighbors                 =1                       # 1
innodb_flush_method                    =O_DIRECT                #
innodb_doublewrite                     =on                      # on
innodb_log_buffer_size                 =16M                     # 16777216(16M)
innodb_flush_log_at_timeout            =1                       # 1
innodb_flush_log_at_trx_commit         =1                       # 1
innodb_buffer_pool_size                =134217728               # 134217728(128M)
autocommit                             =1                       # 1

####: for performance_schema
performance_schema                                           =on   # on
performance_schema_consumer_events_stages_current            =on   # off
performance_schema_consumer_events_stages_history            =on   # off
performance_schema_consumer_events_stages_history_long       =off  # off
performance_schema_consumer_statements_digest                =on   # on
performance_schema_consumer_events_statements_current        =on   # on
performance_schema_consumer_events_statements_history        =on   # on
performance_schema_consumer_events_statements_history_long   =off  # off
performance_schema_consumer_events_waits_current             =on   # off
performance_schema_consumer_events_waits_history             =on   # off
performance_schema_consumer_events_waits_history_long        =off  # off
performance_schema_consumer_global_instrumentation           =on   # on
performance_schema_consumer_thread_instrumentation           =on   # on
The configuration for balm002 is identical to balm001 except for the following two settings:

server_id                              =2                       # 0
loose-group_replication_local_address  ="172.16.192.202:24901"  #
The configuration for balm003 is identical to balm001 except for the following two settings:

server_id                              =3                       # 0
loose-group_replication_local_address  ="172.16.192.203:24901"  #
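Once each instance has been started with its configuration file, a quick sanity check helps confirm that the per-node settings were picked up. A minimal sketch (the group_replication_* variables only become visible after the plugin is installed in the next steps):

-- these must differ on every member
show variables like 'server_id';

-- gtid and write-set extraction are prerequisites for group replication
show variables like 'gtid_mode';
show variables like 'enforce_gtid_consistency';
show variables like 'transaction_write_set_extraction';

-- after "install plugin group_replication", the loose- prefixed settings show up here
show variables like 'group_replication%';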
3. Configure group replication on balm001 (balm001 will act as the seed node of the cluster)
set sql_log_bin=0;
create user rpl_user@'%' identified by '123456';
grant replication slave, replication client on *.* to rpl_user@'%';
create user rpl_user@'127.0.0.1' identified by '123456';
grant replication slave, replication client on *.* to rpl_user@'127.0.0.1';
create user rpl_user@'localhost' identified by '123456';
grant replication slave, replication client on *.* to rpl_user@'localhost';
set sql_log_bin=1;

change master to master_user='rpl_user', master_password='123456' for channel 'group_replication_recovery';

install plugin group_replication soname 'group_replication.so';

set global group_replication_bootstrap_group=on;
start group_replication;
set global group_replication_bootstrap_group=off;
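Before adding the other nodes it is worth confirming that the seed node really bootstrapped the group. A quick verification sketch (the MEMBER_ID values will differ on your installation):

-- confirm the plugin is loaded and active
select plugin_name, plugin_status
  from information_schema.plugins
 where plugin_name = 'group_replication';

-- at this point balm001 should be the only member, in ONLINE state
select member_host, member_port, member_state
  from performance_schema.replication_group_members;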
4. Configure balm002 and balm003
set sql_log_bin=0;
create user rpl_user@'%' identified by '123456';
grant replication slave, replication client on *.* to rpl_user@'%';
create user rpl_user@'127.0.0.1' identified by '123456';
grant replication slave, replication client on *.* to rpl_user@'127.0.0.1';
create user rpl_user@'localhost' identified by '123456';
grant replication slave, replication client on *.* to rpl_user@'localhost';
set sql_log_bin=1;

change master to master_user='rpl_user', master_password='123456' for channel 'group_replication_recovery';

install plugin group_replication soname 'group_replication.so';

-- non-seed nodes do not bootstrap; they simply start group replication
start group_replication;
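A joining member first shows up as RECOVERING and switches to ONLINE once distributed recovery finishes. If a member stays in RECOVERING, the replication channels created by the plugin can be inspected like ordinary channels; a troubleshooting sketch run on the joining member:

-- distributed recovery uses a regular asynchronous channel towards the donor
show slave status for channel 'group_replication_recovery'\G

-- the applier channel processes transactions received from the group
show slave status for channel 'group_replication_applier'\G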
5. Verify that MySQL group replication is working
select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 2263e491-05ad-11e7-b3ef-000c29c965ef | balm001     |        3306 | ONLINE       |
| group_replication_applier | 441db987-0653-11e7-9d42-000c2922addb | balm003     |        3306 | ONLINE       |
| group_replication_applier | 49ed2458-05b0-11e7-91af-000c29cac83b | balm002     |        3306 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
If the MEMBER_STATE column shows ONLINE for every member, the cluster is healthy.
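In the default single-primary mode only one member accepts writes. A sketch for finding the primary and running a quick smoke test (the gr_test database and table names here are hypothetical, chosen only for illustration):

-- UUID of the current primary (single-primary mode)
select variable_value as primary_member_id
  from performance_schema.global_status
 where variable_name = 'group_replication_primary_member';

-- on the primary: group replication requires a primary key on every table
create database if not exists gr_test;
create table if not exists gr_test.t1 (id int primary key, note varchar(32));
insert into gr_test.t1 values (1, 'hello group replication');

-- on balm002 / balm003: the row should be visible almost immediately
select * from gr_test.t1;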
6. Error log output during the setup (this excerpt is from balm002 joining the group)
2017-03-30T21:48:57.781612+08:00 4 [Note] Plugin group_replication reported: 'Group communication SSL configuration: group_replication_ssl_mode: "DISABLED"'
2017-03-30T21:48:57.781849+08:00 4 [Note] Plugin group_replication reported: '[GCS] Added automatically IP ranges 127.0.0.1/8,172.16.192.202/16 to the whitelist'
2017-03-30T21:48:57.782666+08:00 4 [Note] Plugin group_replication reported: '[GCS] SSL was not enabled'
2017-03-30T21:48:57.782703+08:00 4 [Note] Plugin group_replication reported: 'Initialized group communication with configuration: group_replication_group_name: "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"; group_replication_local_address: "172.16.192.202:24901"; group_replication_group_seeds: "172.16.192.201:24901,172.16.192.202:24901,172.16.192.203:24901"; group_replication_bootstrap_group: false; group_replication_poll_spin_loops: 0; group_replication_compression_threshold: 1000000; group_replication_ip_whitelist: "AUTOMATIC"'
2017-03-30T21:48:57.783601+08:00 6 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_applier' executed'. Previous state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''.
2017-03-30T21:48:57.807438+08:00 4 [Note] Plugin group_replication reported: 'Group Replication applier module successfully initialized!'
2017-03-30T21:48:57.807513+08:00 4 [Note] Plugin group_replication reported: 'auto_increment_increment is set to 7'
2017-03-30T21:48:57.807522+08:00 4 [Note] Plugin group_replication reported: 'auto_increment_offset is set to 2'
2017-03-30T21:48:57.807902+08:00 9 [Note] Slave SQL thread for channel 'group_replication_applier' initialized, starting replication in log 'FIRST' at position 0, relay log './balm002-relay-bin-group_replication_applier.000001' position: 4
2017-03-30T21:48:57.812977+08:00 0 [Note] Plugin group_replication reported: 'state 0 action xa_init'
2017-03-30T21:48:57.835709+08:00 0 [Note] Plugin group_replication reported: 'Successfully bound to 0.0.0.0:24901 (socket=46).'
2017-03-30T21:48:57.835753+08:00 0 [Note] Plugin group_replication reported: 'Successfully set listen backlog to 32 (socket=46)!'
2017-03-30T21:48:57.835759+08:00 0 [Note] Plugin group_replication reported: 'Successfully unblocked socket (socket=46)!'
2017-03-30T21:48:57.835792+08:00 0 [Note] Plugin group_replication reported: 'connecting to 172.16.192.202 24901'
2017-03-30T21:48:57.835918+08:00 0 [Note] Plugin group_replication reported: 'client connected to 172.16.192.202 24901 fd 47'
2017-03-30T21:48:57.835962+08:00 0 [Note] Plugin group_replication reported: 'Ready to accept incoming connections on 0.0.0.0:24901 (socket=46)!'
2017-03-30T21:48:57.836055+08:00 0 [Note] Plugin group_replication reported: 'connecting to 172.16.192.202 24901'
2017-03-30T21:48:57.836090+08:00 0 [Note] Plugin group_replication reported: 'client connected to 172.16.192.202 24901 fd 63'
2017-03-30T21:48:57.836159+08:00 0 [Note] Plugin group_replication reported: 'connecting to 172.16.192.202 24901'
2017-03-30T21:48:57.836192+08:00 0 [Note] Plugin group_replication reported: 'client connected to 172.16.192.202 24901 fd 65'
2017-03-30T21:48:57.836255+08:00 0 [Note] Plugin group_replication reported: 'connecting to 172.16.192.202 24901'
2017-03-30T21:48:57.836285+08:00 0 [Note] Plugin group_replication reported: 'client connected to 172.16.192.202 24901 fd 67'
2017-03-30T21:48:57.836350+08:00 0 [Note] Plugin group_replication reported: 'connecting to 172.16.192.202 24901'
2017-03-30T21:48:57.836382+08:00 0 [Note] Plugin group_replication reported: 'client connected to 172.16.192.202 24901 fd 69'
2017-03-30T21:48:57.836460+08:00 0 [Note] Plugin group_replication reported: 'connecting to 172.16.192.202 24901'
2017-03-30T21:48:57.836548+08:00 0 [Note] Plugin group_replication reported: 'client connected to 172.16.192.202 24901 fd 71'
2017-03-30T21:48:57.836621+08:00 0 [Note] Plugin group_replication reported: 'connecting to 172.16.192.201 24901'
2017-03-30T21:48:57.836983+08:00 0 [Note] Plugin group_replication reported: 'client connected to 172.16.192.201 24901 fd 73'
2017-03-30T21:48:59.097399+08:00 0 [Note] Plugin group_replication reported: 'state 4257 action xa_snapshot'
2017-03-30T21:48:59.097702+08:00 0 [Note] Plugin group_replication reported: 'new state x_recover'
2017-03-30T21:48:59.097720+08:00 0 [Note] Plugin group_replication reported: 'state 4277 action xa_complete'
2017-03-30T21:48:59.097829+08:00 0 [Note] Plugin group_replication reported: 'new state x_run'
2017-03-30T21:49:00.103807+08:00 0 [Note] Plugin group_replication reported: 'Starting group replication recovery with view_id 14908590232397539:7'
2017-03-30T21:49:00.105031+08:00 12 [Note] Plugin group_replication reported: 'Establishing group recovery connection with a possible donor. Attempt 1/10'
2017-03-30T21:49:00.125578+08:00 12 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_recovery' executed'. Previous state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='balm001', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''.
2017-03-30T21:49:00.130977+08:00 12 [Note] Plugin group_replication reported: 'Establishing connection to a group replication recovery donor 2263e491-05ad-11e7-b3ef-000c29c965ef at balm001 port: 3306.'
2017-03-30T21:49:00.131254+08:00 14 [Warning] Storing MySQL user name or password information in the master info repository is not secure and is therefore not recommended. Please consider using the USER and PASSWORD connection options for START SLAVE; see the 'START SLAVE Syntax' in the MySQL Manual for more information.
2017-03-30T21:49:00.136503+08:00 14 [Note] Slave I/O thread for channel 'group_replication_recovery': connected to master 'rpl_user@balm001:3306', replication started in log 'FIRST' at position 4
2017-03-30T21:49:00.148622+08:00 15 [Note] Slave SQL thread for channel 'group_replication_recovery' initialized, starting replication in log 'FIRST' at position 0, relay log './balm002-relay-bin-group_replication_recovery.000001' position: 4
2017-03-30T21:49:00.176671+08:00 12 [Note] Plugin group_replication reported: 'Terminating existing group replication donor connection and purging the corresponding logs.'
2017-03-30T21:49:00.176723+08:00 15 [Note] Slave SQL thread for channel 'group_replication_recovery' exiting, replication stopped in log 'mysql-bin.000003' at position 1563
2017-03-30T21:49:00.177801+08:00 14 [Note] Slave I/O thread killed while reading event for channel 'group_replication_recovery'
2017-03-30T21:49:00.177833+08:00 14 [Note] Slave I/O thread exiting for channel 'group_replication_recovery', read up to log 'mysql-bin.000003', position 1887
2017-03-30T21:49:00.182909+08:00 12 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_recovery' executed'. Previous state master_host='balm001', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''.
2017-03-30T21:49:00.189188+08:00 0 [Note] Plugin group_replication reported: 'This server was declared online within the replication group'
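Once every member has been declared online, the certification and applier activity that the log above describes can also be observed through performance_schema. A monitoring sketch:

-- certification / applier statistics maintained by the group replication plugin
select member_id,
       count_transactions_in_queue,
       count_transactions_checked,
       count_conflicts_detected,
       transactions_committed_all_members
  from performance_schema.replication_group_member_stats;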
7. Pitfalls encountered during the setup
The START GROUP_REPLICATION command failed as there was an error when initializ ... ...
It finally turned out that this error was caused by missing DNS configuration; updating /etc/hosts accordingly fixes it. Make sure every host in the cluster can resolve the hostnames of all the other members.
# cat /etc/hosts
127.0.0.1       localhost localhost.localdomain localhost4 localhost4.localdomain4
::1             localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.192.22   studio
172.16.192.201  balm001
172.16.192.202  balm002
172.16.192.203  balm003