Using MyCat for MySQL Sharding and Read/Write Splitting


Download MyCat

wget -O /soft/download/mycat.tar.gz http://dl.mycat.io/1.6-RELEASE/Mycat-server-1.6-RELEASE-20161028204710-linux.tar.gz


cd /soft/download
tar -zxvf mycat.tar.gz


Start MyCat

cd mycat/bin
./mycat start      # start MyCat
./mycat stop       # stop MyCat
./mycat console    # start in the foreground (console mode)
./mycat status     # check whether MyCat is running


Without the configuration files in place, MyCat will not start.


Set up the configuration files

Before editing the configuration files, the backend MySQL databases must be prepared: the databases and tables referenced below have to exist already. If you also want read/write splitting, master-slave replication between the MySQL servers must be working first.

The setup used in the configuration below: MySQL master 10.201.4.70:3306, MySQL slave 10.201.4.71:3306.
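Before going further, it is worth verifying on the slave (10.201.4.71) that replication is actually running. A minimal check with standard MySQL statements, assuming replication has already been configured between the two servers:

-- Run on the slave, 10.201.4.71
SHOW SLAVE STATUS\G
-- Slave_IO_Running and Slave_SQL_Running should both show "Yes",
-- and Seconds_Behind_Master should be 0 or close to it.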

Statements to create the databases on the master:

create database if not exists dustdetection1;
create database if not exists dustdetection2;
create database if not exists dustdetection3;


The corresponding tables also have to be created in each of these databases, for example as sketched below.
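The article does not show the table structure, so the DDL below is only a hypothetical sketch of the sharded dust_detection table (id is the sharding column used by the mod-long rule later on); the same statement has to be run in dustdetection1, dustdetection2 and dustdetection3:

-- Hypothetical columns, for illustration only; replace with the real dust_detection definition.
CREATE TABLE IF NOT EXISTS dustdetection1.dust_detection (
    id           BIGINT NOT NULL PRIMARY KEY,  -- sharding key referenced by the mod-long rule
    equipment_id BIGINT,
    dust_value   DECIMAL(10,2),
    create_time  DATETIME
);
-- Repeat the same CREATE TABLE for dustdetection2 and dustdetection3
-- (replication will copy everything to the slave).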

server.xml

<?xml version="1.0" encoding="UTF-8"?>
<!-- Licensed under the Apache License, Version 2.0 (the "License");
     you may not use this file except in compliance with the License.
     You may obtain a copy of the License at

         http://www.apache.org/licenses/LICENSE-2.0

     Unless required by applicable law or agreed to in writing, software
     distributed under the License is distributed on an "AS IS" BASIS,
     WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
     See the License for the specific language governing permissions and
     limitations under the License. -->
<!DOCTYPE mycat:server SYSTEM "server.dtd">
<mycat:server xmlns:mycat="http://io.mycat/">
    <system>
        <property name="useSqlStat">0</property>  <!-- 1 enables real-time SQL statistics, 0 disables it -->
        <property name="useGlobleTableCheck">0</property>  <!-- 1 enables global-table consistency checking, 0 disables it -->
        <property name="sequnceHandlerType">2</property>
        <!-- <property name="useCompression">1</property> -->  <!-- 1 enables the MySQL compressed protocol -->
        <!-- <property name="fakeMySQLVersion">5.6.20</property> -->  <!-- MySQL version number reported to clients -->
        <!-- <property name="processorBufferChunk">40960</property> -->
        <!-- <property name="processors">1</property>
             <property name="processorExecutor">32</property> -->
        <!-- default is type 0: DirectByteBufferPool | type 1: ByteBufferArena -->
        <property name="processorBufferPoolType">0</property>
        <!-- default 65535 (64K): maximum text length for SQL parsing -->
        <!-- <property name="maxStringLiteralLength">65535</property> -->
        <!-- <property name="sequnceHandlerType">0</property> -->
        <!-- <property name="backSocketNoDelay">1</property> -->
        <!-- <property name="frontSocketNoDelay">1</property> -->
        <!-- <property name="processorExecutor">16</property> -->
        <!-- <property name="serverPort">8066</property>
             <property name="managerPort">9066</property>
             <property name="idleTimeout">300000</property>
             <property name="bindIp">0.0.0.0</property>
             <property name="frontWriteQueueSize">4096</property>
             <property name="processors">32</property> -->
        <!-- Distributed transaction switch: 0 = do not filter distributed transactions,
             1 = filter them (unless the transaction only touches global tables),
             2 = do not filter, but log distributed transactions -->
        <property name="handleDistributedTransactions">0</property>
        <!-- off-heap memory for merge/order/group/limit: 1 on, 0 off -->
        <property name="useOffHeapForMerge">1</property>
        <!-- unit: m -->
        <property name="memoryPageSize">1m</property>
        <!-- unit: k -->
        <property name="spillsFileBufferSize">1k</property>
        <property name="useStreamOutput">0</property>
        <!-- unit: m -->
        <property name="systemReserveMemorySize">384m</property>
        <!-- whether to use ZooKeeper to coordinate switchover -->
        <property name="useZKSwitch">true</property>
    </system>
    <!-- Global SQL firewall settings -->
    <!--
    <firewall>
        <whitehost>
            <host host="127.0.0.1" user="mycat"/>
            <host host="127.0.0.2" user="mycat"/>
        </whitehost>
        <blacklist check="false">
        </blacklist>
    </firewall>
    -->
    <user name="root">
        <property name="password">password</property>
        <property name="schemas">dustdetection</property>
        <!-- Table-level DML privileges -->
        <!--
        <privileges check="false">
            <schema name="TESTDB" dml="0110">
                <table name="tb01" dml="0000"></table>
                <table name="tb02" dml="1111"></table>
            </schema>
        </privileges>
        -->
    </user>
    <!--
    <user name="user">
        <property name="password">user</property>
        <property name="schemas">TESTDB</property>
        <property name="readOnly">true</property>
    </user>
    -->
</mycat:server>


schema.xml

<?xml version="1.0"?><!DOCTYPE mycat:schema SYSTEM "schema.dtd"><mycat:schema xmlns:mycat="http://io.mycat/">
<schema name="dustdetection" checkSQLschema="false" sqlMaxLimit="100"><!-- auto sharding by id (long) --><table name="alarm_info" dataNode="dn1" />
<table name="alarm_rule" dataNode="dn1" /><table name="alarm_rule_name" dataNode="dn1" /><table name="company_info" dataNode="dn1" /><table name="company_user_info" dataNode="dn1" /><table name="databasechangelog" dataNode="dn1" /><table name="databasechangeloglock" dataNode="dn1" /><table name="equipment_factor" dataNode="dn1" /><table name="equipment_info" dataNode="dn1" /><table name="equipment_log" dataNode="dn1" /><table name="equipment_rule" dataNode="dn1" /><table name="factor_code" dataNode="dn1" /><table name="jhi_authority" dataNode="dn1" /><table name="jhi_persistent_audit_event" dataNode="dn1" /><table name="jhi_persistent_audit_evt_data" dataNode="dn1" /><table name="jhi_user" dataNode="dn1" /><table name="jhi_user_authority" dataNode="dn1" /><table name="jhi_user_copy1" dataNode="dn1" /><table name="maintenance_record" dataNode="dn1" /><table name="sys_dict" dataNode="dn1" /><table name="dust_detection" dataNode="dn1,dn2,dn3" rule="mod-long" /><!-- global table is auto cloned to all defined data nodes ,so can joinwith any table whose sharding node is in the same data node --><!-- random sharding using mod sharind rule -->
<!--<table name="hotnews" primaryKey="ID" autoIncrement="true" dataNode="dn1,dn2,dn3" rule="mod-long" />--><!-- <table name="dual" primaryKey="ID" dataNode="dnx,dnoracle2" type="global"needAddLimit="false"/> <table name="worker" primaryKey="ID" dataNode="jdbc_dn1,jdbc_dn2,jdbc_dn3"rule="mod-long" /> -->
<!-- <table name="oc_call" primaryKey="ID" dataNode="dn1$0-743" rule="latest-month-calldate"/> --></schema><!-- <dataNode name="dn1$0-743" dataHost="localhost1" database="db$0-743"/> --><dataNode name="dn1" dataHost="n470" database="dustdetection1" /><dataNode name="dn2" dataHost="n470" database="dustdetection2" /><dataNode name="dn3" dataHost="n470" database="dustdetection3" /><!--<dataNode name="dn4" dataHost="sequoiadb1" database="SAMPLE" /> <dataNode name="jdbc_dn1" dataHost="jdbchost" database="db1" /><dataNodename="jdbc_dn2" dataHost="jdbchost" database="db2" /><dataNode name="jdbc_dn3" dataHost="jdbchost" database="db3" /> --><dataHost name="n470" maxCon="1000" minCon="10" balance="0" writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100"><heartbeat>select user()</heartbeat><!-- can have multi write hosts --><writeHost host="hostM1" url="n470:3306" user="root" password="password"><!-- can have multi read hosts --><readHost host="hostS2" url="n471:3306" user="root" password="password" /></writeHost><!---<writeHost host="hostS1" url="localhost:3316" user="root" password="password" />--><!-- <writeHost host="hostM2" url="localhost:3316" user="root" password="123456"/> --></dataHost><!--<dataHost name="sequoiadb1" maxCon="1000" minCon="1" balance="0" dbType="sequoiadb" dbDriver="jdbc"><heartbeat> </heartbeat> <writeHost host="hostM1" url="sequoiadb://1426587161.dbaas.sequoialab.net:11920/SAMPLE" user="jifeng" password="jifeng"></writeHost> </dataHost>
<dataHost name="oracle1" maxCon="1000" minCon="1" balance="0" writeType="0" dbType="oracle" dbDriver="jdbc"> <heartbeat>select 1 from dual</heartbeat><connectionInitSql>alter session set nls_date_format='yyyy-mm-dd hh24:mi:ss'</connectionInitSql><writeHost host="hostM1" url="jdbc:oracle:thin:@127.0.0.1:1521:nange" user="base" password="123456" > </writeHost> </dataHost>
<dataHost name="jdbchost" maxCon="1000" minCon="1" balance="0" writeType="0" dbType="mongodb" dbDriver="jdbc"><heartbeat>select user()</heartbeat><writeHost host="hostM" url="mongodb://192.168.0.99/test" user="admin" password="123456" ></writeHost> </dataHost>
<dataHost name="sparksql" maxCon="1000" minCon="1" balance="0" dbType="spark" dbDriver="jdbc"><heartbeat> </heartbeat> <writeHost host="hostM1" url="jdbc:hive2://feng01:10000" user="jifeng" password="jifeng"></writeHost> </dataHost> -->
<!-- <dataHost name="jdbchost" maxCon="1000" minCon="10" balance="0" dbType="mysql"dbDriver="jdbc"> <heartbeat>select user()</heartbeat> <writeHost host="hostM1"url="jdbc:mysql://localhost:3306" user="root" password="123456"> </writeHost></dataHost> --></mycat:schema>


rule.xml

<?xml version="1.0" encoding="UTF-8"?><!-- - - Licensed under the Apache License, Version 2.0 (the "License");- you may not use this file except in compliance with the License. - Youmay obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0- - Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, - WITHOUTWARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See theLicense for the specific language governing permissions and - limitationsunder the License. --><!DOCTYPE mycat:rule SYSTEM "rule.dtd"><mycat:rule xmlns:mycat="http://io.mycat/"><tableRule name="rule1"><rule><columns>id</columns><algorithm>func1</algorithm></rule></tableRule>
<tableRule name="rule2"><rule><columns>user_id</columns><algorithm>func1</algorithm></rule></tableRule>
<tableRule name="sharding-by-intfile"><rule><columns>sharding_id</columns><algorithm>hash-int</algorithm></rule></tableRule><tableRule name="auto-sharding-long"><rule><columns>id</columns><algorithm>rang-long</algorithm></rule></tableRule><tableRule name="mod-long"><rule><columns>id</columns><algorithm>mod-long</algorithm></rule></tableRule><tableRule name="sharding-by-murmur"><rule><columns>id</columns><algorithm>murmur</algorithm></rule></tableRule><tableRule name="crc32slot"><rule><columns>id</columns><algorithm>crc32slot</algorithm></rule></tableRule><tableRule name="sharding-by-month"><rule><columns>create_time</columns><algorithm>partbymonth</algorithm></rule></tableRule><tableRule name="latest-month-calldate"><rule><columns>calldate</columns><algorithm>latestMonth</algorithm></rule></tableRule>
<tableRule name="auto-sharding-rang-mod"><rule><columns>id</columns><algorithm>rang-mod</algorithm></rule></tableRule>
<tableRule name="jch"><rule><columns>id</columns><algorithm>jump-consistent-hash</algorithm></rule></tableRule>
<function name="murmur"class="io.mycat.route.function.PartitionByMurmurHash"><property name="seed">0</property><!-- 默认是0 --><property name="count">2</property><!-- 要分片的数据库节点数量,必须指定,否则没法分片 --><property name="virtualBucketTimes">160</property><!-- 一个实际的数据库节点被映射为这么多虚拟节点,默认是160倍,也就是虚拟节点数是物理节点数的160倍 --><!-- <property name="weightMapFile">weightMapFile</property> 节点的权重,没有指定权重的节点默认是1。以properties文件的格式填写,以从0开始到count-1的整数值也就是节点索引为key,以节点权重值为值。所有权重值必须是正整数,否则以1代替 --><!-- <property name="bucketMapPath">/etc/mycat/bucketMapPath</property>用于测试时观察各物理节点与虚拟节点的分布情况,如果指定了这个属性,会把虚拟节点的murmur hash值与物理节点的映射按行输出到这个文件,没有默认值,如果不指定,就不会输出任何东西 --></function>
<function name="crc32slot" class="io.mycat.route.function.PartitionByCRC32PreSlot"><property name="count">2</property><!-- 要分片的数据库节点数量,必须指定,否则没法分片 --></function><function name="hash-int"class="io.mycat.route.function.PartitionByFileMap"><property name="mapFile">partition-hash-int.txt</property></function><function name="rang-long"class="io.mycat.route.function.AutoPartitionByLong"><property name="mapFile">autopartition-long.txt</property></function><function name="mod-long" class="io.mycat.route.function.PartitionByMod"><!-- how many data nodes --><property name="count">3</property></function>
<function name="func1" class="io.mycat.route.function.PartitionByLong"><property name="partitionCount">8</property><property name="partitionLength">128</property></function><function name="latestMonth"class="io.mycat.route.function.LatestMonthPartion"><property name="splitOneDay">24</property></function><function name="partbymonth"class="io.mycat.route.function.PartitionByMonth"><property name="dateFormat">yyyy-MM-dd</property><property name="sBeginDate">2015-01-01</property></function>
<function name="rang-mod" class="io.mycat.route.function.PartitionByRangeMod"> <property name="mapFile">partition-range-mod.txt</property></function>
<function name="jump-consistent-hash" class="io.mycat.route.function.PartitionByJumpConsistentHash"><property name="totalBuckets">3</property></function></mycat:rule>


Testing

Connect to MyCat with Navicat

Start MyCat with ./mycat start, then connect to it with Navicat 12 (the MyCat service listens on port 8066 by default, using the root user defined in server.xml).

Click Test Connection; the connection succeeds.

Query

Test data had been inserted in advance; a command-line check of the sharding is sketched below.
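If you want to double-check the sharding without Navicat, compare what MyCat returns with what the physical databases hold. A minimal sketch: the first statement goes through the MyCat connection on port 8066, the others run directly against the backend MySQL.

-- Through MyCat: the logical table spans all three shards
SELECT COUNT(*) FROM dust_detection;

-- Directly on the MySQL master (10.201.4.70): each physical database holds only its share
SELECT COUNT(*) FROM dustdetection1.dust_detection;
SELECT COUNT(*) FROM dustdetection2.dust_detection;
SELECT COUNT(*) FROM dustdetection3.dust_detection;
-- The three counts should add up to the count returned by MyCat.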
