Building a k8s Cluster from Scratch: Deploying a Highly Available Distributed Transaction Service (Seata) with the KubeSphere Management Platform
Posted by 北溟溟
Preface
Seata is an open-source distributed transaction solution dedicated to providing high-performance, easy-to-use distributed transaction services.
Seata offers the AT, TCC, SAGA and XA transaction modes, giving users a one-stop distributed transaction solution (as the official documentation puts it; see https://seata.io/zh-cn/docs/overview/what-is-seata.html). In this article we use the KubeSphere platform to build a highly available Seata deployment on Kubernetes. For high availability, Nacos serves as Seata's registry and configuration center, and MySQL provides Seata's db-mode storage, so both the Nacos and MySQL services need to be in place first. Setting them up is covered in my series "Building a k8s Cluster from Scratch" (《从零开始搭建k8s集群》) and is not repeated here. Without further ado, let's get started.
Main Content
① Create the database and tables required by the Seata server
- For setting up MySQL under KubeSphere, see my earlier blog post.
- The script for the database and tables backing Seata's db storage mode is shown below.
Reference: https://github.com/seata/seata/blob/develop/script/server/db/mysql.sql
-- -------------------------------- The script used when storeMode is 'db' --------------------------------
-- the table to store GlobalSession data
CREATE TABLE IF NOT EXISTS `global_table`
(
    `xid`                       VARCHAR(128) NOT NULL,
    `transaction_id`            BIGINT,
    `status`                    TINYINT      NOT NULL,
    `application_id`            VARCHAR(32),
    `transaction_service_group` VARCHAR(32),
    `transaction_name`          VARCHAR(128),
    `timeout`                   INT,
    `begin_time`                BIGINT,
    `application_data`          VARCHAR(2000),
    `gmt_create`                DATETIME,
    `gmt_modified`              DATETIME,
    PRIMARY KEY (`xid`),
    KEY `idx_status_gmt_modified` (`status`, `gmt_modified`),
    KEY `idx_transaction_id` (`transaction_id`)
) ENGINE = InnoDB DEFAULT CHARSET = utf8mb4;

-- the table to store BranchSession data
CREATE TABLE IF NOT EXISTS `branch_table`
(
    `branch_id`         BIGINT       NOT NULL,
    `xid`               VARCHAR(128) NOT NULL,
    `transaction_id`    BIGINT,
    `resource_group_id` VARCHAR(32),
    `resource_id`       VARCHAR(256),
    `branch_type`       VARCHAR(8),
    `status`            TINYINT,
    `client_id`         VARCHAR(64),
    `application_data`  VARCHAR(2000),
    `gmt_create`        DATETIME(6),
    `gmt_modified`      DATETIME(6),
    PRIMARY KEY (`branch_id`),
    KEY `idx_xid` (`xid`)
) ENGINE = InnoDB DEFAULT CHARSET = utf8mb4;

-- the table to store lock data
CREATE TABLE IF NOT EXISTS `lock_table`
(
    `row_key`        VARCHAR(128) NOT NULL,
    `xid`            VARCHAR(128),
    `transaction_id` BIGINT,
    `branch_id`      BIGINT       NOT NULL,
    `resource_id`    VARCHAR(256),
    `table_name`     VARCHAR(32),
    `pk`             VARCHAR(36),
    `status`         TINYINT      NOT NULL DEFAULT '0' COMMENT '0:locked ,1:rollbacking',
    `gmt_create`     DATETIME,
    `gmt_modified`   DATETIME,
    PRIMARY KEY (`row_key`),
    KEY `idx_status` (`status`),
    KEY `idx_branch_id` (`branch_id`)
) ENGINE = InnoDB DEFAULT CHARSET = utf8mb4;

CREATE TABLE IF NOT EXISTS `distributed_lock`
(
    `lock_key`   CHAR(20)    NOT NULL,
    `lock_value` VARCHAR(20) NOT NULL,
    `expire`     BIGINT,
    PRIMARY KEY (`lock_key`)
) ENGINE = InnoDB DEFAULT CHARSET = utf8mb4;

INSERT INTO `distributed_lock` (lock_key, lock_value, expire) VALUES ('HandleAllSession', ' ', 0);
- Connect to the database and run the script to create the seata database and its tables. A note on the client-side undo_log table follows below.
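The script above is everything the Seata server itself needs, but if you plan to use the AT transaction mode, every business database that takes part in a global transaction also needs a client-side undo_log table. The sketch below creates it from inside the cluster; the pod name app-mysql-0, the namespace app, the root/root credentials and the database name your_business_db are assumptions (taken from the MySQL setup in this series or purely illustrative), so adjust them to your environment. The table definition is based on the AT-mode client script shipped in the Seata repository.

# Hypothetical pod name, namespace, credentials and business database -- adjust to your own deployment.
kubectl -n app exec -i app-mysql-0 -- mysql -uroot -proot <<'SQL'
USE your_business_db;
-- undo_log lives in each business database, not in the seata database created above
CREATE TABLE IF NOT EXISTS `undo_log`
(
    `branch_id`     BIGINT       NOT NULL COMMENT 'branch transaction id',
    `xid`           VARCHAR(128) NOT NULL COMMENT 'global transaction id',
    `context`       VARCHAR(128) NOT NULL COMMENT 'undo_log context, such as serialization',
    `rollback_info` LONGBLOB     NOT NULL COMMENT 'rollback info',
    `log_status`    INT(11)      NOT NULL COMMENT '0:normal status, 1:defense status',
    `log_created`   DATETIME(6)  NOT NULL COMMENT 'create datetime',
    `log_modified`  DATETIME(6)  NOT NULL COMMENT 'modify datetime',
    UNIQUE KEY `ux_undo_log` (`xid`, `branch_id`)
) ENGINE = InnoDB DEFAULT CHARSET = utf8mb4 COMMENT = 'AT transaction mode undo table';
SQL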
② Set up the Nacos dependency that serves as Seata's registry and configuration center
- Creating the Nacos service is covered in my earlier posts and is not repeated here.
- Create the SEATA_GROUP namespace for Seata's configuration center in Nacos (a command-line sketch follows).
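If you prefer the Nacos HTTP API over the console UI, the namespace can be created with a single request, as sketched below. Keep in mind that a namespace and a group are different things in Nacos: in the registry.conf shown later the namespace field is left empty (the public namespace) and SEATA_GROUP is used as the group, so whatever namespace you create here has to match what you reference there. The address app-nacos.app:8848 is the in-cluster Nacos service assumed throughout this series, and the custom namespace id seata is an arbitrary example.

# Create a namespace via the Nacos console API (sketch; adjust host, id and description).
curl -X POST "http://app-nacos.app:8848/nacos/v1/console/namespaces" \
  -d "customNamespaceId=seata" \
  -d "namespaceName=SEATA_GROUP" \
  -d "namespaceDesc=namespace for the seata server"

# List namespaces to confirm the result.
curl -s "http://app-nacos.app:8848/nacos/v1/console/namespaces"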
- The complete original Seata configuration template is shown below.
#For details about configuration items, see https://seata.io/zh-cn/docs/user/configurations.html
#Transport configuration, for client and server
transport.type=TCP
transport.server=NIO
transport.heartbeat=true
transport.enableTmClientBatchSendRequest=false
transport.enableRmClientBatchSendRequest=true
transport.enableTcServerBatchSendResponse=false
transport.rpcRmRequestTimeout=30000
transport.rpcTmRequestTimeout=30000
transport.rpcTcRequestTimeout=30000
transport.threadFactory.bossThreadPrefix=NettyBoss
transport.threadFactory.workerThreadPrefix=NettyServerNIOWorker
transport.threadFactory.serverExecutorThreadPrefix=NettyServerBizHandler
transport.threadFactory.shareBossWorker=false
transport.threadFactory.clientSelectorThreadPrefix=NettyClientSelector
transport.threadFactory.clientSelectorThreadSize=1
transport.threadFactory.clientWorkerThreadPrefix=NettyClientWorkerThread
transport.threadFactory.bossThreadSize=1
transport.threadFactory.workerThreadSize=default
transport.shutdown.wait=3
transport.serialization=seata
transport.compressor=none

#Transaction routing rules configuration, only for the client
#Transaction group name
service.vgroupMapping.default_tx_group=default
#If you use a registry, you can ignore it
service.default.grouplist=127.0.0.1:8091
service.enableDegrade=false
service.disableGlobalTransaction=false

#Transaction rule configuration, only for the client
client.rm.asyncCommitBufferLimit=10000
client.rm.lock.retryInterval=10
client.rm.lock.retryTimes=30
client.rm.lock.retryPolicyBranchRollbackOnConflict=true
client.rm.reportRetryCount=5
client.rm.tableMetaCheckEnable=false
client.rm.tableMetaCheckerInterval=60000
client.rm.sqlParserType=druid
client.rm.reportSuccessEnable=false
client.rm.sagaBranchRegisterEnable=false
client.rm.sagaJsonParser=fastjson
client.rm.tccActionInterceptorOrder=-2147482648
client.tm.commitRetryCount=5
client.tm.rollbackRetryCount=5
client.tm.defaultGlobalTransactionTimeout=60000
client.tm.degradeCheck=false
client.tm.degradeCheckAllowTimes=10
client.tm.degradeCheckPeriod=2000
client.tm.interceptorOrder=-2147482648
client.undo.dataValidation=true
client.undo.logSerialization=jackson
client.undo.onlyCareUpdateColumns=true
server.undo.logSaveDays=7
server.undo.logDeletePeriod=86400000
client.undo.logTable=undo_log
client.undo.compress.enable=true
client.undo.compress.type=zip
client.undo.compress.threshold=64k

#For TCC transaction mode
tcc.fence.logTableName=tcc_fence_log
tcc.fence.cleanPeriod=1h

#Log rule configuration, for client and server
log.exceptionRate=100

#Transaction storage configuration, only for the server. The file, DB, and redis configuration values are optional.
store.mode=file
store.lock.mode=file
store.session.mode=file
#Used for password encryption
store.publicKey=

#If `store.mode,store.lock.mode,store.session.mode` are not equal to `file`, you can remove the configuration block.
store.file.dir=file_store/data
store.file.maxBranchSessionSize=16384
store.file.maxGlobalSessionSize=512
store.file.fileWriteBufferCacheSize=16384
store.file.flushDiskMode=async
store.file.sessionReloadReadSize=100

#These configurations are required if the `store mode` is `db`. If `store.mode,store.lock.mode,store.session.mode` are not equal to `db`, you can remove the configuration block.
store.db.datasource=druid
store.db.dbType=mysql
store.db.driverClassName=com.mysql.jdbc.Driver
store.db.url=jdbc:mysql://127.0.0.1:3306/seata?useUnicode=true&rewriteBatchedStatements=true
store.db.user=username
store.db.password=password
store.db.minConn=5
store.db.maxConn=30
store.db.globalTable=global_table
store.db.branchTable=branch_table
store.db.distributedLockTable=distributed_lock
store.db.queryLimit=100
store.db.lockTable=lock_table
store.db.maxWait=5000

#These configurations are required if the `store mode` is `redis`. If `store.mode,store.lock.mode,store.session.mode` are not equal to `redis`, you can remove the configuration block.
store.redis.mode=single
store.redis.single.host=127.0.0.1
store.redis.single.port=6379
store.redis.sentinel.masterName=
store.redis.sentinel.sentinelHosts=
store.redis.maxConn=10
store.redis.minConn=1
store.redis.maxTotal=100
store.redis.database=0
store.redis.password=
store.redis.queryLimit=100

#Transaction rule configuration, only for the server
server.recovery.committingRetryPeriod=1000
server.recovery.asynCommittingRetryPeriod=1000
server.recovery.rollbackingRetryPeriod=1000
server.recovery.timeoutRetryPeriod=1000
server.maxCommitRetryTimeout=-1
server.maxRollbackRetryTimeout=-1
server.rollbackRetryTimeoutUnlockEnable=false
server.distributedLockExpireTime=10000
server.session.branchAsyncQueueSize=5000
server.session.enableBranchAsyncRemove=true

#Metrics configuration, only for the server
metrics.enabled=false
metrics.registryType=compact
metrics.exporterList=prometheus
metrics.exporterPrometheusPort=9898
- The configuration needs to be switched to db storage mode; the modified version follows, with the banner comments marking the places that were changed. One hedged note: store.db.driverClassName=com.mysql.jdbc.Driver is the MySQL 5.x driver class; if your MySQL is 8.x you may need com.mysql.cj.jdbc.Driver instead, depending on the JDBC driver bundled with the seata-server image.
#For details about configuration items, see https://seata.io/zh-cn/docs/user/configurations.html
#Transport configuration, for client and server
transport.type=TCP
transport.server=NIO
transport.heartbeat=true
transport.enableTmClientBatchSendRequest=false
transport.enableRmClientBatchSendRequest=true
transport.enableTcServerBatchSendResponse=false
transport.rpcRmRequestTimeout=30000
transport.rpcTmRequestTimeout=30000
transport.rpcTcRequestTimeout=30000
transport.threadFactory.bossThreadPrefix=NettyBoss
transport.threadFactory.workerThreadPrefix=NettyServerNIOWorker
transport.threadFactory.serverExecutorThreadPrefix=NettyServerBizHandler
transport.threadFactory.shareBossWorker=false
transport.threadFactory.clientSelectorThreadPrefix=NettyClientSelector
transport.threadFactory.clientSelectorThreadSize=1
transport.threadFactory.clientWorkerThreadPrefix=NettyClientWorkerThread
transport.threadFactory.bossThreadSize=1
transport.threadFactory.workerThreadSize=default
transport.shutdown.wait=3
transport.serialization=seata
transport.compressor=none

#Transaction routing rules configuration, only for the client
############################# Transaction group name #############################
service.vgroupMapping.my_test_tx_group=default
#If you use a registry, you can ignore it
service.default.grouplist=127.0.0.1:8091
service.enableDegrade=false
service.disableGlobalTransaction=false

#Transaction rule configuration, only for the client
client.rm.asyncCommitBufferLimit=10000
client.rm.lock.retryInterval=10
client.rm.lock.retryTimes=30
client.rm.lock.retryPolicyBranchRollbackOnConflict=true
client.rm.reportRetryCount=5
client.rm.tableMetaCheckEnable=false
client.rm.tableMetaCheckerInterval=60000
client.rm.sqlParserType=druid
client.rm.reportSuccessEnable=false
client.rm.sagaBranchRegisterEnable=false
client.rm.sagaJsonParser=fastjson
client.rm.tccActionInterceptorOrder=-2147482648
client.tm.commitRetryCount=5
client.tm.rollbackRetryCount=5
client.tm.defaultGlobalTransactionTimeout=60000
client.tm.degradeCheck=false
client.tm.degradeCheckAllowTimes=10
client.tm.degradeCheckPeriod=2000
client.tm.interceptorOrder=-2147482648
client.undo.dataValidation=true
client.undo.logSerialization=jackson
client.undo.onlyCareUpdateColumns=true
server.undo.logSaveDays=7
server.undo.logDeletePeriod=86400000
client.undo.logTable=undo_log
client.undo.compress.enable=true
client.undo.compress.type=zip
client.undo.compress.threshold=64k

#For TCC transaction mode
tcc.fence.logTableName=tcc_fence_log
tcc.fence.cleanPeriod=1h

#Log rule configuration, for client and server
log.exceptionRate=100

############################# Changed to db store mode #############################
#Transaction storage configuration, only for the server. The file, DB, and redis configuration values are optional.
store.mode=db
store.lock.mode=db
store.session.mode=db
#Used for password encryption
store.publicKey=

#If `store.mode,store.lock.mode,store.session.mode` are not equal to `file`, you can remove the configuration block.
store.file.dir=file_store/data
store.file.maxBranchSessionSize=16384
store.file.maxGlobalSessionSize=512
store.file.fileWriteBufferCacheSize=16384
store.file.flushDiskMode=async
store.file.sessionReloadReadSize=100

############################# Changed to the MySQL address inside KubeSphere #############################
#These configurations are required if the `store mode` is `db`. If `store.mode,store.lock.mode,store.session.mode` are not equal to `db`, you can remove the configuration block.
store.db.datasource=druid
store.db.dbType=mysql
store.db.driverClassName=com.mysql.jdbc.Driver
store.db.url=jdbc:mysql://app-mysql.app:3306/seata?useUnicode=true&rewriteBatchedStatements=true
store.db.user=root
store.db.password=root
store.db.minConn=5
store.db.maxConn=30
store.db.globalTable=global_table
store.db.branchTable=branch_table
store.db.distributedLockTable=distributed_lock
store.db.queryLimit=100
store.db.lockTable=lock_table
store.db.maxWait=5000

#These configurations are required if the `store mode` is `redis`. If `store.mode,store.lock.mode,store.session.mode` are not equal to `redis`, you can remove the configuration block.
store.redis.mode=single
store.redis.single.host=127.0.0.1
store.redis.single.port=6379
store.redis.sentinel.masterName=
store.redis.sentinel.sentinelHosts=
store.redis.maxConn=10
store.redis.minConn=1
store.redis.maxTotal=100
store.redis.database=0
store.redis.password=
store.redis.queryLimit=100

#Transaction rule configuration, only for the server
server.recovery.committingRetryPeriod=1000
server.recovery.asynCommittingRetryPeriod=1000
server.recovery.rollbackingRetryPeriod=1000
server.recovery.timeoutRetryPeriod=1000
server.maxCommitRetryTimeout=-1
server.maxRollbackRetryTimeout=-1
server.rollbackRetryTimeoutUnlockEnable=false
server.distributedLockExpireTime=10000
server.session.branchAsyncQueueSize=5000
server.session.enableBranchAsyncRemove=true

#Metrics configuration, only for the server
metrics.enabled=false
metrics.registryType=compact
metrics.exporterList=prometheus
metrics.exporterPrometheusPort=9898
- In the SEATA_GROUP namespace in Nacos, create the configuration entry for the Seata server; note that its dataId must match the dataId referenced in registry.conf (seataServer.properties in the configuration shown later).
- The modified sections are the ones marked by the banner comments above; an API-based alternative to pasting them into the console follows.
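The sketch below publishes the modified configuration through the Nacos Open API instead of the console UI. It assumes the properties above were saved locally as seataServer.properties and that the configuration lives in the public namespace; if you created a dedicated namespace, add a tenant parameter carrying that namespace's ID.

# Publish seataServer.properties to Nacos (group SEATA_GROUP, public namespace).
# --data-urlencode "content@file" URL-encodes the whole local file as the 'content' parameter.
curl -X POST "http://app-nacos.app:8848/nacos/v1/cs/configs" \
  --data-urlencode "dataId=seataServer.properties" \
  --data-urlencode "group=SEATA_GROUP" \
  --data-urlencode "content@seataServer.properties"

# Read the configuration back to verify it was stored.
curl -s "http://app-nacos.app:8848/nacos/v1/cs/configs?dataId=seataServer.properties&group=SEATA_GROUP"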
③ Create the Seata service
- Go to Configuration Center -> Configurations -> Create and fill in the basic information for the seata configuration (a ConfigMap).
- Click Next and add the key-value pair.
- The original Seata configuration file registry.conf:
registry {
  # file 、nacos 、eureka、redis、zk、consul、etcd3、sofa
  type = "file"

  nacos {
    application = "seata-server"
    serverAddr = "127.0.0.1:8848"
    group = "SEATA_GROUP"
    namespace = ""
    cluster = "default"
    username = ""
    password = ""
  }
  eureka {
    serviceUrl = "http://localhost:8761/eureka"
    application = "default"
    weight = "1"
  }
  redis {
    serverAddr = "localhost:6379"
    db = 0
    password = ""
    cluster = "default"
    timeout = 0
  }
  zk {
    cluster = "default"
    serverAddr = "127.0.0.1:2181"
    sessionTimeout = 6000
    connectTimeout = 2000
    username = ""
    password = ""
  }
  consul {
    cluster = "default"
    serverAddr = "127.0.0.1:8500"
    aclToken = ""
  }
  etcd3 {
    cluster = "default"
    serverAddr = "http://localhost:2379"
  }
  sofa {
    serverAddr = "127.0.0.1:9603"
    application = "default"
    region = "DEFAULT_ZONE"
    datacenter = "DefaultDataCenter"
    cluster = "default"
    group = "SEATA_GROUP"
    addressWaitTime = "3000"
  }
  file {
    name = "file.conf"
  }
}

config {
  # file、nacos 、apollo、zk、consul、etcd3
  type = "file"

  nacos {
    serverAddr = "127.0.0.1:8848"
    namespace = ""
    group = "SEATA_GROUP"
    username = ""
    password = ""
    dataId = "seataServer.properties"
  }
  consul {
    serverAddr = "127.0.0.1:8500"
    aclToken = ""
  }
  apollo {
    appId = "seata-server"
    ## apolloConfigService will cover apolloMeta
    apolloMeta = "http://192.168.1.204:8801"
    apolloConfigService = "http://192.168.1.204:8080"
    namespace = "application"
    apolloAccesskeySecret = ""
    cluster = "seata"
  }
  zk {
    serverAddr = "127.0.0.1:2181"
    sessionTimeout = 6000
    connectTimeout = 2000
    username = ""
    password = ""
    nodePath = "/seata/seata.properties"
  }
  etcd3 {
    serverAddr = "http://localhost:2379"
  }
  file {
    name = "file.conf"
  }
}
- Modify registry.conf so that both the registry and the configuration center use Nacos:
registry {
  type = "nacos"

  nacos {
    application = "seata-server"
    serverAddr = "app-nacos.app:8848"
    group = "SEATA_GROUP"
    namespace = ""
    cluster = "default"
    username = ""
    password = ""
  }
}

config {
  type = "nacos"

  nacos {
    serverAddr = "app-nacos.app:8848"
    namespace = ""
    group = "SEATA_GROUP"
    username = ""
    password = ""
    dataId = "seataServer.properties"
  }
}
- Add the configuration data and click the check mark to save it.
- Click Create to finish creating the seata configuration (an equivalent kubectl command is shown below).
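If you keep the nacos-mode registry.conf above in a local file, the same configuration can be created from the command line instead of the KubeSphere console. The ConfigMap name seata and the namespace app are assumptions and must match whatever you select when mounting the configuration into the workload in the next step.

# Save the nacos-mode registry.conf above as ./registry.conf, then create a ConfigMap from it.
kubectl -n app create configmap seata --from-file=registry.conf=./registry.conf

# Inspect the result.
kubectl -n app get configmap seata -o yaml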
- Go to Application Workloads -> Services -> Create and create a stateful seata service.
- Fill in the basic information for seata and click Next.
- Set the number of container replicas to 3, click Add Container and use the image seataio/seata-server:1.4.2.
- Fill in the port mapping (the Seata server listens on port 8091 by default), check Synchronize Host Timezone, click the check mark to add the container, then click Next.
- Choose Mount Volumes to mount the configuration file (ConfigMap) or secret.
- Configure the seata configuration file mount.
- Select the specific key and path: mount the registry.conf key so that it overrides the registry.conf inside the image (for the 1.4.x seata-server image this usually lives under /seata-server/resources), then click Next.
- Click Create to finish creating the seata service.
- The seata service has been created. A manifest-based equivalent of the UI steps above is sketched below.
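For readers who would rather apply a manifest than click through the console, the following is a rough sketch of what the UI steps produce: a headless Service plus a 3-replica StatefulSet running seataio/seata-server:1.4.2 with the registry.conf from the ConfigMap mounted over the one inside the image. All names (seata, namespace app, the ConfigMap key) and the mount path /seata-server/resources/registry.conf are assumptions based on the 1.4.2 image layout; verify where registry.conf actually lives in your image before relying on it.

kubectl apply -n app -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: seata
  labels:
    app: seata
spec:
  clusterIP: None          # headless service backing the StatefulSet
  selector:
    app: seata
  ports:
    - name: seata-server
      port: 8091
      targetPort: 8091
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: seata
spec:
  serviceName: seata
  replicas: 3
  selector:
    matchLabels:
      app: seata
  template:
    metadata:
      labels:
        app: seata
    spec:
      containers:
        - name: seata-server
          image: seataio/seata-server:1.4.2
          ports:
            - containerPort: 8091
          volumeMounts:
            - name: seata-config
              mountPath: /seata-server/resources/registry.conf   # assumed config location in the 1.4.2 image
              subPath: registry.conf
      volumes:
        - name: seata-config
          configMap:
            name: seata    # the ConfigMap created earlier, containing a registry.conf key
EOF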
④ Verify the Seata service
- Exec into a container (or check its logs) to confirm that the Seata server started correctly; command-line checks are sketched after the next step.
- In the Nacos registry, confirm that the seata-server service is listed with three healthy instances; if so, Seata has been deployed successfully. See the sketch below for both checks.
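A quick command-line version of the two checks above. The pod name seata-0 follows the StatefulSet naming from the sketch earlier and is an assumption; if you created the workload through the KubeSphere wizard, list the pods first to find the actual names. The service name seata-server and the group SEATA_GROUP come from registry.conf.

# 1. Find the pods and check one replica's log; look for a startup line reporting listen port 8091.
kubectl -n app get pods -l app=seata
kubectl -n app logs seata-0 --tail=50

# 2. Ask Nacos for the registered instances of seata-server; expect three healthy instances.
curl -s "http://app-nacos.app:8848/nacos/v1/ns/instance/list?serviceName=seata-server&groupName=SEATA_GROUP"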
Conclusion
That wraps up building a highly available Seata distributed transaction service with the KubeSphere management platform. Don't forget to like, follow and bookmark. See you next time.