How to deploy DataEase offline on CentOS 7


Deploying DataEase offline on CentOS 7 involves the following steps: 1. prepare the installation package; 2. extract it; 3. run the installer; 4. set the required system environment variables; 5. review the usage notes; 6. restart the system. A more concrete walkthrough:
1. Download the DataEase installation package and copy it to the CentOS 7 server.
2. Extract the package and run ./setup.sh to start the installation.
3. Enter a username and password in the console to create the DataEase account.
4. Install the database: choose the database type, enter the database connection details, and proceed with the installation.
5. Install the DataEase web service, entering the web service URL and port.
6. After installation completes, open the URL in a browser to reach the DataEase management console and start using DataEase.
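As a rough shell sketch of the steps above, for orientation only: the package file name and the extracted directory name are assumed placeholders, the real archive name depends on the DataEase version you downloaded, and setup.sh is the installer named in step 2.

# 1. Copy the downloaded offline package to the server (file name is an assumed example)
# scp dataease-offline-installer.tar.gz root@<server-ip>:/opt/
cd /opt
# 2. Extract the installation package and run the installer from inside it
tar -zxvf dataease-offline-installer.tar.gz
cd dataease-offline-installer
./setup.sh
# 3-6. Follow the console prompts: create the DataEase account, configure the database
#      and the web service URL/port, then open the URL in a browser after installation.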

OceanBase Community Edition 3.1.0 three-node offline deployment

This article describes how to deploy OceanBase Community Edition offline.

Environment information:

Role     | Hostname   | IP          | OS         | OB directory   | Ports       | CPU | Memory | Disk
observer | oceanbase1 | 11.114.0.20 | CentOS 7.5 | /data/observer | [2881,2882] | 8C  | 16G    | 50G
observer | oceanbase2 | 11.114.0.5  | CentOS 7.5 | /data/observer | [2881,2882] | 8C  | 16G    | 50G
observer | oceanbase3 | 11.114.0.6  | CentOS 7.5 | /data/observer | [2881,2882] | 8C  | 16G    | 50G
obproxy  | oceanbase1 | 11.114.0.20 | CentOS 7.5 | /data/obproxy  | [2883,2884] | 8C  | 16G    | 50G

Note: allocate enough memory; if memory is too small, the observer will fail to start or the bootstrap will fail.
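A quick way to confirm each host meets these figures before installing; these are plain CentOS commands, nothing OceanBase-specific:

# Run on every node
nproc     # logical CPUs, expect at least 8
free -g   # memory in GB, expect at least 16
df -h     # disk space; the filesystem that will hold /data should have at least 50G free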

Software download:

Official download page: https://www.oceanbase.com/softwareCenter/community

[Screenshot]

Preparation (as the root user)

1. Disable SELinux

# Disable temporarily
setenforce 0
getenforce
# Keep SELinux disabled at boot; this requires a reboot to take effect. Since it has already been disabled for this session, no reboot is needed now.
sed -i s/=enforcing/=disabled/g /etc/selinux/config
# Verify the change
cat /etc/selinux/config

2. Disable firewalld

# Stop the firewall
systemctl stop firewalld
# Do not start the firewall at boot
systemctl disable firewalld
# Check the firewall status
systemctl status firewalld

3. Set the hostnames

# Set the hostname (run the matching command on its own node). It takes effect immediately; log out and back in to see the new hostname in the prompt.
hostnamectl set-hostname oceanbase1
hostnamectl set-hostname oceanbase2
hostnamectl set-hostname oceanbase3

4. Configure hostname resolution

# Add the host entries for name resolution
cat >> /etc/hosts << EOF
11.114.0.20 oceanbase1
11.114.0.5 oceanbase2
11.114.0.6 oceanbase3
EOF
# Check the entries
cat /etc/hosts

5. Configure kernel parameters

The OceanBase database is single-process software; it needs network access, opens a large number of files, and maintains many TCP connections, so the kernel parameters and user session settings have to be adjusted.

Note: if OBProxy is deployed on its own server, initialize that server's environment to the same requirements.

# Append the following settings
cat >> /etc/sysctl.conf <<EOF
net.core.somaxconn = 2048
net.core.netdev_max_backlog = 10000
net.core.rmem_default = 16777216
net.core.wmem_default = 16777216
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216

net.ipv4.ip_local_port_range = 3500 65535
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.tcp_syncookies = 0
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_slow_start_after_idle=0

vm.swappiness = 0
vm.min_free_kbytes = 2097152
vm.max_map_count=655360
fs.aio-max-nr=1048576
EOF
# Apply the settings
sysctl -p

6. Adjust session variable limits

Session limits are adjusted through limits.conf. The limits relevant to the OceanBase database process are the maximum thread stack size (Stack), the maximum number of file handles (Open Files), and the core file size (Core File Size).

# Append the following settings
cat >> /etc/security/limits.conf << EOF
* soft nofile 655360
* hard nofile 655360
* soft nproc 655360
* hard nproc 655360
* soft core unlimited
* hard core unlimited
* soft stack unlimited
* hard stack unlimited
EOF
# Log out of the current session and log back in for the settings to take effect.
# Check the current open-files value; it should be 655360, otherwise starting the cluster later will fail.
ulimit -n

7. Time synchronization service [optional]

# Use oceanbase1 as the primary clock source. Note that the server IPs below were carried over from the reference environment; substitute the actual IP of your primary node (11.114.0.20 here).
[root@oceanbase1 ~]# yum install -y ntp
[root@oceanbase1 ~]# vi /etc/ntp.conf
server 127.127.1.0 iburst
systemctl restart ntpd.service

[root@oceanbase2 ~]# vi /etc/ntp.conf
server 192.168.43.89
restrict 192.168.43.89 mask 255.255.240.0 nomodify notrap
systemctl restart ntpd.service
ntpdate -u 192.168.43.89

[root@oceanbase3 ~]# vi /etc/ntp.conf
server 192.168.52.183
restrict 192.168.52.183 mask 255.255.240.0 nomodify notrap
systemctl restart ntpd.service
ntpdate -u 192.168.52.183
# Run the following to verify the configuration (the official documentation also covers clock source setup):
[root@oceanbase2 ~]# ntpdate -u 192.168.43.89
6 Aug 20:40:00 ntpdate[5211]: adjust time server 192.168.43.89 offset -0.003421 sec
[root@oceanbase2 ~]# ntpstat
unsynchronised
polling server every 8 s
[root@oceanbase2 ~]# timedatectl
Local time: Fri 2021-08-06 20:40:27 CST
Universal time: Fri 2021-08-06 12:40:27 UTC
RTC time: Fri 2021-08-06 12:40:24
Time zone: Asia/Shanghai (CST, +0800)
NTP enabled: yes
NTP synchronized: no
RTC in local TZ: no
DST active: n/a
# This shows the NTP service is in effect.

8. Create the installation user

It is recommended to install and run the software as a non-root user; the examples below use the user admin. Note: granting admin sudo privileges is not required, it is merely convenient at times; decide according to your organization's security policy whether to do so.

# Add the admin user
useradd admin
# Set the user's password
passwd admin
# Or set the password non-interactively with the command below; change the password to your own.
echo admin:adminPWD123 | chpasswd

# Install sudo if it is not present
yum install -y sudo

# Option 1: add admin to the wheel group.
[root@obce00 ~]# usermod admin -G wheel
[root@obce00 ~]# id admin
uid=1000(admin) gid=1000(admin) groups=1000(admin),10(wheel)


# Option 2: add admin to the /etc/sudoers file
[root@obce00 ~]# cat /etc/sudoers |grep wheel
## Allows people in group wheel to run all commands
%wheel ALL=(ALL) ALL
# %wheel ALL=(ALL) NOPASSWD: ALL

vim /etc/sudoers
## Allow root to run any commands anywhere
admin ALL=(ALL) ALL

9. Create the installation directory

# Run the following on all 3 machines
mkdir /data
chown admin.admin /data

10. Configure passwordless SSH (as admin)

# Run on all 3 machines
ssh-keygen -t rsa
# Copy the key to oceanbase1; run on all 3 machines
ssh-copy-id oceanbase1
# Copy the collected keys to the other two machines
scp /home/admin/.ssh/authorized_keys oceanbase2:/home/admin/.ssh
scp /home/admin/.ssh/authorized_keys oceanbase3:/home/admin/.ssh
# Test passwordless SSH
ssh oceanbase1 date
ssh oceanbase2 date
ssh oceanbase3 date

11. Prepare the YAML file (as admin)

Quick deployment with OBD requires a configuration file. The official template can be copied from https://github.com/oceanbase/obdeploy/blob/master/example/distributed-with-obproxy-example.yaml

vi /home/admin/ob_cluster.yaml

## Only need to configure when remote login is required
# user:
#   username: your username
#   password: your password if need
#   key_file: your ssh-key file path if need
#   port: your ssh port, default 22
#   timeout: ssh connection timeout (second), default 30
oceanbase-ce:
  servers:
    - name: server1
      # Please don't use hostname, only IP can be supported
      ip: 11.114.0.20
    - name: server2
      ip: 11.114.0.5
    - name: server3
      ip: 11.114.0.6
  global:
    # Please set devname as the network adaptor's name whose ip is in the setting of servers.
    # if set servers as "127.0.0.1", please set devname as "lo"
    # if current ip is 192.168.1.10, and the ip's network adaptor's name is "eth0", please use "eth0"
    devname: eth0
    # if current hardware's memory capacity is smaller than 50G, please use the setting of "mini-single-example.yaml" and do a small adjustment.
    memory_limit: 8G # The maximum running memory for an observer
    # The reserved system memory. system_memory is reserved for general tenants. The default value is 30G.
    system_memory: 2G
    datafile_disk_percentage: 20 # The percentage of the data_dir space to the total disk space. This value takes effect only when datafile_size is 0. The default value is 90.
    syslog_level: INFO # System log level. The default value is INFO.
    enable_syslog_wf: false # Print system logs whose levels are higher than WARNING to a separate log file. The default value is true.
    enable_syslog_recycle: true # Enable auto system log recycling or not. The default value is false.
    max_syslog_file_count: 4 # The maximum number of reserved log files before enabling auto recycling. The default value is 0.
    # observer cluster name, consistent with obproxy's cluster_name
    appname: ob_cluster
    # root_password: # root user password, can be empty
    # proxyro_password: # proxyro user password, consistent with obproxy's observer_sys_password, can be empty
  # In this example, support multiple ob process in single node, so different process use different ports.
  # If deploy ob cluster in multiple nodes, the port and path setting can be same.
  server1:
    mysql_port: 2881 # External port for OceanBase Database. The default value is 2881. DO NOT change this value after the cluster is started.
    rpc_port: 2882 # Internal port for OceanBase Database. The default value is 2882. DO NOT change this value after the cluster is started.
    # The working directory for OceanBase Database. OceanBase Database is started under this directory. This is a required field.
    home_path: /data/observer
    # The directory for data storage. The default value is $home_path/store.
    # data_dir: /data
    # The directory for clog, ilog, and slog. The default value is the same as the data_dir value.
    # redo_dir: /redo
    zone: zone1
  server2:
    mysql_port: 2881 # External port for OceanBase Database. The default value is 2881. DO NOT change this value after the cluster is started.
    rpc_port: 2882 # Internal port for OceanBase Database. The default value is 2882. DO NOT change this value after the cluster is started.
    # The working directory for OceanBase Database. OceanBase Database is started under this directory. This is a required field.
    home_path: /data/observer
    # The directory for data storage. The default value is $home_path/store.
    # data_dir: /data
    # The directory for clog, ilog, and slog. The default value is the same as the data_dir value.
    # redo_dir: /redo
    zone: zone2
  server3:
    mysql_port: 2881 # External port for OceanBase Database. The default value is 2881. DO NOT change this value after the cluster is started.
    rpc_port: 2882 # Internal port for OceanBase Database. The default value is 2882. DO NOT change this value after the cluster is started.
    # The working directory for OceanBase Database. OceanBase Database is started under this directory. This is a required field.
    home_path: /data/observer
    # The directory for data storage. The default value is $home_path/store.
    # data_dir: /data
    # The directory for clog, ilog, and slog. The default value is the same as the data_dir value.
    # redo_dir: /redo
    zone: zone3
obproxy:
  # Set dependent components for the component.
  # When the associated configurations are not done, OBD will automatically get these configurations from the dependent components.
  depends:
    - oceanbase-ce
  servers:
    - 11.114.0.20
  global:
    listen_port: 2883 # External port. The default value is 2883.
    prometheus_listen_port: 2884 # The Prometheus port. The default value is 2884.
    home_path: /data/obproxy
    # oceanbase root server list
    # format: ip:mysql_port;ip:mysql_port. When a depends exists, OBD gets this value from the oceanbase-ce of the depends.
    # rs_list: 192.168.1.2:2881;192.168.1.3:2881;192.168.1.4:2881
    enable_cluster_checkout: false
    # observer cluster name, consistent with oceanbase-ce's appname. When a depends exists, OBD gets this value from the oceanbase-ce of the depends.
    # cluster_name: obcluster
    skip_proxy_sys_private_check: true
    # obproxy_sys_password: # obproxy sys user password, can be empty. When a depends exists, OBD gets this value from the oceanbase-ce of the depends.
    # observer_sys_password: # proxyro user password, consistent with oceanbase-ce's proxyro_password, can be empty. When a depends exists, OBD gets this value from the oceanbase-ce of the depends.

Changes made relative to the official template:

# servers section
# change to the actual IPs
ip: 11.114.0.20
ip: 11.114.0.5
ip: 11.114.0.6
# change the network adaptor name to the actual one
devname: eth0
# adjust the memory parameters
memory_limit: 8G
system_memory: 2G
# set the cluster name
appname: ob_cluster
# change the installation path (in all three server sections)
home_path: /data/observer
# obproxy section
servers:
- 11.114.0.20
home_path: /data/obproxy

The YAML file above is enough to deploy and start the cluster, but with these resource settings the later unit, pool, and tenant tests cannot be completed. A configuration adapted from one found online works for that purpose; it is pasted in full below and can be modified as needed.

## Only need to configure when remote login is required
# user:
#   username: root
#   password: 111111
#   key_file: your ssh-key file path if need
#   port: your ssh port, default 22
#   timeout: ssh connection timeout (second), default 30
oceanbase-ce:
  servers:
    - name: z1
      # Please don't use hostname, only IP can be supported
      ip: 192.168.43.89
    - name: z2
      ip: 192.168.43.233
    - name: z3
      ip: 192.168.43.223
  global:
    # Please set devname as the network adaptor's name whose ip is in the setting of servers.
    # if set servers as "127.0.0.1", please set devname as "lo"
    # if current ip is 192.168.1.10, and the ip's network adaptor's name is "eth0", please use "eth0"
    devname: ens33
    # change this to the actual network adaptor name on your own machines when building the distributed setup
    cluster_id: 1
    # please set memory limit to a suitable value which is matching resource.
    memory_limit: 8G
    # memory setting; 8G is the minimum and does not need to be changed
    system_memory: 4G
    stack_size: 512K
    cpu_count: 10
    # total number of CPU threads; if unsure, check with lscpu
    cache_wash_threshold: 1G
    __min_full_resource_pool_memory: 268435456
    workers_per_cpu_quota: 8
    # CPU worker threads; set according to your environment, i.e. how much of cpu_count you want to devote to its work
    schema_history_expire_time: 1d
    # The value of net_thread_count had better be same as cpu's core number.
    net_thread_count: 4
    major_freeze_duty_time: Disable
    minor_freeze_times: 10
    enable_separate_sys_clog: 0
    enable_merge_by_turn: FALSE
    datafile_disk_percentage: 20
    syslog_level: INFO
    enable_syslog_recycle: true
    max_syslog_file_count: 4
    # observer cluster name, consistent with obproxy's cluster_name
    appname: ob_cluster
    root_password:
    proxyro_password:
  z1:
    mysql_port: 2881
    rpc_port: 2882
    home_path: /data/observer
    zone: zone1
  z2:
    mysql_port: 2881
    rpc_port: 2882
    home_path: /data/observer
    zone: zone2
  z3:
    mysql_port: 2881
    rpc_port: 2882
    home_path: /data/observer
    zone: zone3
obproxy:
  servers:
    - 192.168.43.89
  global:
    listen_port: 2883
    prometheus_listen_port: 2884
    home_path: /data/obproxy
    # oceanbase root server list
    # format: ip:mysql_port;ip:mysql_port
    rs_list: 192.168.43.89:2881;192.168.43.233:2881;192.168.43.223:2881
    enable_cluster_checkout: false
    # observer cluster name, consistent with oceanbase-ce's appname
    cluster_name: ob_cluster
    obproxy_sys_password:
    observer_sys_password:
# The parameters above are explained in detail on the official site; a couple of the more important ones are annotated here. If any explanation is wrong, the official documentation prevails.

All of the steps below are performed as the admin user.

Install OBD offline

Upload all of the downloaded RPM packages to the /home/admin/rpm directory on oceanbase1.

sudo rpm -ivh ob-deploy-1.1.2-1.el7.x86_64.rpm

[Screenshot]

Build the local OceanBase mirror

Add the OceanBase offline packages to the local mirror.

cd /home/admin/rpm
obd mirror clone *.rpm
# Rename OBD's remote mirror directory so that OBD resolves packages from the local mirror instead of reaching out to the internet
mv /home/admin/.obd/mirror/remote /home/admin/.obd/mirror/remotebak
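To confirm the packages were added, the local mirror can be listed with OBD's mirror subcommand (the exact output columns may differ between OBD versions):

# List the packages available in OBD's local mirror
obd mirror list local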

Deploy with OBD

Deploy the cluster

obd cluster deploy ob_cluster -c /home/admin/ob_cluster.yaml

[Screenshot]

If passwordless SSH login is not configured correctly, the error shown in the screenshot above is reported; fix passwordless SSH and run the command again.

[Screenshot]

After the deploy succeeds, start the cluster.
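The start command itself is not shown in the text; it is the standard OBD start for the deployment name used above:

obd cluster start ob_cluster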

[Screenshot]

As shown above, ob_cluster started successfully.

Common reasons for a failed OceanBase cluster initialization (from the official Q&A board):

1. The clock skew between machines is too large; tools such as ntpq and clockdiff can be used to check the offset between machines.

2. Incorrectly specified configuration, for example a wrong zone name, or a network adaptor name that does not match its IP address.

3. Other problems, such as hardware faults.

The specific cause can be found in the logs (a quick check sketch follows this list):

observer.log — runtime log of the observer

rootserver.log — log of the rootserver on the observer
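A minimal sketch for checking the first cause and reading the logs; the log path assumes the home_path configured above (/data/observer), and clockdiff is provided by the iputils package:

# On oceanbase1: check clock offsets against the other nodes
ntpq -p
clockdiff 11.114.0.5
clockdiff 11.114.0.6
# On a failing node: look at the most recent observer and rootserver log entries
tail -n 100 /data/observer/log/observer.log
tail -n 100 /data/observer/log/rootserver.log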

Examples of startup failures

Because the memory parameters were initially set too high, start ob_cluster failed:

[Screenshot]

Again because the memory settings were too high, start observer succeeded but the cluster bootstrap failed:

[Screenshot]

In both of the above cases the cluster can be destroyed and deployed again; if the servers are already running, add the -f parameter to force the destroy (commands below).
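The corresponding OBD commands for the deployment name used above; -f force-kills observer processes that are still running:

obd cluster destroy ob_cluster -f
obd cluster deploy ob_cluster -c /home/admin/ob_cluster.yaml
obd cluster start ob_cluster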

[Screenshot]

Check the cluster status after installation

obd cluster list — check that the cluster status is running.

[Screenshot]

obd cluster display ob_cluster — view the information and status of the observers and the obproxy in the cluster.

[Screenshot]

Check the process information on each node of the OceanBase cluster.

OceanBase is single-process software; the process is named observer and can be viewed with the command below.

IPS="11.114.0.20 11.114.0.5 11.114.0.6"
for ob in $IPS; do echo $ob; ssh $ob "ps -ef | grep observer | grep -v grep "; done

[Screenshot]

From the process listing, the executable is /data/observer/bin/observer, which is actually a symbolic link.
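This can be confirmed directly (the path is the home_path configured earlier):

ls -l /data/observer/bin/observer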

[Screenshot]

When the process starts, a large number of parameters are passed with -o; these are the parameters specified earlier in the OBD cluster deployment configuration file.

Check the listening ports on each node of the OceanBase cluster.

IPS="11.114.0.20 11.114.0.5 11.114.0.6"
for ob in $IPS;do echo $ob; ssh $ob "netstat -ntlp"; done

[Screenshots]

Connect to the internal (sys) instance of the OceanBase cluster

A traditional mysql client can connect to OceanBase Community Edition, provided its version is 5.5/5.6/5.7. OceanBase also provides its own client tool, obclient, which must be installed before use. Unlike traditional MySQL, the OBPROXY connection port is 2883 and the username is root@sys#<cluster name>; the password is the one specified in the OBD configuration file earlier. No password was set in the configuration file above, so an empty password is used to log in.

mysql -h 11.114.0.20 -uroot@sys#ob_cluster -P2883 -p -c -A oceanbase

[Screenshot]

MySQL [oceanbase]> select a.zone,concat(a.svr_ip,':',a.svr_port) observer, cpu_total, (cpu_total-cpu_assigned) cpu_free, round(mem_total/1024/1024/1024) mem_total_gb, round((mem_total-mem_assigned)/1024/1024/1024) mem_free_gb, usec_to_time(b.last_offline_time) last_offline_time, usec_to_time(b.start_service_time) start_service_time, b.status, usec_to_time(b.stop_time) stop_time, b.build_version 
from __all_virtual_server_stat a join __all_server b on (a.svr_ip=b.svr_ip and a.svr_port=b.svr_port)
order by a.zone, a.svr_ip
;

[Screenshot]

Seeing the oceanbase database in the database list means the cluster was initialized successfully.

Example of installing and using obclient.

sudo rpm -ivh /home/admin/rpm/obclient-2.0.0-2.el8.x86_64.rpm /home/admin/rpm/libobclient-2.0.0-2.el8.x86_64.rpm

[Screenshot]

obclient -h 11.114.0.20 -uroot@sys#ob_cluster -P2883 -p -c -A oceanbase

[Screenshot]

At this point, the three-node OceanBase Community Edition deployment is complete.
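As mentioned when introducing the second YAML file, a deployment with enough resources can go on to create a resource unit, resource pool, and tenant. The sketch below runs the statements through obclient against the sys tenant; the unit sizes and the unit/pool/tenant names are illustrative assumptions, and the exact CREATE RESOURCE UNIT options should be checked against the official documentation for your OceanBase version:

obclient -h 11.114.0.20 -uroot@sys#ob_cluster -P2883 -c -A oceanbase <<'SQL'
-- resource unit: the per-server resource specification (values are assumptions)
create resource unit u_2c1g max_cpu=2, min_cpu=2, max_memory='1G', min_memory='1G',
       max_iops=10000, min_iops=1000, max_session_num=100000, max_disk_size='20G';
-- resource pool: one unit of that specification in each zone
create resource pool p_test unit='u_2c1g', unit_num=1, zone_list=('zone1','zone2','zone3');
-- tenant using the pool, accepting connections from any host
create tenant t_test resource_pool_list=('p_test'), primary_zone='RANDOM'
       set ob_tcp_invited_nodes='%';
SQL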

References:

https://www.cnblogs.com/pursuing-dreams/p/15137318.html

https://www.zhihu.com/column/c_1447223323843735552
