Installing MariaDB on Ubuntu 20.04 in Linux Deploy
1. Install mariadb-server.
2. Run mysql_install_db.
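The original elides the exact commands; a minimal sketch of steps 1 and 2, assuming the default data directory /var/lib/mysql:
apt update
apt install mariadb-server
mysql_install_db --user=mysql --datadir=/var/lib/mysql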
When it finishes normally, you will see output like the following:
To start mysqld at boot time you have to copy
support-files/mysql.server to the right place for your system
PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER !
To do so, start the server, then issue the following commands:
'/usr/bin/mysqladmin' -u root password 'new-password'
'/usr/bin/mysqladmin' -u root -h localhost password 'new-password'
Alternatively you can run:
'/usr/bin/mysql_secure_installation'
which will also give you the option of removing the test
databases and anonymous user created by default. This is
strongly recommended for production servers.
See the MariaDB Knowledgebase at http://mariadb.com/kb or the
MySQL manual for more instructions.
You can start the MariaDB daemon with:
cd '/usr' ; /usr/bin/mysqld_safe --datadir='/var/lib/mysql'
You can test the MariaDB daemon with mysql-test-run.pl
cd '/usr/mysql-test' ; perl mysql-test-run.pl
Please report any problems at http://mariadb.org/jira
The latest information about MariaDB is available at http://mariadb.org/.
You can find additional information about the MySQL part at:
http://dev.mysql.com
Consider joining MariaDB's strong and vibrant community:
https://mariadb.org/get-involved/
3. Grant the mysql user the required filesystem permissions.
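The commands are not shown in the original; one plausible version, assuming the default paths (the socket/pid directory /run/mysqld may need to be created first):
mkdir -p /run/mysqld
chown -R mysql:mysql /run/mysqld
chown -R mysql:mysql /var/lib/mysql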
4. Start MariaDB.
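Linux Deploy chroots usually lack systemd, so starting via mysqld_safe is the safer bet; this is a sketch, not taken from the original:
mysqld_safe --datadir=/var/lib/mysql &
# or, where the sysvinit script is available:
service mysql start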
5. Run mysql_secure_installation.
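As the mysql_install_db output above suggests, run it as root and follow the prompts to set the root password and remove the test database and anonymous users:
/usr/bin/mysql_secure_installation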
6. Change the character set (see the sketch after step 7).
7. Finally, verify the character set.
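The original does not show how; a common approach on Ubuntu 20.04, assuming the stock MariaDB config layout under /etc/mysql/mariadb.conf.d/, is to set the server character set and then confirm it from the client:
cat << EOF >> /etc/mysql/mariadb.conf.d/50-server.cnf
[mysqld]
character-set-server = utf8mb4
collation-server = utf8mb4_general_ci
EOF
# restart MariaDB, then verify (step 7):
mysql -u root -p -e "SHOW VARIABLES LIKE 'character_set%';"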
Done.
Deploying Ceph on Ubuntu with ceph-deploy
- Ceph components and their functions:
Ceph's core components are the OSD, the Monitor, and the MDS.
OSD: short for Object Storage Device. Its main jobs are storing, replicating, rebalancing, and recovering data; it also exchanges heartbeats with other OSDs and reports state changes to the Ceph Monitor. Typically one disk maps to one OSD, and the OSD manages that disk's storage.
Monitor: as the name suggests, it watches over the Ceph cluster and maintains its health state, along with the cluster's various maps — the OSD Map, Monitor Map, PG Map, and CRUSH Map — collectively called the Cluster Map. The Cluster Map is a key RADOS data structure that tracks all cluster members, their relationships and attributes, and governs data distribution: for example, when a client wants to store data in the cluster, it first obtains the latest maps from a Monitor and then computes the data's final location from the maps and the object ID.
MDS: short for Ceph MetaData Server; it stores the metadata for the Ceph file system service. Object storage and block storage do not need this service.
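For illustration, once the cluster is up each of these components can be inspected from any node holding the admin keyring, using the standard ceph CLI:
ceph mon dump   # monitor map and quorum members
ceph osd tree   # OSDs and their place in the CRUSH hierarchy
ceph mds stat   # MDS state (only relevant once CephFS is used)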
- Ceph's data read/write flow
Data to be stored is first directed to a target pool. The data is split into objects (4 MB each by default), and each object is hashed to a placement group (PG); each PG is replicated three times by default (one primary, two replicas), with the copies placed on OSDs on different nodes.
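This mapping can be inspected directly with ceph osd map once a pool exists (for instance the mytest pool created later in this article); the object name here is an arbitrary example and need not exist, since the placement is computed rather than looked up:
ceph osd map mytest some-object   # prints pool id, PG, and the acting OSD set for the name 'some-object'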
- Deploying a Ceph cluster with ceph-deploy
3.1 Node information
| Node name | Public network | Cluster network | OS | Deployed components |
| --- | --- | --- | --- | --- |
| ceph-deploy | 192.168.202.141 | 192.168.122.141 | Ubuntu 18.04 | ceph-deploy |
| ubuntu-ceph1 | 192.168.202.142 | 192.168.122.142 | Ubuntu 18.04 | monitor, mgr, rgw, mds, osd |
| ubuntu-ceph2 | 192.168.202.143 | 192.168.122.143 | Ubuntu 18.04 | monitor, rgw, mds, osd |
| ubuntu-ceph3 | 192.168.202.144 | 192.168.122.144 | Ubuntu 18.04 | monitor, rgw, mds, osd |
Ceph version: 16.2.5 (Pacific)
ceph-deploy version: 2.0.1
Ceph networks: the cluster network is 192.168.122.0/24 and the public network is 192.168.202.0/24.
3.2 Install Ubuntu and prepare the environment (run on all nodes)
Configure IP addresses (the addresses shown are for the ceph-deploy node; adjust per node)
root@ubuntu:~# cat /etc/netplan/01-netcfg.yaml
# This file describes the network interfaces available on your system
# For more information, see netplan(5).
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: no
      dhcp6: no
      addresses: [192.168.202.141/24]
      gateway4: 192.168.202.1
      nameservers:
        addresses: [114.114.114.114]
    eth1:
      dhcp4: no
      dhcp6: no
      addresses: [192.168.122.141/24]
netplan apply
Disable the firewall
ufw status && ufw disable
Allow root SSH login
sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/g' /etc/ssh/sshd_config
sed -i 's/#PasswordAuthentication yes/PasswordAuthentication yes/g' /etc/ssh/sshd_config
systemctl restart sshd
passwd
Set the hostname (on each node, using that node's own name) and the hosts file
hostnamectl set-hostname ceph-deploy
cat << EOF >> /etc/hosts
192.168.202.141 ceph-deploy
192.168.202.142 ubuntu-ceph1
192.168.202.143 ubuntu-ceph2
192.168.202.144 ubuntu-ceph3
EOF
Switch to a local mirror
Reference: https://www.linuxidc.com/Linux/2018-08/153709.htm
cp /etc/apt/sources.list /etc/apt/sources.list.bak
cat << EOF > /etc/apt/sources.list
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic main restricted universe multiverse
deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-updates main restricted universe multiverse
deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-updates main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-backports main restricted universe multiverse
deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-backports main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-security main restricted universe multiverse
deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-security main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-proposed main restricted universe multiverse
deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-proposed main restricted universe multiverse
EOF
apt update
Install base packages
apt install iproute2 ntpdate tcpdump telnet traceroute nfs-kernel-server nfs-common lrzsz tree openssl libssl-dev libpcre3 libpcre3-dev zlib1g-dev gcc openssh-server iotop unzip zip
Time synchronization
apt install chrony -y
systemctl status chronyd
chronyc sources -v
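chrony's default pool servers work; if you prefer a nearby upstream, one option (ntp.aliyun.com is just an illustrative choice) is:
echo "server ntp.aliyun.com iburst" >> /etc/chrony/chrony.conf
systemctl restart chronyd
chronyc sources -v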
Add the Ceph repository
wget -q -O- 'https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc' | sudo apt-key add -
echo "deb https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic main" >> /etc/apt/sources.list
apt-get update
Create the deployment user and configure sudo
a. For security reasons we avoid using root directly and create an ordinary user, ceph-admin, for deployment and operations.
b. Since ceph-deploy installs packages on the nodes, this user needs passwordless sudo.
groupadd -r -g 2022 ceph-admin && useradd -r -m -s /bin/bash -u 2022 -g 2022 ceph-admin && echo ceph-admin:123456 | chpasswd
root@ceph-deploy:~# echo "ceph-admin ALL = NOPASSWD:ALL" | tee /etc/sudoers.d/ceph-admin
ceph-admin ALL = NOPASSWD:ALL
root@ceph-deploy:~# chmod 0440 /etc/sudoers.d/ceph-admin
root@ceph-deploy:~# ll /etc/sudoers.d/ceph-admin
-r--r----- 1 root root 30 Aug 22 03:01 /etc/sudoers.d/ceph-admin
Set up passwordless SSH (only on the ceph-deploy node)
Log in as ceph-admin:
ceph-admin@ceph-deploy:~$ ssh-keygen -t rsa
ceph-admin@ceph-deploy:~$ ssh-copy-id ceph-admin@ubuntu-ceph1
ceph-admin@ceph-deploy:~$ ssh-copy-id ceph-admin@ubuntu-ceph2
ceph-admin@ceph-deploy:~$ ssh-copy-id ceph-admin@ubuntu-ceph3
3.3 Deployment
Install the ceph-deploy tool
ceph-admin@ceph-deploy:~$ apt-cache madison ceph-deploy
ceph-admin@ceph-deploy:~$ sudo apt install ceph-deploy -y
Each Ubuntu server additionally needs Python 2:
apt install python2.7 -y && ln -sv /usr/bin/python2.7 /usr/bin/python2
Initialize the mon node
ceph-admin@ceph-deploy:~$ mkdir ceph-cluster  # holds the cluster's initial configuration files
ceph-admin@ceph-deploy:~$ cd ceph-cluster/
Note: ceph-deploy writes its output files to the current directory, so make sure you are in the ceph-cluster directory when running the deployment commands.
ceph-admin@ceph-deploy:~/ceph-cluster$ ceph-deploy new --cluster-network 192.168.122.0/24 --public-network 192.168.202.0/24 ubuntu-ceph1
ceph-admin@ceph-deploy:~/ceph-cluster$ ll
total 20
drwxrwxr-x 2 ceph-admin ceph-admin 4096 Aug 22 03:30 ./
drwxr-xr-x 4 ceph-admin ceph-admin 4096 Aug 22 03:30 ../
-rw-rw-r-- 1 ceph-admin ceph-admin  274 Aug 22 03:30 ceph.conf
-rw-rw-r-- 1 ceph-admin ceph-admin 3404 Aug 22 03:30 ceph-deploy-ceph.log
-rw------- 1 ceph-admin ceph-admin   73 Aug 22 03:30 ceph.mon.keyring
ceph-admin@ceph-deploy:~/ceph-cluster$ cat ceph.conf
[global]
fsid = ab782bbc-84bb-4815-8146-914012677b03
public_network = 192.168.202.0/24
cluster_network = 192.168.122.0/24
mon_initial_members = ubuntu-ceph1
mon_host = 192.168.202.142
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
Initialize the node(s)
Install the Ceph packages on the specified nodes. The --no-adjust-repos flag makes ceph-deploy use the locally configured repositories instead of generating the official ones.
ceph-admin@ceph-deploy:~/ceph-cluster$ ceph-deploy install --no-adjust-repos --nogpgcheck ubuntu-ceph1 ubuntu-ceph2 ubuntu-ceph3
This step works through the specified Ceph nodes serially, one server at a time, setting up the package repositories and installing ceph and ceph-radosgw.
Configure the mon nodes and generate and distribute the keys:
Install the ceph-mon component on each mon node and then initialize the mon nodes; for mon HA, more mons can be added later to scale out.
Install:
root@ubuntu-ceph1:~# apt install ceph-mon
root@ubuntu-ceph2:~# apt install ceph-mon
root@ubuntu-ceph3:~# apt install ceph-mon
Generate the keys:
ceph-admin@ceph-deploy:~/ceph-cluster$ ceph-deploy mon create-initial
Verify the mon nodes:
Verify that the ceph-mon service has been installed and started automatically on the mon nodes. The initialization also generates bootstrap keyring files for the mds/mgr/osd/rgw services in the ceph-deploy working directory; these files grant the highest privileges on the Ceph cluster, so keep them safe.
root@ubuntu-ceph1:~# ps -ef | grep ceph-mon
ceph   13606     1  0 05:43 ?      00:00:02 /usr/bin/ceph-mon -f --cluster ceph --id ubuntu-ceph1 --setuser ceph --setgroup ceph
root   13910 13340  0 05:53 pts/1  00:00:00 grep --color=auto ceph-mon
Distribute the admin key
To manage the cluster from the ceph-deploy node:
root@ceph-deploy:~# apt install ceph-common -y  # install the common Ceph components first
root@ubuntu-ceph1:~# apt install ceph-common -y
root@ubuntu-ceph2:~# apt install ceph-common -y
root@ubuntu-ceph3:~# apt install ceph-common -y
ceph-admin@ceph-deploy:~/ceph-cluster$ ceph-deploy admin ubuntu-ceph1 ubuntu-ceph2 ubuntu-ceph3
Verify from a node:
root@ubuntu-ceph1:~# ceph -s
  cluster:
    id:     c84ff7b7-f8d1-44ac-a87e-e4048e3bebc5
    health: HEALTH_WARN
            mons are allowing insecure global_id reclaim   # insecure global_id reclaim needs to be disabled
  services:
    mon: 3 daemons, quorum ubuntu-ceph1,ubuntu-ceph2,ubuntu-ceph3 (age 22m)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
root@ubuntu-ceph1:~# ceph config set mon auth_allow_insecure_global_id_reclaim false
root@ubuntu-ceph1:~# ceph -s
  cluster:
    id:     c84ff7b7-f8d1-44ac-a87e-e4048e3bebc5
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum ubuntu-ceph1,ubuntu-ceph2,ubuntu-ceph3 (age 24m)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
Deploy the ceph-mgr node
Initialize the ceph-mgr node:
root@ubuntu-ceph1:~# apt install ceph-mgr
ceph-admin@ceph-deploy:~/ceph-cluster$ ceph-deploy mgr create ubuntu-ceph1
Managing the Ceph cluster from ceph-deploy:
Configure the environment on the ceph-deploy node so that Ceph admin commands can be run there later.
root@ceph-deploy:~# apt install ceph-common
ceph-admin@ceph-deploy:~/ceph-cluster$ ceph-deploy admin ceph-deploy  # push the admin keyring to this node itself
Grant the ceph-admin user access to the ceph commands:
root@ceph-deploy:~# setfacl -m u:ceph-admin:rw /etc/ceph/ceph.client.admin.keyring
Prepare the OSD nodes
Install the OSD runtime environment:
# before adding OSDs, install the base environment on the nodes:
ceph-admin@ceph-deploy:~/ceph-cluster$ ceph-deploy install --no-adjust-repos --nogpgcheck ubuntu-ceph1 ubuntu-ceph2 ubuntu-ceph3
Zap the disks:
ceph-deploy disk zap ubuntu-ceph1 /dev/sdb
ceph-deploy disk zap ubuntu-ceph1 /dev/sdc
ceph-deploy disk zap ubuntu-ceph1 /dev/sdd
ceph-deploy disk zap ubuntu-ceph1 /dev/sde
ceph-deploy disk zap ubuntu-ceph2 /dev/sdb
ceph-deploy disk zap ubuntu-ceph2 /dev/sdc
ceph-deploy disk zap ubuntu-ceph2 /dev/sdd
ceph-deploy disk zap ubuntu-ceph2 /dev/sde
ceph-deploy disk zap ubuntu-ceph3 /dev/sdb
ceph-deploy disk zap ubuntu-ceph3 /dev/sdc
ceph-deploy disk zap ubuntu-ceph3 /dev/sdd
ceph-deploy disk zap ubuntu-ceph3 /dev/sde
Add the hosts' disks as OSDs
How the data is laid out on an OSD:
Data: the object data stored by Ceph
block.db: the RocksDB data, i.e. the metadata
block.wal: the database's write-ahead log (WAL)
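With BlueStore, ceph-deploy osd create also accepts --block-db and --block-wal to put the RocksDB and WAL on separate, faster devices. A hypothetical example (the NVMe partition names are illustrative; the deployment below keeps everything on the data disk):
ceph-deploy osd create ubuntu-ceph1 --data /dev/sdb --block-db /dev/nvme0n1p1 --block-wal /dev/nvme0n1p2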
OSD IDs are assigned sequentially starting from 0. These create IDs 0-3:
ceph-deploy osd create ubuntu-ceph1 --data /dev/sdb
ceph-deploy osd create ubuntu-ceph1 --data /dev/sdc
ceph-deploy osd create ubuntu-ceph1 --data /dev/sdd
ceph-deploy osd create ubuntu-ceph1 --data /dev/sde
IDs 4-7:
ceph-deploy osd create ubuntu-ceph2 --data /dev/sdb
ceph-deploy osd create ubuntu-ceph2 --data /dev/sdc
ceph-deploy osd create ubuntu-ceph2 --data /dev/sdd
ceph-deploy osd create ubuntu-ceph2 --data /dev/sde
IDs 8-11:
ceph-deploy osd create ubuntu-ceph3 --data /dev/sdb
ceph-deploy osd create ubuntu-ceph3 --data /dev/sdc
ceph-deploy osd create ubuntu-ceph3 --data /dev/sdd
ceph-deploy osd create ubuntu-ceph3 --data /dev/sde
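At this point all 12 OSDs should be up and in; a quick sanity check from any node holding the admin keyring:
ceph osd stat   # expect something like "12 osds: 12 up, 12 in"
ceph osd tree   # per-host OSD layout and weights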
- Testing data upload and download
Create a pool
To read or write data, a client must first connect to a pool in the RADOS cluster; the relevant CRUSH rule then resolves the object's final placement from the object name. So, to test the cluster's data path, we first create a test pool named mytest with a PG count of 32.
ceph-admin@ceph-deploy:~$ ceph osd pool create mytest 32 32
pool 'mytest' created
ceph-admin@ceph-deploy:~$ ceph osd pool ls
device_health_metrics
mytest
ceph-admin@ceph-deploy:~$ ceph pg ls-by-pool mytest | awk '{print $1,$2,$15}'   # verify the PG/PGP combinations
PG OBJECTS ACTING
2.0 0 [7,10,3]p7
2.1 0 [2,8,6]p2
2.2 0 [5,1,9]p5
2.3 0 [5,2,9]p5
2.4 0 [1,10,6]p1
2.5 0 [8,0,4]p8
2.6 0 [1,8,4]p1
2.7 0 [6,10,2]p6
2.8 0 [7,9,0]p7
2.9 0 [1,7,10]p1
2.a 0 [11,3,7]p11
2.b 0 [8,7,2]p8
2.c 0 [11,0,5]p11
2.d 0 [9,4,3]p9
2.e 0 [2,9,7]p2
2.f 0 [8,4,0]p8
2.10 0 [8,1,5]p8
2.11 0 [6,1,9]p6
2.12 0 [10,3,7]p10
2.13 0 [9,4,3]p9
2.14 0 [6,9,0]p6
2.15 0 [9,1,5]p9
2.16 0 [5,11,1]p5
2.17 0 [6,10,2]p6
2.18 0 [9,4,2]p9
2.19 0 [3,6,8]p3
2.1a 0 [6,8,2]p6
2.1b 0 [11,7,3]p11
2.1c 0 [10,7,1]p10
2.1d 0 [10,7,0]p10
2.1e 0 [3,10,5]p3
2.1f 0 [0,7,8]p0
* NOTE: afterwards
Upload a file
ceph-admin@ceph-deploy:~$ sudo rados put msg1 /var/log/syslog --pool=mytest  # upload the syslog file to mytest with object id msg1
List objects
ceph-admin@ceph-deploy:~$ sudo rados ls --pool=mytest
msg1
Object placement info
ceph-admin@ceph-deploy:~$ ceph osd map mytest msg1
osdmap e129 pool 'mytest' (2) object 'msg1' -> pg 2.c833d430 (2.10) -> up ([8,1,5], p8) acting ([8,1,5], p8)
Download the object
ceph-admin@ceph-deploy:~$ sudo rados get msg1 --pool=mytest /tmp/my.txt
Overwrite the object
ceph-admin@ceph-deploy:~$ sudo rados put msg1 /etc/passwd --pool=mytest
ceph-admin@ceph-deploy:~$ sudo rados get msg1 --pool=mytest /tmp/2.txt
Delete the object
ceph-admin@ceph-deploy:~$ sudo rados rm msg1 --pool=mytest
ceph-admin@ceph-deploy:~$ sudo rados ls --pool=mytest
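If you later want to drop the test pool, deletion has to be enabled explicitly first; a sketch (both are standard commands, but destructive):
ceph tell 'mon.*' injectargs '--mon-allow-pool-delete=true'
ceph osd pool rm mytest mytest --yes-i-really-really-mean-it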