Installing a Kubernetes Cluster on Kylin SP10 Server Edition
Posted by 正月十六工作室
Kubernetes Cluster Installation
1. Planning
The cluster consists of three machines running Kylin SP10 Server Edition; the layout below is reconstructed from the hostnames and addresses used in the steps that follow:
master 192.168.82.10 (control plane)
node1 192.168.82.20 (worker)
node2 192.168.82.21 (worker)
2. System Initialization
(1) Set the hostname on the master, node1, and node2 nodes.
[root@master ~]# hostnamectl set-hostname master
[root@master ~]# bash
[root@node1 ~]# hostnamectl set-hostname node1
[root@node1 ~]# bash
[root@node2 ~]# hostnamectl set-hostname node2
[root@node2 ~]# bash
(2) Configure the IP address on the master, node1, and node2 nodes.
[root@master ~]# nmcli connection modify ens33 ipv4.addresses 192.168.82.10/24 ipv4.gateway 192.168.82.254 ipv4.dns 192.168.82.254,114.114.114.114 ipv4.method manual autoconnect yes
[root@master ~]# nmcli connection up ens33
[root@node1 ~]# nmcli connection modify ens33 ipv4.addresses 192.168.82.20/24 ipv4.gateway 192.168.82.254 ipv4.dns 192.168.82.254,114.114.114.114 ipv4.method manual autoconnect yes
[root@node1 ~]# nmcli connection up ens33
[root@node2 ~]# nmcli connection modify ens33 ipv4.addresses 192.168.82.21/24 ipv4.gateway 192.168.82.254 ipv4.dns 192.168.82.254,114.114.114.114 ipv4.method manual autoconnect yes
[root@node2 ~]# nmcli connection up ens33
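To confirm that each address took effect, the interface can be checked afterwards (a quick verification, not part of the original steps; the inet line should show the address just assigned):
[root@master ~]# ip addr show ens33 | grep 'inet '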
(3) Use the "vim" command to edit the local name-resolution file 【/etc/hosts】 on the master node and add entries for all three nodes; when finished, copy the file to node1 and node2 with the "scp" command.
[root@master ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.82.10 master //these three entries are the lines to add
192.168.82.20 node1
192.168.82.21 node2
[root@master ~]# for host in node1 node2;do scp /etc/hosts root@$host:/etc/hosts;done
Authorized users only. All activities may be monitored and reported.
hosts 100% 220 137.1KB/s 00:00
Authorized users only. All activities may be monitored and reported.
hosts 100% 220 200.8KB/s 00:00
(4) Verify connectivity among the three nodes: run the "ping master" command on each node; a normal reply indicates the configuration is correct.
[root@master ~]# ping master -c 1
PING master (192.168.82.10) 56(84) bytes of data.
64 bytes from master (192.168.82.10): icmp_seq=1 ttl=64 time=0.048 ms
[root@node1 ~]# ping master -c 1
PING master (192.168.82.10) 56(84) bytes of data.
64 bytes from master (192.168.82.10): icmp_seq=1 ttl=64 time=1.31 ms
[root@node2 ~]# ping master -c 1
PING master (192.168.82.10) 56(84) bytes of data.
64 bytes from master (192.168.82.10): icmp_seq=1 ttl=64 time=1.01 ms
(5) Files and images will later be uploaded to the node machines, so for convenience configure passwordless login from the master node.
[root@master ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:kYn4oSZXOlbNaEeimOpqYuARrW5JIiQUxRLab0xFO+8 root@ms-student
The key's randomart image is:
+---[RSA 2048]----+
| o=. .+ . |
|.+ + + O o |
|o * + X B |
|.+ = B = . |
|+ + @ . S |
|=+ * . . |
|B.o E |
|oB |
|* |
+----[SHA256]-----+
[root@master ~]# for host in node1 node2;do ssh-copy-id $host;done
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
Authorized users only. All activities may be monitored and reported.
root@node1's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'node1'"
and check to make sure that only the key(s) you wanted were added.
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
Authorized users only. All activities may be monitored and reported.
root@node2's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'node2'"
and check to make sure that only the key(s) you wanted were added.
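With the keys distributed, passwordless access can be spot-checked from the master node (a small verification sketch; each node should print its hostname without prompting for a password):
[root@master ~]# for host in node1 node2;do ssh $host hostname;done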
(6) On the master, node1, and node2 nodes, use the "yum" command to install the dependency packages and common tools that Kubernetes needs. (The master node is used as the example here.)
[root@master ~]# yum install -y conntrack ntpdate ntp ipvsadm ipset iptables curl sysstat libseccomp wget vim net-tools git bash-completion iptables-services lrzsz
[root@master ~]# source /etc/profile
(7) Configure the time-synchronization (chronyd) service on the master node: the upstream time servers are 【ntp1.aliyun.com】 and 【ntp2.aliyun.com】, clients in the 【192.168.82.0/24】 network are allowed to synchronize, and even if the upstream servers become unreachable, the local clock is still served as a reference to clients.
[root@master ~]# vim /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#pool pool.ntp.org iburst
#server ntp.ntsc.ac.cn iburst
server ntp1.aliyun.com iburst
server ntp2.aliyun.com iburst
#server cn.pool.ntp.org iburst
# Allow NTP client access from local network.
allow 192.168.82.0/24
# Serve time even if not synchronized to a time source.
local stratum 10
(8) After the configuration is complete, restart the chronyd service and enable it at boot.
[root@master ~]# systemctl restart chronyd
[root@master ~]# systemctl enable chronyd
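Whether the master is actually reaching the Aliyun servers can be checked with "chronyc" (a verification sketch; an asterisk in the first column marks the currently selected source):
[root@master ~]# chronyc sources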
(9) On node1 and node2, configure the chronyd service to use 【192.168.82.10】 as its time server; after editing, restart chronyd and enable it at boot. (node1 is used as the example here.)
[root@node1 ~]# cat /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#pool pool.ntp.org iburst
server 192.168.82.10 iburst
#server cn.pool.ntp.org iburst
[root@node1 ~]# systemctl restart chronyd
[root@node1 ~]# systemctl enable chronyd
(10) Use the relevant "chronyc" commands to verify that the node's local time is synchronizing properly. (node1 is used as the example here.)
[root@node1 ~]# chronyc sources -v
210 Number of sources = 1
  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| /   '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* master 3 6 17 40 +14us[ +139us] +/- 9638us
(11) On the master, node1, and node2 nodes, stop the 【firewalld】 service and disable it at boot. (The master node is used as the example here.)
[root@master ~]# systemctl stop firewalld
[root@master ~]# systemctl disable firewalld
(12) On the master, node1, and node2 nodes, start the 【iptables】 service, flush its built-in rules, and enable it at boot. (The master node is used as the example here.)
[root@master ~]# systemctl start iptables
[root@master ~]# systemctl enable iptables
[root@master ~]# iptables -F
[root@master ~]# service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables: [ OK ]
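The flushed rule set can be inspected before moving on (a quick check; the INPUT, FORWARD, and OUTPUT chains should be empty with an ACCEPT policy):
[root@master ~]# iptables -L -n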
(13) On the master, node1, and node2 nodes, put SELinux into permissive mode for the current session with "setenforce", and edit its configuration file 【/etc/selinux/config】 to set the SELINUX option to disabled. (The master node is used as the example here.)
[root@master ~]# setenforce 0
setenforce: SELinux is disabled
[root@master ~]# getenforce
Disabled
[root@master ~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
##Note: when editing with "sed", do not target the file at 【/etc/sysconfig/selinux】: "sed -i" rewrites the file in place and would break that symbolic link.
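The edit can be confirmed without reopening the file (a quick check; the sed above sets exactly this value):
[root@master ~]# grep '^SELINUX=' /etc/selinux/config
SELINUX=disabled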
(14) On the master, node1, and node2 nodes, turn off swap and comment out the swap entry in the global mount configuration file 【/etc/fstab】. (The master node is used as the example here.)
[root@master ~]# swapoff -a
[root@master ~]# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
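Whether swap is really off can be confirmed with "free" (a quick check; the Swap line should read all zeros):
[root@master ~]# free -h | grep -i swap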
(15) On the master, node1, and node2 nodes, tune the kernel parameters: use the "vim" command to create a 【kubernetes.conf】 file in the 【~】 directory containing the parameters below, then copy it into the 【/etc/sysctl.d】 directory. (The master node is used as the example here.)
[root@master ~]# vim kubernetes.conf
net.bridge.bridge-nf-call-iptables=1 #pass bridged packets to iptables for processing
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1 #enable IP forwarding
net.ipv4.tcp_tw_recycle=0 #disable fast recycling of TIME_WAIT sockets
net.netfilter.nf_conntrack_max=2310720 #raise the conntrack table size (default 65536)
vm.swappiness=0 #avoid using swap space
vm.overcommit_memory=1 #always allow memory overcommit
vm.panic_on_oom=0 #do not panic on OOM; let the OOM killer act
fs.inotify.max_user_instances=8192 #raise the per-user limit on inotify instances
fs.inotify.max_user_watches=1048576 #raise the per-user limit on inotify watches
fs.file-max=52706963 #raise the system-wide file descriptor limit
fs.nr_open=52706963 #raise the per-process open file handle limit
[root@master ~]# cp kubernetes.conf /etc/sysctl.d/kubernetes.conf
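Files under 【/etc/sysctl.d】 are read at boot; to load the new parameters immediately without rebooting, they can be applied by hand (a reasonable extra step not shown in the original; sysctl may warn about keys the running kernel no longer has, such as tcp_tw_recycle on kernels 4.12 and later, and will apply the rest):
[root@master ~]# sysctl -p /etc/sysctl.d/kubernetes.conf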
(16) On the master, node1, and node2 nodes, configure the 【rsyslogd】 and 【systemd-journald】 services to work together so that logs are stored persistently. (The master node is used as the example here.)
[root@master ~]# mkdir /var/log/journal
[root@master ~]# mkdir /etc/systemd/journald.conf.d
[root@master ~]# cd /etc/systemd/journald.conf.d
[root@master journald.conf.d]# vim Jan16.conf
[Journal]
Storage=persistent
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
SystemMaxUse=10G
SystemMaxFileSize=200M
MaxRetentionSec=2week
ForwardToSyslog=no
(17) After the configuration is complete, use the "systemctl" command to restart the 【systemd-journald】 service and check its status. (The master node is used as the example here.)
[root@master ~]# systemctl restart systemd-journald
[root@master ~]# systemctl status systemd-journald
● systemd-journald.service - Journal Service
Loaded: loaded (/usr/lib/systemd/system/systemd-journald.service; static; vendor preset: disabled)
Active: active (running) since Sun 2022-04-10 19:13:33 CST; 5h 4min ago
Docs: man:systemd-journald.service(8)
man:journald.conf(5)
Main PID: 692 (systemd-journal)
Status: "Processing requests..."
Tasks: 1
Memory: 19.5M
CGroup: /system.slice/systemd-journald.service
└─692 /usr/lib/systemd/systemd-journald
Apr 10 19:13:33 master systemd-journald[692]: Journal started
Apr 10 19:13:33 master systemd-journald[692]: Runtime Journal (/run/log/journal/8d41f69e06c5446e9b9f5d2c3c5403e0) is 8.0M, max 144.4M, 136.4M free.
Apr 10 19:13:33 master systemd[1]: systemd-journald.service: Succeeded.
Apr 10 19:13:34 master systemd-journald[692]: Time spent on flushing to /var is 143.209ms for 1789 entries.
Apr 10 19:13:34 master systemd-journald[692]: System Journal (/var/log/journal/8d41f69e06c5446e9b9f5d2c3c5403e0) is 32.0M, max 10.0G, 9.9G free.
(18) On the master, node1, and node2 nodes, use the "modprobe" command to load the kernel modules required by Kubernetes (kube-proxy), and create the configuration file 【/etc/sysconfig/modules/ipvs.modules】 so that the required modules are loaded again after the system starts. (The master node is used as the example here.)
[root@master ~]# modprobe br_netfilter
[root@master ~]# cat /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
[root@master ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules
[root@master ~]# bash /etc/sysconfig/modules/ipvs.modules
(19) After the script has been executed, use the "lsmod" command on the master, node1, and node2 nodes to list the modules now loaded into the kernel. (The master node is used as the example here.)
[root@master ~]# lsmod | grep -e ip_vs -e nf_conntrack
ip_vs_sh 16384 0
ip_vs_wrr 16384 0
ip_vs_rr 16384 0
ip_vs 176128 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack_netlink 49152 0
nfnetlink 16384 3 nf_conntrack_netlink,nf_tables
nf_conntrack 163840 7 xt_conntrack,nf_nat,ipt_MASQUERADE,nf_nat_ipv4,xt_nat,nf_conntrack_netlink,ip_vs
nf_defrag_ipv6 20480 2 nf_conntrack,ip_vs
nf_defrag_ipv4 16384 1 nf_conntrack
libcrc32c 16384 4 nf_conntrack,nf_nat,xfs,ip_vs
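Note that 【/etc/sysconfig/modules】 is a convention inherited from older Red Hat releases and is not run automatically by systemd; the script above only loads the modules for the current boot. If they are missing after a reboot, a drop-in under 【/etc/modules-load.d】 is the systemd-native way to reload them (a sketch assuming the same module list):
[root@master ~]# cat /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack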
3. Docker Installation
(1) On the master, node1, and node2 nodes, use the "yum" command to install the packages Docker depends on. (The master node is used as the example here.)
[root@master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
(2) On the master, node1, and node2 nodes, use the "vim" command to add the repositories Docker requires. (The master node is used as the example here.)
[root@master ~]# cd /etc/yum.repos.d/
[root@master yum.repos.d]# vim docker.repo
[Kylin-base]
name=Kylin-base
baseurl=https://mirrors.163.com/centos/7/os/$basearch
enabled=1
gpgcheck=0
[Kylin-extras]
name=Kylin-extras
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/7/extras/$basearch
enabled=1
gpgcheck=0
[docker]
name=docker
baseurl=https://mirrors.163.com/docker-ce/linux/centos/7Server/x86_64/stable/
enabled=1
gpgcheck=0
(3) With the repositories added, use the "yum" command on the master, node1, and node2 nodes to clear the cache and rebuild it. (The master node is used as the example here.)
[root@master ~]# yum clean all
[root@master ~]# yum makecache
Repository Kylin-base is listed more than once in the configuration
Repository Kylin-extras is listed more than once in the configuration
Repository docker is listed more than once in the configuration
Kylin-base 14 kB/s | 3.6 kB 00:00
Kylin-extras 570 B/s | 2.9 kB 00:05
docker 13 kB/s | 3.5 kB 00:00
Kylin Linux Advanced Server 10 - Os 21 kB/s | 3.7 kB 00:00
Kylin Linux Advanced Server 10 - Updates 574 B/s | 2.9 kB 00:05
Metadata cache created.
# Note: if you fetch a repository file with "wget" or "curl", check the values of the 【$releasever】 and 【$basearch】 variables:
# the release number or architecture of a domestic OS may differ from other distributions (such as CentOS or RedHat). Here 【$releasever】 can be forced to 7
# by typing 【:%s/\$releasever/7/g】 in vim command mode.
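The same substitution can also be done non-interactively (an equivalent sketch using "sed" on the repo file created above):
[root@master ~]# sed -i 's/\$releasever/7/g' /etc/yum.repos.d/docker.repo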
The two variables can be queried as follows:
[root@master ~]# rpm -qi kylin-release
Name : kylin-release
Version : 10
Release : 24.6.p41.ky10
Architecture: x86_64
Install Date: Sat 09 Apr 2022 10:02:28 PM CST
Group : Unspecified
Size : 147802
License : Mulan PSL v1
Signature : RSA/SHA1, Mon 24 May 2021 08:22:13 PM CST, Key ID 41f8aebe7a486d9f
Source RPM : kylin-release-10-24.6.p41.ky10.src.rpm
Build Date : Mon 24 May 2021 08:05:28 PM CST
Build Host : kojibuilder3
Packager : Kylin Linux
Vendor : KylinSoft
Summary : kylin release file
Description :
kylin release files
[root@master ~]# arch
x86_64 #this is the value of $basearch
(4) On the master, node1, and node2 nodes, use the "yum" command to install Docker; once the installation finishes, start the service and enable it at boot.
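A sketch of how this step presumably continues, using the docker repository added above (an assumption based on the truncated step text; the exact package set available on Kylin SP10 may differ):
[root@master ~]# yum install -y docker-ce docker-ce-cli containerd.io
[root@master ~]# systemctl enable --now docker
[root@master ~]# docker version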