CentOS 8 Online and Offline K8S Cluster Setup

Posted by IT那活儿



1. Configuration

OS: CentOS 8

kernel:4.18.0-147.8.1.el8_1.x86_64

IP:

192.168.37.128 k8s1

192.168.37.130 k8s2

192.168.37.131 k8s3

Note: Kubernetes requires Linux kernel 3.10 or later; the installation will fail on older kernels.


2. Deploying the Kubernetes Cluster with kubeadm

(this mainly covers the online installation)

2.1 Configure the Hostnames

hostnamectl set-hostname k8s1

hostnamectl set-hostname k8s2

hostnamectl set-hostname k8s3
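
To confirm the change took effect on each node, a quick check (not part of the original steps):

hostnamectl
# the "Static hostname" line should now show k8s1 / k8s2 / k8s3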


2.2 Configure the IP Addresses

DEVICE=eth0

TYPE=Ethernet

ONBOOT=yes

BOOTPROTO=static

IPADDR=192.168.37.XXX

NETMASK=255.255.255.0

GATEWAY=192.168.37.2
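
These settings go into the interface's ifcfg file. The path below is an assumption (DEVICE=eth0 suggests /etc/sysconfig/network-scripts/ifcfg-eth0, but the interface name may differ on your hosts); on CentOS 8 the file is managed by NetworkManager, so one way to apply it is:

# assumes the settings were written to /etc/sysconfig/network-scripts/ifcfg-eth0
nmcli connection reload
nmcli connection up eth0
ip addr show eth0    # verify the new address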


2.3 Hostname Resolution

cat /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4

::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

#install_add

192.168.37.128 k8s1

192.168.37.130 k8s2

192.168.37.131 k8s3


2.4 Host Security Configuration

Stop and disable firewalld

systemctl stop firewalld

systemctl disable firewalld

firewall-cmd --state


SELinux configuration (requires a host reboot)

sed -ri 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
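
The sed change only takes effect after a reboot. To avoid waiting, SELinux can also be switched to permissive mode immediately (a common companion step, not shown in the original):

setenforce 0    # effective immediately, lasts until the next reboot
getenforce      # should print Permissive (or Disabled after rebooting)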


Permanently disable the swap partition (kubeadm requires swap to be off; reboot the OS after editing the file)

cat /etc/fstab


#

# /etc/fstab

# Created by anaconda on Sun May 10 07:55:21 2020

#

# Accessible filesystems, by reference, are maintained under '/dev/disk/'.

# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.

#

# After editing this file, run 'systemctl daemon-reload' to update systemd

# units generated from this file.

#

/dev/mapper/cl-root / xfs defaults 0 0

UUID=ed5f7f26-6aef-4bb2-b4df-27e46ee612bf /boot ext4 defaults 1 2

/dev/mapper/cl-home /home xfs defaults 0 0

#/dev/mapper/cl-swap swap swap defaults 0 0

Comment out the swap filesystem line by adding a # at the start of the line.

free -m

total used free shared buff/cache available

Mem: 1965 1049 85 9 830 771

Swap: 0 0 0
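
Commenting out the fstab entry only disables swap at the next boot. To turn swap off immediately without a reboot (a common shortcut, not in the original text):

swapoff -a    # disable all swap devices right away
free -m       # the Swap line should now show 0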


2.5 Enable Bridge Netfilter

cat /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

net.ipv4.ip_forward = 1

vm.swappiness = 0


Load the br_netfilter module

modprobe br_netfilter


Check that the module is loaded

lsmod | grep br_netfilter


Apply the configuration

sysctl -p /etc/sysctl.d/k8s.conf


2.6 Enable IPVS

Install ipset and ipvsadm

yum -y install ipset ipvsadm


Add the IPVS kernel modules (run on every node)

cat > /etc/sysconfig/modules/ipvs.modules <<EOF

#!/bin/bash

modprobe -- ip_vs

modprobe -- ip_vs_rr

modprobe -- ip_vs_wrr

modprobe -- ip_vs_sh

modprobe -- nf_conntrack_ipv4

EOF


Load and verify the modules

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
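
Note that /etc/sysconfig/modules/ is a legacy mechanism. On a systemd-based system such as CentOS 8, one alternative for loading the modules automatically at boot is a modules-load.d drop-in (a sketch, not from the original article; also, on some CentOS 8 kernels nf_conntrack_ipv4 has been merged into nf_conntrack, so if modprobe reports it missing, use nf_conntrack instead):

cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF
systemctl restart systemd-modules-load.service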


2.7 Install docker-ce

Configure the Docker yum repository

wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/docker-ce.repo


List the available Docker versions; this installation uses the latest one.

yum list docker-ce.x86_64 --showduplicates | sort -r


Install Docker

yum -y install docker-ce
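
After installation, Docker still needs to be enabled and started (the original jumps straight to the configuration files; this can be done before or after the changes in 2.8):

systemctl enable --now docker
docker version    # confirm both client and server respond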


2.8 Modify the Docker Configuration Files

1. Edit the ExecStart line to change Docker's default storage location.

cat /usr/lib/systemd/system/docker.service


[Unit]

Description=Docker Application Container Engine

Documentation=https://docs.docker.com

BindsTo=containerd.service

After=network-online.target firewalld.service containerd.service

Wants=network-online.target

Requires=docker.socket


[Service]

Type=notify

# the default is not to use systemd for cgroups because the delegate issues still

# exists and systemd currently does not support the cgroup feature set required

# for containers run by docker

ExecStart=/usr/bin/dockerd --graph /data/docker

ExecReload=/bin/kill -s HUP $MAINPID

TimeoutSec=0

RestartSec=2

Restart=always


# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.

# Both the old, and new location are accepted by systemd 229 and up, so using the old location

# to make them work for either version of systemd.

StartLimitBurst=3


# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.

# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make

# this option work for either version of systemd.

StartLimitInterval=60s


# Having non-zero Limit*s causes performance problems due to accounting overhead

# in the kernel. We recommend using cgroups to do container-local accounting.

LimitNOFILE=infinity

LimitNPROC=infinity

LimitCORE=infinity


# Comment TasksMax if your systemd version does not support it.

# Only systemd 226 and above support this option.

TasksMax=infinity


# set delegate yes so that systemd does not reset the cgroups of docker containers

Delegate=yes


# kill only the docker process, not all processes in the cgroup

KillMode=process


[Install]

WantedBy=multi-user.target


2. Add or modify daemon.json to set the default storage driver and domestic registry mirrors.

cat /etc/docker/daemon.json

{

"exec-opts": ["native.cgroupdriver=systemd"],

"log-driver": "json-file",

"log-opts": {

"max-size": "100m"

},

"storage-driver": "overlay2",

"storage-opts": [

"overlay2.override_kernel_check=true"

],

"registry-mirrors": [

"https://registry.docker-cn.com",

"http://hub-mirror.c.163.com",

"https://docker.mirrors.ustc.edu.cn"

]

}


3. After the changes, reload systemd and restart Docker.

systemctl daemon-reload

systemctl restart docker


Use docker info to check whether Registry Mirrors now shows the configured mirrors.
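
For example, a convenience filter (the plain docker info output contains the same information):

docker info | grep -A 4 'Registry Mirrors'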


2.9 Install kubectl, kubeadm, and kubelet

Configure the Aliyun Kubernetes yum repository (note: the second gpgkey URL must be indented to line up under the first https URL, otherwise the repo will fail to load).

cat kubernetes.repo

[kubernetes]

name=Kubernetes

baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/

enabled=1

gpgcheck=1

repo_gpgcheck=1

gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg


Run the installation

yum -y install kubectl kubeadm kubelet
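
This installs whatever version is newest in the repository. Since the images prepared later are v1.18.2, pinning the packages to the same release is safer; the exact version strings below are an assumption based on the article's image list:

yum -y install kubelet-1.18.2 kubeadm-1.18.2 kubectl-1.18.2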


2.10 Software Settings

This step mainly configures kubelet; skipping it can leave the cluster unable to start. To keep kubelet's cgroup driver consistent with the one Docker uses, modify the following file.

vim /etc/sysconfig/kubelet

KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"


Only enable it at boot for now; since no configuration has been generated yet, kubelet will start automatically once the cluster is initialized.

systemctl enable kubelet


2.11 Prepare the Cluster Container Images

1. Run kubeadm config images list to see which images the cluster needs.

kubeadm config images list

k8s.gcr.io/kube-apiserver:v1.18.2

k8s.gcr.io/kube-controller-manager:v1.18.2

k8s.gcr.io/kube-scheduler:v1.18.2

k8s.gcr.io/kube-proxy:v1.18.2

k8s.gcr.io/pause:3.2

k8s.gcr.io/etcd:3.4.3-0

k8s.gcr.io/coredns:1.6.7


2. Pull those images with docker pull (from the Aliyun registry mirror).

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.18.2

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.18.2

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.18.2

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.18.2

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.7
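
The same pulls can also be written as a short loop (equivalent to the commands above; the tags come from the kubeadm config images list output):

for img in kube-apiserver:v1.18.2 kube-controller-manager:v1.18.2 \
           kube-scheduler:v1.18.2 kube-proxy:v1.18.2 \
           pause:3.2 etcd:3.4.3-0 coredns:1.6.7; do
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/${img}
done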


3. List the downloaded images

docker images

REPOSITORY TAG IMAGE ID CREATED SIZE

calico/node latest 7695a13607d9 7 days ago 263MB

calico/cni latest c6f3d2c436a7 7 days ago 225MB

haproxy latest c033852569f1 3 weeks ago 92.4MB

registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy v1.18.2 0d40868643c6 4 weeks ago 117MB

registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver v1.18.2 6ed75ad404bd 4 weeks ago 173MB

registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler v1.18.2 a3099161e137 4 weeks ago 95.3MB

registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager v1.18.2 ace0a8c17ba9 4 weeks ago 162MB

osixia/keepalived latest d04966a100a7 2 months ago 72.9MB

registry.cn-hangzhou.aliyuncs.com/google_containers/pause 3.2 80d28bedfe5d 3 months ago 683kB

registry.cn-hangzhou.aliyuncs.com/google_containers/coredns 1.6.7 67da37a9a360 3 months ago 43.8MB

registry.cn-hangzhou.aliyuncs.com/google_containers/etcd 3.4.3-0 303ce5db0e90 6 months ago 288MB

calico/pod2daemon-flexvol v3.9.0 aa79ce3237eb 8 months ago 9.78MB

calico/cni v3.9.0 56c7969ed8e6 8 months ago 160MB

calico/kube-controllers v3.9.0 f5cc48269a09 8 months ago 50.4MB


2.12 Pull Images on the Worker Nodes

Worker nodes only need the kube-proxy and pause images (run the following on each worker node).

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.18.2

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2


After pulling, run docker images to check.


2.13 Initialize the K8S Cluster

kubeadm init --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --pod-network-cidr=172.16.0.0/16 --apiserver-advertise-address=192.168.37.128


The output looks like this:

I0920 13:31:38.444013 59901 version.go:252] remote version is much newer: v1.19.2; falling back to: stable-1.18

W0920 13:31:40.534993 59901 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]

[init] Using Kubernetes version: v1.18.9

[preflight] Running pre-flight checks

   [WARNING FileExisting-tc]: tc not found in system path

[preflight] Pulling images required for setting up a Kubernetes cluster

[preflight] This might take a minute or two, depending on the speed of your internet connection

[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'


[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

[kubelet-start] Starting the kubelet

[certs] Using certificateDir folder "/etc/kubernetes/pki"

[certs] Generating "ca" certificate and key

[certs] Generating "apiserver" certificate and key

[certs] apiserver serving cert is signed for DNS names [k8s1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.37.128]

[certs] Generating "apiserver-kubelet-client" certificate and key

[certs] Generating "front-proxy-ca" certificate and key

[certs] Generating "front-proxy-client" certificate and key

[certs] Generating "etcd/ca" certificate and key

[certs] Generating "etcd/server" certificate and key

[certs] etcd/server serving cert is signed for DNS names [k8s1 localhost] and IPs [192.168.37.128 127.0.0.1 ::1]

[certs] Generating "etcd/peer" certificate and key

[certs] etcd/peer serving cert is signed for DNS names [k8s1 localhost] and IPs [192.168.37.128 127.0.0.1 ::1]

[certs] Generating "etcd/healthcheck-client" certificate and key

[certs] Generating "apiserver-etcd-client" certificate and key

[certs] Generating "sa" key and public key

[kubeconfig] Using kubeconfig folder "/etc/kubernetes"

[kubeconfig] Writing "admin.conf" kubeconfig file

[kubeconfig] Writing "kubelet.conf" kubeconfig file

[kubeconfig] Writing "controller-manager.conf" kubeconfig file

[kubeconfig] Writing "scheduler.conf" kubeconfig file

[control-plane] Using manifest folder "/etc/kubernetes/manifests"

[control-plane] Creating static Pod manifest for "kube-apiserver"

[control-plane] Creating static Pod manifest for "kube-controller-manager"

W0920 13:33:01.598426 59901 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"

[control-plane] Creating static Pod manifest for "kube-scheduler"

W0920 13:33:01.606176 59901 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"

[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s

[apiclient] All control plane components are healthy after 19.504561 seconds

[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace

[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster

[upload-certs] Skipping phase. Please see --upload-certs

[mark-control-plane] Marking the node k8s1 as control-plane by adding the label "node-role.kubernetes.io/master=''"

[mark-control-plane] Marking the node k8s1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

[bootstrap-token] Using token: alu9wy.79pfunrsnxgvle0b

[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles

[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes

[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials

[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token

[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster

[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace

[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key

[addons] Applied essential addon: CoreDNS

[addons] Applied essential addon: kube-proxy


Your Kubernetes control-plane has initialized successfully!


To start using your cluster, you need to run the following as a regular user:


mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config


You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

https://kubernetes.io/docs/concepts/cluster-administration/addons/


Then you can join any number of worker nodes by running the following on each as root:


kubeadm join 192.168.37.128:6443 --token alu9wy.79pfunrsnxgvle0b \
    --discovery-token-ca-cert-hash sha256:8bc468f16a049ea94b4659bc2c58a6ddb5b4a2a53eff98051442363d585e3358


Parameter notes:

--image-repository: the images were pulled from the Aliyun registry, so the repository has to be specified here.

--pod-network-cidr: the CIDR for the Pod network.


After init finishes, run the steps from the output:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config
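
At this point the control plane is up but there is no Pod network yet, so the master node will typically report NotReady until section 2.15 is completed. A quick sanity check:

kubectl cluster-info
kubectl get nodes    # k8s1 shows NotReady until the CNI plugin is applied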


2.14 Pull the Calico Images and Manifest

Pull the Calico images with docker pull

docker pull calico/node

docker pull calico/cni

docker pull calico/pod2daemon-flexvol

docker pull calico/kube-controllers


Download the calico.yaml manifest

wget https://docs.projectcalico.org/manifests/calico.yaml


2.15 Modify calico.yaml

Add the following under the autodetect entry in the manifest (be careful to indent with spaces, never tabs; YAML is indentation-sensitive).

- name: IP_AUTODETECTION_METHOD

- name: CALICO_IPV4POOL_CIDR
  value: "172.16.0.0/16"


After editing, apply it:

kubectl apply -f calico.yaml
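
To confirm Calico came up, check the Pods in kube-system; the calico-node DaemonSet and the calico-kube-controllers Deployment (default names from the calico.yaml manifest) should reach Running:

kubectl get pods -n kube-system -o wide | grep calico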


2.16 Join the Worker Nodes to the Master

kubeadm join 192.168.37.128:6443 --token alu9wy.79pfunrsnxgvle0b \
    --discovery-token-ca-cert-hash sha256:8bc468f16a049ea94b4659bc2c58a6ddb5b4a2a53eff98051442363d585e3358


Afterwards, run kubectl get nodes on the master to check the cluster status:

NAME STATUS ROLES AGE VERSION

k8s1 Ready master 3d6h v1.18.2

k8s2 Ready <none> 3d6h v1.18.2

k8s3 Ready <none> 23h v1.18.2


Check the component status:

kubectl get cs

NAME STATUS MESSAGE ERROR

scheduler Healthy ok

controller-manager Healthy ok

etcd-0 Healthy {"health":"true"}


At this point the Kubernetes cluster is up.


Next, the offline installation. Production environments usually have no internet access, so an offline installation is needed.

3. Offline K8S Installation


1. The offline installation works by saving the Docker images used above, then uploading and loading them on the hosts without network access.

Save a Docker image with the docker save -o command:

docker save -o calico_node.tar calico/node:latest


Load a Docker image with the docker load -i command:

docker load -i calico_node.tar
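
For a full offline transfer, every image listed by docker images on the online host needs the same treatment. A small loop can bundle them all (a sketch; adjust the output directory as needed):

mkdir -p /tmp/k8s-images
for img in $(docker images --format '{{.Repository}}:{{.Tag}}' | grep -v '<none>'); do
  docker save -o /tmp/k8s-images/$(echo ${img} | tr '/:' '__').tar ${img}
done
# on the offline host, after copying the directory over:
# for f in /tmp/k8s-images/*.tar; do docker load -i ${f}; done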


2. The Kubernetes RPM packages can be downloaded locally as shown below; upload everything to the internal network for installation so that missing dependencies do not make the install fail.

yumdownloader --resolve kubelet kubeadm kubectl
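
A sketch of the full offline flow (yumdownloader is provided by the yum-utils package; install it first if missing):

mkdir -p /tmp/k8s-rpms && cd /tmp/k8s-rpms
yumdownloader --resolve kubelet kubeadm kubectl
# copy /tmp/k8s-rpms to the offline node, then on that node:
# cd /tmp/k8s-rpms && yum -y localinstall *.rpm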


3. Offline installation steps

The offline installation steps are the same as the online cluster initialization, so they are not repeated here.


4. Common Installation Problems and Fixes


1. kubectl commands fail with:

Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

Fix:

This is caused by a mismatched admin.conf. Delete $HOME/.kube and copy /etc/kubernetes/admin.conf into it again:

rm -rf $HOME/.kube

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config


2. The kubelet log reports:

Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"

Fix:

Due to the older operating system, the cgroup-related settings have to be supplied to kubelet explicitly through its configuration.

Edit the kubelet config file:

vim /etc/sysconfig/kubelet

Add the parameter:

--kubelet-cgroups=/systemd/system.slice

Restart kubelet:

systemctl restart kubelet





