linux12 Enterprise Practice -- 12: Deploying a K8s Cluster on Ubuntu



0. Architecture Diagram

1. Network and System Settings

1.1 Proxy settings in /etc/profile

sudo vi /etc/profile

Comment out the proxy configuration lines, then reload the profile:

source /etc/profile
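For reference, the proxy lines commented out in /etc/profile usually look like the following (the proxy host and port here are placeholders, not values from the original environment):

# export http_proxy=http://proxy.example.com:8080
# export https_proxy=http://proxy.example.com:8080
# export ftp_proxy=http://proxy.example.com:8080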

1.2 wget proxy

sudo vi /etc/wgetrc
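The proxy entries in /etc/wgetrc look roughly like this and should be commented out (the proxy host and port are placeholders):

# https_proxy = http://proxy.example.com:8080/
# http_proxy = http://proxy.example.com:8080/
# ftp_proxy = http://proxy.example.com:8080/
# use_proxy = on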

1.3 Clear proxy environment variables

env | grep -i proxy

unset HTTPS_PROXY
unset HTTP_PROXY
unset FTP_PROXY
unset https_proxy
unset http_proxy
unset ftp_proxy

After the changes, try wget https://www.baidu.com and curl https://www.baidu.com to confirm that direct access works.

1.4 DNS settings

Leave unchanged for now.

1.5 NTP

Configure: sudo vim /etc/ntp.conf

Check status: sudo service ntp status
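A minimal sketch of the server entries that might go into /etc/ntp.conf, assuming public NTP pools are reachable (replace with internal NTP servers if your network requires it):

server ntp.aliyun.com iburst
server cn.pool.ntp.org iburst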

1.6 Timezone selection

tzselect (this step is optional)

sudo cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
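To confirm the timezone change took effect (timedatectl is available on systemd-based Ubuntu):

date
timedatectl | grep "Time zone"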

2. Update the apt sources

# Back up the configuration file first
sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak
# Edit the configuration file
sudo vim /etc/apt/sources.list
# Add the following lines at the top of the file (Aliyun mirrors)
deb http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse
# Refresh the package index and upgrade
sudo apt-get update
sudo apt-get upgrade

3. Change the hostname (important)

sudo vi /etc/hostname
sudo hostname xxxxxx
# Run the hostname command to check that the current hostname matches the contents of /etc/hostname
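On systemd-based Ubuntu the same change can also be made in one step with hostnamectl (the node name k8s-node01 below is only an example):

sudo hostnamectl set-hostname k8s-node01
hostname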

4. Install Docker and docker-compose

4.1 Install Docker

https://www.runoob.com/docker/ubuntu-docker-install.html

# Install apt dependencies needed to fetch the repository over HTTPS:
sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common
# Add Docker's official GPG key (via the USTC mirror):
curl -fsSL https://mirrors.ustc.edu.cn/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
# Set up the stable repository
sudo add-apt-repository \
   "deb [arch=amd64] https://mirrors.ustc.edu.cn/docker-ce/linux/ubuntu/ \
  $(lsb_release -cs) \
  stable"
# List the versions available in the repository
sudo apt-cache madison docker-ce
# Install without specifying a version
sudo apt-get install docker-ce docker-ce-cli containerd.io
# Install a specific version
sudo apt-get install docker-ce=5:20.10.7~3-0~ubuntu-bionic docker-ce-cli=5:20.10.7~3-0~ubuntu-bionic containerd.io

4.2 Configure a Docker registry mirror

https://cr.console.aliyun.com/cn-hangzhou/instances/mirrors?accounttraceid=3767de09dfb046488df1eb0b8c456adcfief

# My mirror accelerator address
https://a981dk3d.mirror.aliyuncs.com

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://a981dk3d.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker

Check that the configuration took effect:

sudo docker info
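To check just the mirror setting (the exact label in the docker info output may vary slightly between Docker versions):

sudo docker info | grep -A 1 "Registry Mirrors"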

4.3 Add a regular user to the docker group (requires logging out and back in)

# Create the docker group
sudo groupadd docker
# Add the user to the docker group
sudo usermod -aG docker <username>
# Restart docker
sudo service docker restart
# Verify: run docker ps as the user, without sudo

4.4 Enable Docker to start at boot

sudo systemctl enable docker

4.5 Install docker-compose

See https://github.com/docker/compose/tags for available versions

  • GitHub release:
sudo curl -L https://github.com/docker/compose/releases/download/1.29.2/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
  • DaoCloud mirror:
# Download the docker-compose binary
sudo curl -L https://get.daocloud.io/docker/compose/releases/download/1.29.2/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose

# Add execute permission; without it, running docker-compose reports insufficient permissions.
sudo chmod +x /usr/local/bin/docker-compose

# Check the docker-compose version
sudo docker-compose --version

4.6 Other tools

# Install bridge network utilities
sudo apt install bridge-utils
# After installing, brctl can be used to inspect the Docker bridge networks
brctl show

4.7 Allow insecure (private) registries

Section 8.3 can be done here instead, to avoid having to restart Docker again after k8s is already installed.

5. Install k8s

5.1 Pre-install checks

1. Firewall

sudo ufw status

2. Disable swap

sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
sudo swapoff -a
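To verify that swap is really off (the Swap line in free should show 0 and swapon should print nothing):

free -h
swapon --show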

3. selinux

Not installed, so nothing to disable.

5.2 Install the k8s packages

# Add the signing key
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -

# Set up the repository
sudo add-apt-repository \
"deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main"

# Refresh the package index (add-apt-repository already triggers an update, so this can be skipped)
# sudo apt update

# List available versions
sudo apt-cache madison kubelet

# Install a specific version
sudo apt-get install -y kubelet=1.18.17-00 kubeadm=1.18.17-00 kubectl=1.18.17-00 --allow-unauthenticated

5.3 Prepare the k8s init configuration

# Export the default configuration
kubeadm config print init-defaults > init.default.yaml

# Change the master node IP
Set advertiseAddress to your own master node IP

# Change the image repository to a domestic mirror, e.g.
daocloud.io/daocloud

# Adjust the pod network configuration
podSubnet: 192.168.0.0/16 ----> do not quote the value (the screenshot in the original is wrong)
serviceSubnet: 10.96.0.0/12 ----> add the podSubnet line above this existing line
# Different network plugins use different pod CIDRs; calico uses 192.168.0.0/16
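For orientation, the relevant fragment of init.default.yaml after these edits looks roughly like this (the address and repository are the example values used in this document; adjust them for your own environment):

localAPIEndpoint:
  advertiseAddress: 172.23.3.188
...
imageRepository: daocloud.io/daocloud
networking:
  podSubnet: 192.168.0.0/16
  serviceSubnet: 10.96.0.0/12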

5.4 Pull the images

# List the required images
kubeadm config images list --config init.default.yaml
# Pull the images
kubeadm config images pull --config init.default.yaml

5.5 Deploy the master node

sudo kubeadm init --config=init.default.yaml

# On success you will see output like:
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.23.3.188:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:bf173d547377c04bcb45df56e68e42524ef3340c119982df1979b3ac219ef3ec
  • Configure the user kubeconfig
# Configure the user kubeconfig (on the master node)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
  • Remove the taint
# Remove the master node taint so pods can be scheduled on it
kubectl taint nodes --all node-role.kubernetes.io/master-
  • Join the other nodes (this step can also be done after deploying calico)
# Test environment
sudo kubeadm join 172.23.3.188:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:bf173d547377c04bcb45df56e68e42524ef3340c119982df1979b3ac219ef3ec
# Development environment
kubeadm join 172.23.3.168:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:dc27b5c1f27f82d50c7b8ea77481272a7300e67a40af6a4a9902b5a2ddfcf388
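If the token printed by kubeadm init has expired by the time a node joins (tokens are valid for 24 hours by default), a fresh join command can be generated on the master:

kubeadm token create --print-join-command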

5.6 Deploy calico

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
# After Docker finishes pulling the images, check whether the pods have started
kubectl get pod -n kube-system
# Check whether the master node is Ready
kubectl get node

When everything has started successfully, the kube-system pods are all Running and the node shows Ready.

If the cluster does not come up, inspect the pod status, for example:

kubectl describe pods -n kube-system kube-proxy-9r7tf
# In this case the events showed that a very large image was still being pulled

6. Install Rancher

6.1 Install via Docker

sudo docker run -d --privileged --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  rancher/rancher:latest
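To confirm the Rancher container started and to follow its startup log (replace the container ID placeholder with the real one shown by docker ps):

sudo docker ps --filter ancestor=rancher/rancher:latest
sudo docker logs -f <container-id>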

6.2 Setup

Open the web UI and set the admin password (admin/admin).

6.3 Import the existing k8s cluster

curl --insecure -sfL https://172.23.3.186/v3/import/kbcbvrmrjmcd6s8j74vljcrsx4jqvkl9k6kgfxjrw77czxfz8x9d99_c-wvnhj.yaml | kubectl apply -f - 

Check the status:

kubectl get pod -n cattle-system
kubectl describe pods -n cattle-system cattle-cluster-agent-65d9b67dc-2pc59

6.4 K8s health check reports controller-manager Unhealthy

Comment out the --port=0 line in kube-controller-manager.yaml and kube-scheduler.yaml under /etc/kubernetes/manifests.

No manual restart is needed; the change takes effect immediately (the kubelet recreates static pods when their manifests change).

7. Install Harbor

https://goharbor.io/docs/2.0.0/install-config/download-installer/

https://github.com/goharbor/harbor/releases

https://blog.csdn.net/zyl290760647/article/details/83752877

7.1 Download the installer

Download the package, upload it to the server, and extract it:

# Extract
tar -xzvf harbor-offline-installer-v2.3.0.tgz

Rename the harbor.yml.tmpl file inside the harbor directory:

mv harbor.yml.tmpl harbor.yml

In harbor.yml, set hostname to this machine's IP address and comment out the entire https section, then run the installer:

sudo ./install.sh
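After install.sh finishes, Harbor runs as a set of docker-compose services; a quick sanity check from inside the harbor directory:

sudo docker-compose ps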

7.2 Log in

http://172.23.3.161
admin/Harbor12345

7.3 Create projects

Create the 7 projects.

7.4 Migrate the images

The migration is done on the 172.23.3.105 machine; Docker's daemon configuration must be modified first.

Add both the source registry and the destination registry to daemon.json as insecure registries.

# cat /etc/docker/daemon.json
{
        "registry-mirrors": ["https://a981dk3d.mirror.aliyuncs.com"],
        "insecure-registries": ["172.23.3.107", "172.23.3.161"]
}

sudo systemctl daemon-reload
sudo systemctl restart docker
# Log in
docker login 172.23.3.161
# Pull
docker pull 172.23.3.107/baas/dev-server:0.0.4
docker pull 172.23.3.107/baas-fabric/fabric-explorer:v1.0.0
docker pull 172.23.3.107/baas-fabric/fabric-rest:v1.0.0.14-mj2
docker pull 172.23.3.107/grafana/grafana:4.6.3
docker pull 172.23.3.107/hyperledger/fabric-kafka:0.4.14
docker pull 172.23.3.107/hyperledger/fabric-zookeeper:0.4.14
docker pull 172.23.3.107/hyperledger/fabric-baseos:0.4.14
docker pull 172.23.3.107/hyperledger/fabric-ccenv:1.4
docker pull 172.23.3.107/hyperledger/fabric-ca:1.4
docker pull 172.23.3.107/hyperledger/fabric-peer:1.4
docker pull 172.23.3.107/hyperledger/fabric-orderer:1.4
docker pull 172.23.3.107/prom/alertmanager:v0.14.0
docker pull 172.23.3.107/prom/prometheus:v2.1.0
docker pull 172.23.3.107/prom/node-exporter:v0.15.2
docker pull 172.23.3.107/timonwong/prometheus-webhook-dingtalk:v0.3.0


docker tag 172.23.3.107/baas/dev-server:0.0.4  172.23.3.161/baas/dev-server:0.0.4
docker tag 172.23.3.107/baas-fabric/fabric-explorer:v1.0.0 172.23.3.161/baas-fabric/fabric-explorer:v1.0.0
docker tag 172.23.3.107/baas-fabric/fabric-rest:v1.0.0.14-mj2 172.23.3.161/baas-fabric/fabric-rest:v1.0.0.14-mj2
docker tag 172.23.3.107/grafana/grafana:4.6.3 172.23.3.161/grafana/grafana:4.6.3
docker tag 172.23.3.107/hyperledger/fabric-kafka:0.4.14 172.23.3.161/hyperledger/fabric-kafka:0.4.14 
docker tag 172.23.3.107/hyperledger/fabric-zookeeper:0.4.14 172.23.3.161/hyperledger/fabric-zookeeper:0.4.14
docker tag 172.23.3.107/hyperledger/fabric-baseos:0.4.14 172.23.3.161/hyperledger/fabric-baseos:0.4.14
docker tag 172.23.3.107/hyperledger/fabric-ccenv:1.4 172.23.3.161/hyperledger/fabric-ccenv:1.4
docker tag 172.23.3.107/hyperledger/fabric-ca:1.4 172.23.3.161/hyperledger/fabric-ca:1.4
docker tag 172.23.3.107/hyperledger/fabric-peer:1.4 172.23.3.161/hyperledger/fabric-peer:1.4
docker tag 172.23.3.107/hyperledger/fabric-orderer:1.4 172.23.3.161/hyperledger/fabric-orderer:1.4
docker tag 172.23.3.107/prom/alertmanager:v0.14.0 172.23.3.161/prom/alertmanager:v0.14.0
docker tag 172.23.3.107/prom/prometheus:v2.1.0 172.23.3.161/prom/prometheus:v2.1.0
docker tag 172.23.3.107/prom/node-exporter:v0.15.2 172.23.3.161/prom/node-exporter:v0.15.2
docker tag 172.23.3.107/timonwong/prometheus-webhook-dingtalk:v0.3.0 172.23.3.161/timonwong/prometheus-webhook-dingtalk:v0.3.0


docker push 172.23.3.161/baas/dev-server:0.0.4
docker push 172.23.3.161/baas-fabric/fabric-explorer:v1.0.0
docker push 172.23.3.161/baas-fabric/fabric-rest:v1.0.0.14-mj2
docker push 172.23.3.161/grafana/grafana:4.6.3
docker push 172.23.3.161/hyperledger/fabric-kafka:0.4.14 
docker push 172.23.3.161/hyperledger/fabric-zookeeper:0.4.14
docker push 172.23.3.161/hyperledger/fabric-baseos:0.4.14
docker push 172.23.3.161/hyperledger/fabric-ccenv:1.4
docker push 172.23.3.161/hyperledger/fabric-ca:1.4
docker push 172.23.3.161/hyperledger/fabric-peer:1.4
docker push 172.23.3.161/hyperledger/fabric-orderer:1.4
docker push 172.23.3.161/prom/alertmanager:v0.14.0
docker push 172.23.3.161/prom/prometheus:v2.1.0
docker push 172.23.3.161/prom/node-exporter:v0.15.2
docker push 172.23.3.161/timonwong/prometheus-webhook-dingtalk:v0.3.0
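Instead of repeating the pull/tag/push triplet for every image, the migration can be scripted with a small loop; a sketch assuming the same source (172.23.3.107) and destination (172.23.3.161) registries as above:

#!/bin/bash
# Mirror a list of images from the old registry to the new Harbor registry
SRC=172.23.3.107
DST=172.23.3.161
IMAGES="baas/dev-server:0.0.4 grafana/grafana:4.6.3 hyperledger/fabric-peer:1.4"   # extend with the full list above
for img in $IMAGES; do
    docker pull "$SRC/$img"
    docker tag  "$SRC/$img" "$DST/$img"
    docker push "$DST/$img"
done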

8. Prometheus

8.1 Modify the configuration files

Change the image addresses

# Change the image address in each yaml file to match the registry, i.e. of the form registry.paas/cmss/<image-name>:<tag>
# There are 5 places to change

Change the hostname

kubectl get pods -o wide --namespace=kube-system | grep dns

# In the prometheus and alertmanager yaml files, set the hostname field to the node name of the DNS pod above, e.g. node189
# There are 2 places to change

Change web.external-url

# In the prometheus yaml file, change --web.external-url so that the IP address is that of the node where prometheus is deployed; the port stays the same

8.2 Upload to the server and extract

mkdir prometheus_yaml
unzip -j xxx.zip -d prometheus_yaml/

8.3 Allow the private registry in Docker

# cat /etc/docker/daemon.json
{
        "registry-mirrors": ["https://a981dk3d.mirror.aliyuncs.com"],
        "insecure-registries": ["172.23.3.161"]
}

sudo systemctl daemon-reload
sudo systemctl restart docker
# This will cause the whole k8s cluster to restart; wait patiently...

8.4 Start Prometheus

sudo kubectl create ns baas-mon
ls | awk '{print "sudo kubectl apply -f "$1" --namespace=baas-mon"}' | sh
# sudo is optional here

8.5 Grant permissions

This step is important; without it, the K8s nodes will not show up in section 8.6.

# Check whether the pods have finished starting
kubectl get pods -o wide --namespace=baas-mon
# Inspect a specific pod
kubectl describe pods -n baas-mon alertmanager-77dc6858cc-d88qb
# After the deployment is complete, grant the baas-mon ServiceAccount cluster permissions
kubectl create clusterrolebinding prometheus-binding --clusterrole=cluster-admin --user=system:serviceaccount:baas-mon:default

8.6 Confirm Prometheus is working

Open http://172.23.3.188:31016/ and check the targets page.

9. Cluster information

9.1 Get k8s information from Rancher

9.2 Get the cluster IP

# Query the cluster DNS service IP
kubectl -n kube-system get svc -l k8s-app=kube-dns

ip:https://172.23.3.186/k8s/clusters/c-wvnhj
token:kubeconfig-user-5xs8t:s9vn9lw5tv2cj4nwfvb2dqfxzwn87kj488rqp6lpmqwc4sndpl2gqk
DNS:10.96.0.10
Harbor:172.23.3.161
NFS:172.23.3.161
NFS directory: /home/secu/nfs
prometheus:172.23.3.188:31016

10. Install NFS

# On the server
sudo apt install nfs-kernel-server
# On the client
sudo apt install nfs-common
# On the server
sudo vim /etc/exports
# Add the following line
/home/secu/nfs *(rw,sync,no_root_squash)
# (the /home/secu/nfs directory must be created first)
# Start/restart the NFS server
sudo /etc/init.d/nfs-kernel-server restart
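Before mounting from a client, the export can be reloaded and checked (showmount is provided by the nfs-common package installed on the client):

# On the server: re-export and list the active exports
sudo exportfs -ra
sudo exportfs -v
# On a client: check that the export is visible
showmount -e 172.23.3.161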

# On the client:
cd ~
sudo mount -t nfs -o nolock 172.23.3.161:/home/secu/ .
# Note: this mount command is wrong; it leaves the whole /home/secu directory readable but not writable

## The correct commands are:
cd ~
mkdir nfs
sudo mount -t nfs -o nolock 172.23.3.161:/home/secu/nfs /home/secu/nfs
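Optionally, to make the mount survive reboots, an /etc/fstab entry can be added (a sketch; verify it with sudo mount -a before relying on it):

echo "172.23.3.161:/home/secu/nfs /home/secu/nfs nfs nolock 0 0" | sudo tee -a /etc/fstab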

######## Pitfall ##########

On the 4 machines 172.23.3.187 - 172.23.3.190, the home directory /home/secu/ had all become the remote NFS directory.

Workaround:

sudo mkdir /home/data
sudo chown -R secu:secu /home/data

Final fix:

# The good old reboot
sudo reboot

Other notes

# On 7.13 the 3 k8s machines were rebooted; commands to restore the k8s cluster after the reboot:
secu@node188:~$ sudo systemctl daemon-reload
secu@node188:~$ sudo systemctl start kubelet
# No other commands are needed; wait patiently for the cluster to recover


# On why kubectl gives different results with and without sudo
https://www.it1352.com/1534977.html
By default, kubectl looks in ~/.kube/config (or the file pointed to by $KUBECONFIG) to determine what server to connect to. Your home directory and environment are different when running commands as root. When no connection info is found, kubectl defaults to localhost:8080.
