Deploying a k8s Environment on CentOS 7.2 with kubeadm 1.15.3
Posted fb010001
2019.09.11 fb
Environment preparation
Prepare three nodes: one master and two worker nodes.
- 192.168.122.193 master
- 192.168.122.194 node01
- 192.168.122.195 node02
Set the hostname
- Check the current hostname with hostname
- Edit /etc/sysconfig/network and set:
  NETWORKING=yes
  HOSTNAME=<hostname>
Configure DNS
1. Edit /etc/sysconfig/network-scripts/ifcfg-eth0 and add:
   NM_CONTROLLED=no
   Then restart the network service: systemctl restart network
2. Keep NetworkManager from overwriting the resolver configuration. Edit /etc/NetworkManager/NetworkManager.conf and add under [main]:
   dns=no
   Edit /etc/resolv.conf and add:
   # primary DNS server (Alibaba public DNS)
   nameserver 223.5.5.5
   # secondary DNS server
   nameserver 114.114.114.114
   Then restart NetworkManager: systemctl restart NetworkManager
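The resolver edit above is easy to script. The sketch below writes to a temp file so it can run anywhere; on a real node you would point RESOLV at /etc/resolv.conf instead.

```shell
# Sketch of the resolv.conf edit above. Writes to a temp file for safe
# illustration; on the node, set RESOLV=/etc/resolv.conf instead.
RESOLV=$(mktemp)
{
  echo "nameserver 223.5.5.5"        # primary: Alibaba public DNS
  echo "nameserver 114.114.114.114"  # secondary
} >> "$RESOLV"
cat "$RESOLV"
```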
Configure the Aliyun yum repository
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
Synchronize time on all nodes via NTP
- Install the software:
  yum install chrony
- Start time synchronization:
  systemctl start chronyd
  systemctl enable chronyd
- Check the synchronization sources:
  chronyc sources -v
  chronyc sourcestats -v
Configure /etc/hosts for name resolution
[root@master ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.122.193 master
192.168.122.194 node01
192.168.122.195 node02
[root@master ~]#
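The hosts entries above can be appended idempotently with a small helper (hypothetical helper name; the sketch operates on a temp file so it is runnable anywhere, while on a real node HOSTS_FILE would be /etc/hosts).

```shell
# Idempotent append of the cluster's hosts entries. Uses a temp file for
# illustration; on the node set HOSTS_FILE=/etc/hosts.
HOSTS_FILE=$(mktemp)
add_host_entry() {
  local ip=$1 name=$2
  # append only if the hostname is not already present
  grep -qw "$name" "$HOSTS_FILE" || echo "$ip $name" >> "$HOSTS_FILE"
}
add_host_entry 192.168.122.193 master
add_host_entry 192.168.122.194 node01
add_host_entry 192.168.122.195 node02
add_host_entry 192.168.122.193 master   # duplicate call adds nothing
cat "$HOSTS_FILE"
```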
Disable the firewall
[root@master ~]# systemctl stop firewalld
[root@master ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
[root@master ~]#
Disable SELinux
Edit the configuration:
vi /etc/selinux/config
#SELINUX=enforcing
SELINUX=disabled
SELINUXTYPE=targeted
Reboot:
reboot
Check the status:
[root@master ~]# getenforce
Disabled
[root@master ~]#
Disable swap
swapoff -a
Remove the swap entry so it stays off after reboot:
sudo vim /etc/fstab and comment out the swap line
reboot
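The fstab edit can also be done non-interactively with sed instead of vim. The sketch below runs against a temp copy with sample content; on the node the target file is /etc/fstab.

```shell
# Comment out any uncommented fstab line that mounts swap.
# Demonstrated on a temp copy; on a real node target /etc/fstab.
FSTAB=$(mktemp)
cat > "$FSTAB" <<'EOF'
/dev/mapper/centos-root /     xfs  defaults 0 0
/dev/mapper/centos-swap swap  swap defaults 0 0
EOF
sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' "$FSTAB"
cat "$FSTAB"
```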
Set up passwordless SSH between nodes
ssh-keygen -t rsa
cd /root/.ssh
Append each node's .pub file content to the authorized_keys file, then distribute the authorized_keys file to /root/.ssh on every node.
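The key-aggregation step above can be sketched locally (no network): generate one keypair per node, concatenate the public keys, and the resulting file is what gets distributed to /root/.ssh on each node (or use ssh-copy-id per pair instead).

```shell
# Local sketch of building authorized_keys from each node's public key.
# Keypairs are generated in a temp dir purely for illustration.
WORKDIR=$(mktemp -d)
for node in master node01 node02; do
  ssh-keygen -t rsa -N '' -f "$WORKDIR/id_rsa_$node" -q   # one keypair per node
done
cat "$WORKDIR"/*.pub > "$WORKDIR/authorized_keys"   # aggregate all public keys
chmod 600 "$WORKDIR/authorized_keys"
wc -l < "$WORKDIR/authorized_keys"
```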
Install Docker
Configure the yum repository:
cd /etc/yum.repos.d
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Install docker-ce:
yum install docker-ce -y
Configure an Aliyun registry mirror for Docker
In the Aliyun console, open Container Registry and then the accelerator page. Reference: https://www.cnblogs.com/zhxshseu/p/5970a5a763c8fe2b01cd2eb63a8622b2.html
Edit /usr/lib/systemd/system/docker.service:
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
# Environment="HTTPS_PROXY=http://www.ik8s.io:10070"     <-- added: proxy setting
# Environment="NO_PROXY=127.0.0.0/8,192.168.122.0/24"    <-- added: no-proxy list
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT       <-- added: firewall policy
vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
sysctl -p /etc/sysctl.d/k8s.conf
Enable Docker at boot
systemctl enable docker
Install kubeadm
Configure the yum repository:
cd /etc/yum.repos.d
vim kubernetes.repo
[kubernetes]
name=Kubernetes Repository
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
Verify the configuration:
yum repolist
Expected output:
[root@master yum.repos.d]# yum repolist
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
repo id                   repo name                                   status
base/7/x86_64             CentOS-7 - Base - mirrors.aliyun.com        10,019
docker-ce-stable/x86_64   Docker CE Stable - x86_64                   56
extras/7/x86_64           CentOS-7 - Extras - mirrors.aliyun.com      435
updates/7/x86_64          CentOS-7 - Updates - mirrors.aliyun.com     2,500
Install the packages (master node)
yum install kubeadm kubectl kubelet
Initialize the cluster
Initialize the master
Handling the case where swap is still enabled:
vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
Set up the master
View the default initialization parameters:
kubeadm config print init-defaults
[root@master ~]# kubeadm config print init-defaults
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io        <-- default image repository, note this!
kind: ClusterConfiguration
kubernetesVersion: v1.15.0         <-- version, note this!
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
[root@master ~]#
Dry-run test
kubeadm init --kubernetes-version="1.15.3" --pod-network-cidr="10.244.0.0/16" --dry-run
Pre-pull the images on every node
List the required images:
[root@master fb]# kubeadm config images list
W0913 11:06:24.804608 6687 version.go:98] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W0913 11:06:24.804748 6687 version.go:99] falling back to the local client version: v1.15.3
k8s.gcr.io/kube-apiserver:v1.15.3
k8s.gcr.io/kube-controller-manager:v1.15.3
k8s.gcr.io/kube-scheduler:v1.15.3
k8s.gcr.io/kube-proxy:v1.15.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
[root@master fb]#
[root@node-2 opt]# cat k8s-image-download.sh
#!/bin/bash
# liyongjian5179@163.com
# download k8s 1.15.3 images
# get image-list by 'kubeadm config images list --kubernetes-version=v1.15.3'
# gcr.azk8s.cn/google-containers == k8s.gcr.io
if [ $# -ne 1 ];then
echo "usage: ./`basename $0` KUBERNETES-VERSION"
exit 1
fi
version=$1
images=`kubeadm config images list --kubernetes-version=$version | awk -F'/' '{print $2}'`
for imageName in ${images[@]}; do
docker pull gcr.azk8s.cn/google-containers/$imageName
docker tag gcr.azk8s.cn/google-containers/$imageName k8s.gcr.io/$imageName
docker rmi gcr.azk8s.cn/google-containers/$imageName
done
[root@node-2 opt]# ./k8s-image-download.sh 1.15.3
Reference: https://www.cnblogs.com/liyongjian5179/p/11417794.html
After downloading, list the images:
[root@master fb]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-proxy v1.15.3 232b5c793146 3 weeks ago 82.4MB
k8s.gcr.io/kube-apiserver v1.15.3 5eb2d3fc7a44 3 weeks ago 207MB
k8s.gcr.io/kube-scheduler v1.15.3 703f9c69a5d5 3 weeks ago 81.1MB
k8s.gcr.io/kube-controller-manager v1.15.3 e77c31de5547 3 weeks ago 159MB
k8s.gcr.io/coredns 1.3.1 eb516548c180 8 months ago 40.3MB
k8s.gcr.io/etcd 3.3.10 2c4adeb21b4f 9 months ago 258MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 21 months ago 742kB
[root@master fb]#
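Before running kubeadm init it is worth confirming that nothing is still missing. A sketch of that check, diffing the list kubeadm wants against what Docker already has; the two lists are inlined here so the example runs anywhere, while on a node they would come from `kubeadm config images list` and `docker images --format '{{.Repository}}:{{.Tag}}'`.

```shell
# Compare required vs. present image lists and report anything missing.
# Inlined sample data stands in for the kubeadm and docker command output.
req=$(mktemp); have=$(mktemp)
printf '%s\n' \
  "k8s.gcr.io/kube-apiserver:v1.15.3" \
  "k8s.gcr.io/kube-proxy:v1.15.3" \
  "k8s.gcr.io/pause:3.1" | sort > "$req"
printf '%s\n' \
  "k8s.gcr.io/kube-proxy:v1.15.3" \
  "k8s.gcr.io/pause:3.1" | sort > "$have"
missing=$(comm -23 "$req" "$have")   # lines only in the required list
echo "missing: $missing"
```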
Deploy the master node
kubeadm init --kubernetes-version="1.15.3" --pod-network-cidr="10.244.0.0/16"
Initialization complete
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.122.193:6443 --token hasvlr.9puhkyn3pzbhqfj7 --discovery-token-ca-cert-hash sha256:5d1d4cb9b84debc570757193298fd5c89ce9e0bbbdf9f397de81ce1d671db0bb
[root@master fb]#
Create the kubeconfig
cd
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Install the flannel network add-on
Before flannel is installed, the node is NotReady:
[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES    AGE    VERSION
master   NotReady   master   5m1s   v1.15.3
Install flannel:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
After flannel is installed, the node status changes to Ready:
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   13m   v1.15.3
[root@master ~]#
Initialize the worker nodes
Install the packages:
yum install kubeadm kubectl
Join the cluster:
kubeadm join 192.168.122.193:6443 --token hasvlr.9puhkyn3pzbhqfj7 --discovery-token-ca-cert-hash sha256:5d1d4cb9b84debc570757193298fd5c89ce9e0bbbdf9f397de81ce1d671db0bb
[root@node01 ~]# kubeadm join 192.168.122.193:6443 --token hasvlr.9puhkyn3pzbhqfj7 --discovery-token-ca-cert-hash sha256:5d1d4cb9b84debc570757193298fd5c89ce9e0bbbdf9f397de81ce1d671db0bb
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.2. Latest validated version: 18.09
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@node01 ~]#
The images needed on the node were pulled through the gcr.azk8s.cn mirror:
docker pull gcr.azk8s.cn/google-containers/pause:3.1
docker tag gcr.azk8s.cn/google-containers/pause:3.1 k8s.gcr.io/pause:3.1
docker rmi gcr.azk8s.cn/google-containers/pause:3.1
docker pull gcr.azk8s.cn/google-containers/kube-proxy:v1.15.3
docker tag gcr.azk8s.cn/google-containers/kube-proxy:v1.15.3 k8s.gcr.io/kube-proxy:v1.15.3
docker rmi gcr.azk8s.cn/google-containers/kube-proxy:v1.15.3
docker pull quay.io/coreos/flannel:v0.11.0-amd64
Note
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.2. Latest validated version: 18.09
The Docker version validated for this release is 18.09; the installed version is 19.03.2.
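The join token printed by kubeadm init expires after 24 h (ttl: 24h0m0s in the init-defaults output above). On the master a fresh join command can be printed with `kubeadm token create --print-join-command`, and the sha256 discovery hash can be recomputed from the CA certificate with openssl. The sketch below generates a throwaway self-signed certificate so it is runnable anywhere; on the master the input would be /etc/kubernetes/pki/ca.crt.

```shell
# Recompute a kubeadm-style discovery hash: sha256 over the DER-encoded
# public key of the CA certificate. A temporary self-signed cert stands in
# for /etc/kubernetes/pki/ca.crt so the example is self-contained.
CADIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=kubernetes" -days 1 \
  -keyout "$CADIR/ca.key" -out "$CADIR/ca.crt" 2>/dev/null
hash=$(openssl x509 -pubkey -in "$CADIR/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 | awk '{print $NF}')
echo "sha256:$hash"
```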
Deployment complete
Check the result:
[root@master ~]# kubectl -n kube-system get nodes
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   3h48m   v1.15.3
node01   Ready    <none>   49m     v1.15.3
node02   Ready    <none>   22m     v1.15.3
[root@master ~]#
Notes
Images required on the master node:
[root@master fb]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-proxy v1.15.3 232b5c793146 3 weeks ago 82.4MB
k8s.gcr.io/kube-apiserver v1.15.3 5eb2d3fc7a44 3 weeks ago 207MB
k8s.gcr.io/kube-controller-manager v1.15.3 e77c31de5547 3 weeks ago 159MB
k8s.gcr.io/kube-scheduler v1.15.3 703f9c69a5d5 3 weeks ago 81.1MB
quay.io/coreos/flannel v0.11.0-amd64 ff281650a721 7 months ago 52.5MB
k8s.gcr.io/coredns 1.3.1 eb516548c180 8 months ago 40.3MB
k8s.gcr.io/etcd 3.3.10 2c4adeb21b4f 9 months ago 258MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 21 months ago 742kB
[root@master fb]#
Images required on the worker nodes:
[root@node02 home]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-proxy v1.15.3 232b5c793146 3 weeks ago 82.4MB
quay.io/coreos/flannel v0.11.0-amd64 ff281650a721 7 months ago 52.5MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 21 months ago 742kB
[root@node02 home]#