Setting up a Kubernetes v1.9.0 cluster with kubeadm
Goal: set up a Kubernetes v1.9.0 cluster with kubeadm.
Operating system: Ubuntu 16.04.3
Ubuntu-001 (master): 192.168.1.110
Ubuntu-002 (node): 192.168.1.106
Summary of steps:
1. Install Docker CE
2. Install kubeadm, kubectl, and kubelet
3. Initialize the Kubernetes cluster with kubeadm init
4. Join worker nodes to the cluster with kubeadm join
Detailed steps:
Install Docker CE on Ubuntu 16.04 (via apt-get)
# Step 1: install required system tools
sudo apt-get update
sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common
# Step 2: add the GPG key
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
# Step 3: add the package repository
sudo add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
# Step 4: update the package index and install Docker CE
sudo apt-get -y update
sudo apt-get -y install docker-ce
# To install a specific version of Docker CE instead:
# Step 1: list the available Docker CE versions:
# apt-cache madison docker-ce
# docker-ce | 17.03.1~ce-0~ubuntu-xenial | http://mirrors.aliyun.com/docker-ce/linux/ubuntu xenial/stable amd64 Packages
# docker-ce | 17.03.0~ce-0~ubuntu-xenial | http://mirrors.aliyun.com/docker-ce/linux/ubuntu xenial/stable amd64 Packages
# Step 2: install the chosen version (VERSION is e.g. 17.03.1~ce-0~ubuntu-xenial from the list above):
# sudo apt-get -y install docker-ce=[VERSION]
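Before moving on, it is worth a quick sanity check that the daemon is up, and noting the server version: kubeadm v1.9 only validates Docker up to 17.03 and prints a warning for anything newer (the init output below shows this for 17.12.0-ce).
sudo docker version --format 'Server: {{.Server.Version}}'   # print just the daemon version
sudo docker run --rm hello-world                             # confirm the daemon can pull and run a container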
Install kubelet, kubeadm, and kubectl
Since Google is blocked in mainland China, the official instructions cannot be followed directly. Adding the Aliyun mirror below lets you install kubelet, kubeadm, and kubectl successfully.
# Step 1: install required system tools
apt-get update && apt-get install -y apt-transport-https
# Step 2: add the GPG key
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
# Step 3: add the package repository
cat << EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
# Step 4: update the package index and install kubelet, kubeadm, and kubectl
apt-get update
apt-get install -y kubelet kubeadm kubectl
# Or pin specific versions of kubelet, kubeadm, and kubectl:
apt-get install -y kubelet=1.9.6-00 kubeadm=1.9.6-00 kubectl=1.9.6-00
# Step 5: enable kubelet at boot and start it now
systemctl enable kubelet && systemctl start kubelet
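One extra step worth taking here: hold the three packages so that a routine apt-get upgrade cannot silently move the cluster to a newer Kubernetes version.
apt-mark hold kubelet kubeadm kubectl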
Initialize the Kubernetes cluster with kubeadm
If you are inside mainland China, you need to obtain the Kubernetes images in advance; see: how to obtain the Kubernetes images from within China?
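The usual workaround is to pull each image from a reachable mirror and re-tag it with the gcr.io name kubeadm v1.9 expects. A minimal sketch follows; the mirror repository and the exact image list are assumptions here, so substitute whatever your chosen mirror actually provides:
# Assumed mirror repository; replace with the one you actually use.
MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers
for img in kube-apiserver-amd64:v1.9.0 kube-controller-manager-amd64:v1.9.0 \
           kube-scheduler-amd64:v1.9.0 kube-proxy-amd64:v1.9.0 \
           etcd-amd64:3.1.10 pause-amd64:3.0; do
    docker pull $MIRROR/$img                                # fetch via the reachable mirror
    docker tag $MIRROR/$img gcr.io/google_containers/$img   # re-tag so kubeadm finds it locally
done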
root@Ubuntu-001:~# kubeadm init --kubernetes-version=v1.9.0 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.9.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.12.0-ce. Max validated version: 17.03
[WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [ubuntu-001 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.110]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 38.006067 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node ubuntu-001 as master by adding a label and a taint
[markmaster] Master ubuntu-001 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 3ef896.6fe4c166c546aa89
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join --token 3ef896.6fe4c166c546aa89 192.168.1.110:6443 --discovery-token-ca-cert-hash sha256:af25f24109d0c2fba55c7a126b83e3fce39d196a3d0a34c5ac0e14b06593e868
At this point the master node has been created.
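A detail worth knowing here: bootstrap tokens expire after 24 hours by default in v1.9, so the token printed by kubeadm init may no longer be valid by the time you join a node; that is why the kubeadm join further below uses a different token than the init output above. On the master you can inspect and mint tokens:
kubeadm token list     # show existing tokens and their TTLs
kubeadm token create   # mint a fresh token to use with kubeadm join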
Common errors:
1. Port 2379 already in use
[preflight] Some fatal errors occurred:
[ERROR Port-2379]: Port 2379 is in use
Fix: run netstat -anp | grep 2379 to find the process holding the port. 2379 is etcd's port, so this is usually a leftover from an earlier kubeadm init run; kill that process.
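For example (<PID> is a placeholder for whatever netstat reports on your machine):
netstat -anp | grep 2379     # identify the owner, typically a stale etcd
kill <PID>                   # then re-run kubeadm init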
2. Swap is still enabled
[ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
Fix: run swapoff -a.
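Note that swapoff -a only lasts until the next reboot. To keep swap disabled permanently, also comment out the swap entry in /etc/fstab; a sketch, assuming the entry has " swap " as a field:
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab     # comment out the swap line so it stays off after reboot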
3. Other errors:
https://kubernetes.io/docs/setup/independent/troubleshooting-kubeadm/
Next, set up kubectl access as instructed in the kubeadm init output above.
For a non-root user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
For root:
export KUBECONFIG=/etc/kubernetes/admin.conf
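To make this persist across shell sessions, you can append it to root's profile (a convenience, not something kubeadm requires):
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bashrc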
For pods to be able to communicate with each other, you need to install a pod network add-on. This post uses flannel, whose default pod network is 10.244.0.0/16; that is why kubeadm init was run with --pod-network-cidr=10.244.0.0/16:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
Once the pod network is installed, you can check on all pods with:
kubectl get pods --all-namespaces -o wide
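kube-dns stays Pending until the pod network is working, so it is a handy readiness indicator:
kubectl get pods -n kube-system -w      # Ctrl-C once kube-dns and the flannel pods show Running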
Join the Ubuntu-002 node to the cluster with kubeadm join
Ubuntu-002 already has Docker, kubeadm, kubectl, and kubelet installed, and has the Kubernetes images pulled locally.
Using the kubeadm join command printed at the end of kubeadm init, join Ubuntu-002 to the cluster as a worker node.
root@Ubuntu-002:~# kubeadm join --token 6aefa6.a55aba3998eda615 192.168.1.110:6443 --discovery-token-ca-cert-hash sha256:87c51fa417666a61195d7540c965a164f1e504fe0339fc7c107e36b0b26e31a7
[preflight] Running pre-flight checks.
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.12.0-ce. Max validated version: 17.03
[WARNING FileExisting-crictl]: crictl not found in system path
[discovery] Trying to connect to API Server "192.168.1.110:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.110:6443"
[discovery] Requesting info from "https://192.168.1.110:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.1.110:6443"
[discovery] Successfully established connection with API Server "192.168.1.110:6443"
This node has joined the cluster:
* Certificate signing request was sent to master and a response
was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
Node ubuntu-002 has now joined the cluster; further nodes can be added the same way. Run kubectl get nodes on the master to confirm.
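The output should look roughly like this (a sketch; ages and patch versions will differ on your machines):
root@Ubuntu-001:~# kubectl get nodes
NAME         STATUS    ROLES     AGE       VERSION
ubuntu-001   Ready     master    1h        v1.9.0
ubuntu-002   Ready     <none>    5m        v1.9.0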
Of course, if you want to run kubectl on a machine other than the master (a worker node, or a remote host outside the cluster), you first need to copy admin.conf over from the master (<master-ip> below is a placeholder for the master's address):
scp root@<master-ip>:/etc/kubernetes/admin.conf .
kubectl --kubeconfig ./admin.conf get nodes
Without this kubeconfig, kubectl has no credentials for the cluster and the command will fail.
Removing node ubuntu-002 from the cluster:
1. kubectl drain ubuntu-002 --delete-local-data --force --ignore-daemonsets --kubeconfig ./admin.conf
2. kubectl delete node ubuntu-002 --kubeconfig admin.conf
3. On ubuntu-002 itself: kubeadm reset