Manual offline deployment of k8s (v1.9)
1. Environment preparation (one master node plus two worker nodes)
master 192.168.2.40
node-1 192.168.2.41
node-2 192.168.2.42
2. Add hosts entries for master, node-1, and node-2 on every server
#vi /etc/hosts
192.168.2.40 master
192.168.2.41 node-1
192.168.2.42 node-2
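To avoid editing each file by hand, the same hosts file can be pushed out from the master (a sketch, assuming you can already reach node-1 and node-2 over SSH, for example after the key setup in the next step):
#for h in node-1 node-2; do scp /etc/hosts ${h}:/etc/hosts; done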
3. Set up passwordless (key-based) SSH login from the master to the nodes
[root@master ~]# ssh-keygen
[root@master ~]# ssh-copy-id node-1
[root@master ~]# ssh-copy-id node-2
4. Disable the firewall and SELinux on all servers
#systemctl stop firewalld.service
#systemctl disable firewalld.service
#sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
#grep SELINUX=disabled /etc/selinux/config
#setenforce 0
5. Disable swap on all servers
# swapoff -a && sed -i '/swap/d' /etc/fstab
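A quick check (not part of the original steps) confirms swap is really gone:
#free -m | grep -i swap    # the Swap line should read 0 total / 0 used
#swapon -s                 # prints nothing when no swap device is active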
6. Configure the bridge/routing kernel parameters on all servers so kubeadm does not emit routing warnings
#echo -e "net.bridge.bridge-nf-call-ip6tables = 1\nnet.bridge.bridge-nf-call-iptables = 1\nvm.swappiness = 0" >> /etc/sysctl.conf
#sysctl -p
Note: if sysctl -p fails because the bridge module is not loaded yet:
[root@master soft]# sysctl -p
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
[root@master soft]# modprobe bridge
[root@master soft]# lsmod | grep bridge
bridge                119562  0
stp                    12976  1 bridge
llc                    14552  2 stp,bridge
[root@master soft]# sysctl -p
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness = 0
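modprobe does not survive a reboot. To load the module automatically at boot, a systemd modules-load drop-in works (a sketch; the file name is arbitrary, and on some kernels the netfilter hooks live in a separate br_netfilter module, hence both entries):
#echo -e "bridge\nbr_netfilter" > /etc/modules-load.d/k8s-bridge.conf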
7. Operating system: CentOS 7.2
8. Software versions
kubernetes v1.9
docker:17.03
kubeadm:v1.9.0
kube-apiserver:v1.9.0
kube-controller-manager:v1.9.0
kube-scheduler:v1.9.0
k8s-dns-sidecar:1.14.7
k8s-dns-kube-dns:1.14.7
k8s-dns-dnsmasq-nanny:1.14.7
kube-proxy:v1.9.0
etcd:3.1.10
pause: 3.0
flannel:v0.9.1
kubernetes-dashboard:v1.8.1
Note: the cluster is installed with kubeadm, the official Kubernetes bootstrap tool. kubeadm deploys the Kubernetes components as pods on the master and worker nodes and handles certificate generation and related setup automatically.
kubeadm normally pulls its images from Google's registry, which is currently unreachable from mainland China, so the images were downloaded in advance; you only need to load them from the offline bundle onto each node.
1) On all servers, download the bundle into /home/soft
Link: https://pan.baidu.com/s/1eUixGvo  Password: 65yo
2) On all servers, unpack the offline bundle
#yum install -y bzip2
#tar -xjvf k8s_images.tar.bz2
3) On all servers, install docker-ce 17.03 (the newest Docker release supported by kubeadm v1.9)
Install the dependencies:
#yum install -y https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm
#yum install -y ftp://ftp.icm.edu.pl/vol/rzm6/linux-slc/centos/7.1.1503/cr/x86_64/Packages/libseccomp-2.2.1-1.el7.x86_64.rpm
#yum install -y http://rpmfind.net/linux/centos/7.4.1708/os/x86_64/Packages/libtool-ltdl-2.4.2-22.el7_3.x86_64.rpm
#cd k8s_images
#rpm -ihv docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm
#rpm -ivh docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm
Note: switch Docker's registry mirror to the domestic DaoCloud mirror.
# curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://3272dd08.m.daocloud.io
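To verify the mirror took effect (the DaoCloud script normally writes a registry-mirrors entry into /etc/docker/daemon.json; paths may differ across script versions, so treat this as a sketch):
#cat /etc/docker/daemon.json                  # should contain a registry-mirrors entry
#docker info | grep -A1 'Registry Mirrors'    # works once Docker is running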
4) On all servers, start docker-ce
#systemctl start docker.service && systemctl enable docker.service
5) On all servers, load the images
docker load </home/soft/k8s_images/docker_images/k8s-dns-dnsmasq-nanny-amd64_v1.14.7.tar
docker load </home/soft/k8s_images/docker_images/k8s-dns-kube-dns-amd64_1.14.7.tar
docker load </home/soft/k8s_images/docker_images/k8s-dns-sidecar-amd64_1.14.7.tar
docker load </home/soft/k8s_images/docker_images/kube-apiserver-amd64_v1.9.0.tar
docker load </home/soft/k8s_images/docker_images/kube-controller-manager-amd64_v1.9.0.tar
docker load </home/soft/k8s_images/docker_images/kube-scheduler-amd64_v1.9.0.tar
docker load </home/soft/k8s_images/docker_images/flannel:v0.9.1-amd64.tar
docker load </home/soft/k8s_images/docker_images/pause-amd64_3.0.tar
docker load </home/soft/k8s_images/docker_images/kube-proxy-amd64_v1.9.0.tar
docker load </home/soft/k8s_images/kubernetes-dashboard_v1.8.1.tar
docker load </home/soft/k8s_images/docker_images/etcd-amd64_v3.1.10.tar
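The eleven loads above can also be done with one loop (a sketch that assumes the archive layout shown, with the dashboard tarball sitting one directory above docker_images):
#cd /home/soft/k8s_images
#for tarball in docker_images/*.tar kubernetes-dashboard_v1.8.1.tar; do docker load < "$tarball"; done
#docker images    # should list all eleven images from the version table above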
6) Install the kubelet, kubeadm, and kubectl packages
#rpm -ivh kubernetes-cni-0.6.0-0.x86_64.rpm --nodeps --force
#yum localinstall -y socat-1.7.3.2-2.el7.x86_64.rpm
#yum localinstall -y kubelet-1.9.0-0.x86_64.rpm
#yum localinstall -y kubectl-1.9.0-0.x86_64.rpm
#yum localinstall -y kubeadm-1.9.0-0.x86_64.rpm
1) Start kubelet
#systemctl start kubelet && systemctl enable kubelet
2) Initialize the master
#kubeadm init --kubernetes-version=v1.9.0 --pod-network-cidr=10.224.0.0/16 --token-ttl=0
Note: Kubernetes supports several network plugins, such as flannel, weave, and calico. Because flannel is used here, the --pod-network-cidr flag must be set.
--pod-network-cidr: must match the Network value inside kube-flannel.yml. The upstream file defaults to 10.244.0.0/16; this walkthrough passes 10.224.0.0/16 to kubeadm init, so kube-flannel.yml has to be edited to the same 10.224.0.0/16 network (see section 6 below). Whichever network you pick, use it in both places.
--kubernetes-version: always pin the version. Otherwise kubeadm fetches https://storage.googleapis.com/kubernetes-release/release/stable-1.9.txt, which times out and fails unless you can get around the GFW.
--token-ttl: the bootstrap token is valid for 24 hours by default; 0 means it never expires.
3) If kubelet fails to start, check /var/log/messages; the error looks like:
kubelet: error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"
Cause: kubelet's default cgroup driver does not match Docker's. Docker defaults to cgroupfs while this kubelet package defaults to systemd; check Docker's current driver with docker info | grep cgroup.
Fix: edit /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Change Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd" to Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
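Alternatively, leave kubelet alone and move Docker onto the systemd driver instead (a sketch; it writes /etc/docker/daemon.json, so merge these keys with the registry-mirrors entry added earlier rather than overwriting the file):
#echo '{ "exec-opts": ["native.cgroupdriver=systemd"] }' > /etc/docker/daemon.json
#systemctl restart docker
#docker info | grep -i cgroup    # should now report: Cgroup Driver: systemd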
Reload systemd and restart kubelet:
#systemctl daemon-reload && systemctl restart kubelet
Check the status:
#systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Wed 2018-04-11 15:11:22 CST; 22s ago
     Docs: http://kubernetes.io/docs/
 Main PID: 15942 (kubelet)
   Memory: 40.3M
   CGroup: /system.slice/kubelet.service
           └─15942 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kub...
Apr 11 15:11:32 master kubelet[15942]: E0411 15:11:32.415152 15942 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubel...refused
Apr 11 15:11:32 master kubelet[15942]: E0411 15:11:32.416006 15942 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubel...refused
Apr 11 15:11:32 master kubelet[15942]: E0411 15:11:32.426454 15942 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/confi...refused
Apr 11 15:11:34 master kubelet[15942]: E0411 15:11:34.653755 15942 eviction_manager.go:238] eviction manager: unexpected...t found
Apr 11 15:11:34 master kubelet[15942]: W0411 15:11:34.657127 15942 cni.go:171] Unable to update cni config: No networks ...i/net.d
Apr 11 15:11:34 master kubelet[15942]: E0411 15:11:34.657315 15942 kubelet.go:2105] Container runtime network not ready:...ialized
Apr 11 15:11:35 master kubelet[15942]: I0411 15:11:35.238311 15942 kubelet_node_status.go:273] Setting node annotation t.../detach
Apr 11 15:11:35 master kubelet[15942]: I0411 15:11:35.240636 15942 kubelet_node_status.go:82] Attempting to register node master
Apr 11 15:11:39 master kubelet[15942]: W0411 15:11:39.658588 15942 cni.go:171] Unable to update cni config: No networks ...i/net.d
Apr 11 15:11:39 master kubelet[15942]: E0411 15:11:39.658802 15942 kubelet.go:2105] Container runtime network not ready:...ialized
Hint: Some lines were ellipsized, use -l to show in full.
At this point, reset the environment and run the init again:
#kubeadm reset
#kubeadm init --kubernetes-version=v1.9.0 --pod-network-cidr=10.224.0.0/16 --token-ttl=0
4) A successful initialization looks like this:
[root@master k8s_images]# kubeadm init --kubernetes-version=v1.9.0 --pod-network-cidr=10.224.0.0/16 --token-ttl=0
[init] Using Kubernetes version: v1.9.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.2.40]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 29.003450 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node master as master by adding a label and a taint
[markmaster] Master master tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: d0c1ec.7d7a61a4e9ba83f8
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join --token d0c1ec.7d7a61a4e9ba83f8 192.168.2.40:6443 --discovery-token-ca-cert-hash sha256:7b38dad17cd1378446121952632d78d041dfcddc27b4663d011113a3b6326a65
Save the kubeadm join ... command printed above; the nodes will need it shortly. If you lose it, recover it on the master with kubeadm token list, or generate a new one.
The token generated in this run:
kubeadm join --token d0c1ec.7d7a61a4e9ba83f8 192.168.2.40:6443 --discovery-token-ca-cert-hash sha256:7b38dad17cd1378446121952632d78d041dfcddc27b4663d011113a3b6326a65
Note: by default, the token in the join command printed by kubeadm init is only valid for 24 hours (here --token-ttl=0 makes it permanent). If a token has expired, regenerate the join command with:
# kubeadm token create --print-join-command
5) As the output above says, you cannot control the cluster with kubectl yet; set up the environment first.
For a non-root user:
#mkdir -p $HOME/.kube
#cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
#chown $(id -u):$(id -g) $HOME/.kube/config
For root:
#export KUBECONFIG=/etc/kubernetes/admin.conf
You can also persist it in ~/.bash_profile:
#echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
Then source it:
source ~/.bash_profile
6) Test with kubectl version
[root@master k8s_images]# kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T21:07:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T20:55:30Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
6. Install the pod network. flannel, calico, weave, or macvlan would all work; we use flannel here.
1) Download the manifest
#wget https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
or simply use the copy shipped in the offline bundle.
2) To use a different pod network, edit the Network field in kube-flannel.yml; it must stay in sync with the --pod-network-cidr value passed to kubeadm init (10.224.0.0/16 in this walkthrough). A sed sketch follows the snippet below.
net-conf.json: |
{
"Network": "10.244.0.0/16",
"Backend": {
"Type": "vxlan"
}
}
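Rather than editing by hand, a one-liner keeps the two values in sync (a sketch that assumes the bundled kube-flannel.yml still carries the upstream default 10.244.0.0/16):
#sed -i 's#10.244.0.0/16#10.224.0.0/16#' /home/soft/k8s_images/kube-flannel.yml
#grep -n '"Network"' /home/soft/k8s_images/kube-flannel.yml    # confirm the new value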
3) Apply the network manifest
#kubectl create -f /home/soft/k8s_images/kube-flannel.yml
7. Deploy kubernetes-dashboard. The dashboard is optional (frankly, it is clumsy and its feature set is weak), but if you want it, deploy it while the master is still the only node: once worker nodes have joined, kube-scheduler may place the dashboard on a worker, and its communication with kube-apiserver then needs extra configuration.
Download the kubernetes-dashboard manifest, or use the kubernetes-dashboard.yaml shipped in the offline bundle.
1) Create kubernetes-dashboard
#kubectl create -f /home/soft/k8s_images/kubernetes-dashboard.yaml
2) To change the port or make it reachable from outside, edit the Service section:
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 32666
  selector:
    k8s-app: kubernetes-dashboard
Note: 32666 is the exposed NodePort, much like a docker run port mapping; once mapped, browse to https://master_ip:32666.
If a pod fails and needs to be removed, delete it with:
kubectl delete po -n kube-system <pod-name>
To see why a pod failed to start:
# kubectl describe pod <pod-name> --namespace=kube-system
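Container logs are often more telling than the describe output; standard kubectl usage (not in the original steps):
#kubectl logs <pod-name> -n kube-system               # logs of the current container
#kubectl logs <pod-name> -n kube-system --previous    # logs of the last crashed container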
3) The dashboard authenticates with kubeconfig or token by default; here we use basic auth against the apiserver instead.
Create /etc/kubernetes/pki/basic_auth_file to hold the credentials. Each line has the form password,user,uid:
[root@master pki]# echo 'admin,admin,2' > /etc/kubernetes/pki/basic_auth_file
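The file stores the password in plaintext, so it is worth locking it down to root (an extra hardening step, not in the original):
#chmod 600 /etc/kubernetes/pki/basic_auth_file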
4) Enable basic auth on kube-apiserver
[root@master pki]# grep 'auth' /etc/kubernetes/manifests/kube-apiserver.yaml
    - --enable-bootstrap-token-auth=true
    - --authorization-mode=Node,RBAC
Add:
- --basic-auth-file=/etc/kubernetes/pki/basic_auth_file
Caution: if you run kubectl apply -f on the updated kube-apiserver.yaml right away, you will hit:
The connection to the server 192.168.2.40:6443 was refused - did you specify the right host or port?
Fix:
kube-apiserver.yaml is a static pod manifest, so kubelet restarts the apiserver when it changes and the API is briefly unreachable. Before running kubectl apply -f /etc/kubernetes/manifests/kube-apiserver.yaml, run systemctl daemon-reload and then systemctl restart kubelet, and confirm the restart succeeded:
# kubectl get node
# kubectl get pod --all-namespaces
5) Apply the updated /etc/kubernetes/manifests/kube-apiserver.yaml
[root@master manifests]# kubectl apply -f /etc/kubernetes/manifests/kube-apiserver.yaml
pod "kube-apiserver" created
6) Since v1.6, Kubernetes uses the RBAC authorization model. The cluster-admin ClusterRole carries full permissions by default, so bind the admin user to cluster-admin with a clusterrolebinding; admin then inherits cluster-admin's permissions.
[root@master ~]# kubectl create clusterrolebinding login-on-dashboard-with-cluster-admin --clusterrole=cluster-admin --user=admin
clusterrolebinding "login-on-dashboard-with-cluster-admin" created
Check that the binding was created:
# kubectl get clusterrolebinding/login-on-dashboard-with-cluster-admin -o yaml
7) Check the pods; everything should be Running:
[root@master k8s_images]# kubectl get pod --all-namespaces
NAMESPACE     NAME                                   READY     STATUS    RESTARTS   AGE
kube-system   etcd-master                            1/1       Running   0          9m
kube-system   kube-apiserver-master                  1/1       Running   0          9m
kube-system   kube-controller-manager-master         1/1       Running   0          9m
kube-system   kube-dns-6f4fd4bdf-qj7s5               3/3       Running   0          37m
kube-system   kube-flannel-ds-4mvmz                  1/1       Running   0          9m
kube-system   kube-proxy-67jq2                       1/1       Running   0          37m
kube-system   kube-scheduler-master                  1/1       Running   0          9m
kube-system   kubernetes-dashboard-58f5cb49c-xsqf5   1/1       Running   0          32s
8) Test the connection
[root@master ~]# curl --insecure https://master:6443 --basic -u admin:admin
{
  "paths": [
    "/api",
    "/api/v1",
    "/apis",
    "/apis/",
    "/apis/admissionregistration.k8s.io",
    "/apis/admissionregistration.k8s.io/v1beta1",
    "/apis/apiextensions.k8s.io",
    "/apis/apiextensions.k8s.io/v1beta1",
    "/apis/apiregistration.k8s.io",
    "/apis/apiregistration.k8s.io/v1beta1",
    "/apis/apps",
    "/apis/apps/v1",
    "/apis/apps/v1beta1",
    "/apis/apps/v1beta2",
    "/apis/authentication.k8s.io",
    "/apis/authentication.k8s.io/v1",
    "/apis/authentication.k8s.io/v1beta1",
    "/apis/authorization.k8s.io",
    "/apis/authorization.k8s.io/v1",
    "/apis/authorization.k8s.io/v1beta1",
    "/apis/autoscaling",
    "/apis/autoscaling/v1",
    "/apis/autoscaling/v2beta1",
    "/apis/batch",
    "/apis/batch/v1",
    "/apis/batch/v1beta1",
    "/apis/certificates.k8s.io",
    "/apis/certificates.k8s.io/v1beta1",
    "/apis/events.k8s.io",
    "/apis/events.k8s.io/v1beta1",
    "/apis/extensions",
    "/apis/extensions/v1beta1",
    "/apis/networking.k8s.io",
    "/apis/networking.k8s.io/v1",
    "/apis/policy",
    "/apis/policy/v1beta1",
    "/apis/rbac.authorization.k8s.io",
    "/apis/rbac.authorization.k8s.io/v1",
    "/apis/rbac.authorization.k8s.io/v1beta1",
    "/apis/storage.k8s.io",
    "/apis/storage.k8s.io/v1",
    "/apis/storage.k8s.io/v1beta1",
    "/healthz",
    "/healthz/autoregister-completion",
    "/healthz/etcd",
    "/healthz/ping",
    "/healthz/poststarthook/apiservice-openapi-controller",
    "/healthz/poststarthook/apiservice-registration-controller",
    "/healthz/poststarthook/apiservice-status-available-controller",
    "/healthz/poststarthook/bootstrap-controller",
    "/healthz/poststarthook/ca-registration",
    "/healthz/poststarthook/generic-apiserver-start-informers",
    "/healthz/poststarthook/kube-apiserver-autoregistration",
    "/healthz/poststarthook/rbac/bootstrap-roles",
    "/healthz/poststarthook/start-apiextensions-controllers",
    "/healthz/poststarthook/start-apiextensions-informers",
    "/healthz/poststarthook/start-kube-aggregator-informers",
    "/healthz/poststarthook/start-kube-apiserver-informers",
    "/logs",
    "/metrics",
    "/swagger-2.0.0.json",
    "/swagger-2.0.0.pb-v1",
    "/swagger-2.0.0.pb-v1.gz",
    "/swagger.json",
    "/swaggerapi",
    "/ui",
    "/ui/",
    "/version"
  ]
}
9) Test in a browser with Firefox (Chrome is not recommended). The certificate is self-signed, so the browser will warn that it is untrusted.
Note: the 1.8 dashboard has a built-in exec feature (equivalent to running kubectl exec -it etcd-vm1 -n kube-system /bin/sh), which is quite handy.
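If you prefer token login over basic auth, the dashboard's service-account token can be read out like this (a sketch; the secret's suffix is generated at creation time, so look it up first, and note this default service account has limited permissions):
#kubectl -n kube-system get secret | grep kubernetes-dashboard-token
#kubectl -n kube-system describe secret kubernetes-dashboard-token-<suffix>    # paste the token: field into the login page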
8. Node operations (run on both node servers)
1) On node-1 and node-2, change kubelet's cgroup driver from systemd to cgroupfs, as on the master
#vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
#systemctl daemon-reload
#systemctl enable kubelet && systemctl restart kubelet
2) Join node-1 and node-2 to the cluster with the kubeadm join command that kubeadm init printed on the master
#kubeadm join --token d0c1ec.7d7a61a4e9ba83f8 192.168.2.40:6443 --discovery-token-ca-cert-hash sha256:7b38dad17cd1378446121952632d78d041dfcddc27b4663d011113a3b6326a65
3) Check from the master
[root@master k8s_images]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
master    Ready     master    1h        v1.9.0
node-1    Ready     <none>    1m        v1.9.0
node-2    Ready     <none>    58s       v1.9.0
4) Test the cluster
From the master, create an application named httpd-app from the httpd image, with two replica pods:
[root@master k8s_images]# kubectl run httpd-app --image=httpd --replicas=2
deployment "httpd-app" created
[root@master k8s_images]# kubectl get deployment
NAME        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
httpd-app   2         2         2            0           1m
[root@master k8s_images]# kubectl get pods -o wide
NAME                         READY     STATUS    RESTARTS   AGE       IP           NODE
httpd-app-5fbccd7c6c-5j5zb   1/1       Running   0          3m        10.224.2.2   node-2
httpd-app-5fbccd7c6c-rnkcm   1/1       Running   0          3m        10.224.1.2   node-1
Because the resource we created is not a Service, kube-proxy is not involved; curl the pod IPs directly (to exercise kube-proxy too, see the Service sketch after these commands):
#curl http://10.224.2.2
#curl http://10.224.1.2
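To exercise kube-proxy as well, the deployment can be exposed as a NodePort Service (a sketch; the nodePort is auto-assigned from the 30000-32767 range, so read it from kubectl get svc, and remember to delete the Service along with the deployment afterwards):
#kubectl expose deployment httpd-app --port=80 --type=NodePort
#kubectl get svc httpd-app            # note the 80:3xxxx/TCP mapping
#curl http://192.168.2.40:<nodePort>
#kubectl delete svc httpd-app         # cleanup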
Delete the httpd-app application:
[root@master ~]# kubectl delete deployment httpd-app
[root@master ~]# kubectl get pods
At this point, the basic Kubernetes cluster installation is complete.
Common issues:
1. If the master is re-initialized after nodes had already joined, rerunning kubeadm join --token xxxx on an old node fails with:
[root@node-1 ~]# kubeadm join --token 6540e9.c83615e67d622766 192.168.2.40:6443 --discovery-token-ca-cert-hash sha256:34dd77dc3b800a93ffb5fc27b9d7d1e28118f7bb51b0b630afe1153ebcd4f4b8
[preflight] Running pre-flight checks.
[WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Some fatal errors occurred:
        [ERROR Port-10250]: Port 10250 is in use
        [ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
        [ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
Fix: when the cluster is re-initialized, each previously joined node must also be reset before it can rejoin:
[root@node-1 ~]# kubeadm reset
[preflight] Running pre-flight checks.
[reset] Stopping the kubelet service.
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Removing kubernetes-managed containers.
[reset] No etcd manifest found in "/etc/kubernetes/manifests/etcd.yaml". Assuming external etcd.
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
Rejoining then succeeds:
[root@node-1 ~]# kubeadm join --token 6540e9.c83615e67d622766 192.168.2.40:6443 --discovery-token-ca-cert-hash sha256:34dd77dc3b800a93ffb5fc27b9d7d1e28118f7bb51b0b630afe1153ebcd4f4b8
[preflight] Running pre-flight checks.
[WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "192.168.2.40:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.2.40:6443"
[discovery] Requesting info from "https://192.168.2.40:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.2.40:6443"
[discovery] Successfully established connection with API Server "192.168.2.40:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.