Day 146 Study Check-in (Kubernetes DaemonSet; errors encountered installing k8s cluster components)

Posted by doudoutj


Install Docker

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

yum -y install docker-ce-18.06.1.ce-3.el7   # install Docker

systemctl enable docker && systemctl start docker   # enable on boot and start
docker --version

cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF

systemctl restart docker   # restart Docker to pick up the registry mirror
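A malformed `/etc/docker/daemon.json` will stop `dockerd` from starting after the restart, so it can be worth validating the JSON first. A minimal sketch, demonstrated on a temp copy so it is safe to dry-run; on a real host, point `CONF` at `/etc/docker/daemon.json`:

```shell
# Validate daemon.json before restarting Docker; invalid JSON prevents
# dockerd from starting. CONF here is a temp copy for demonstration.
CONF=$(mktemp)
cat > "$CONF" << 'EOF'
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
if python3 -m json.tool "$CONF" > /dev/null 2>&1; then
  result=valid
else
  result=invalid
fi
echo "daemon.json is $result"
rm -f "$CONF"
```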

Add the Alibaba Cloud YUM repository

 cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install kubeadm, kubelet, and kubectl

yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
systemctl enable kubelet   # enable on boot
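Before running `kubeadm init`, a quick sanity check that all the tools actually landed on `PATH` can save a failed run (hedged with `command -v` so it works even where some tools are missing):

```shell
# Confirm the binaries installed above are on PATH before proceeding.
missing=0
for tool in docker kubeadm kubelet kubectl; do
  if command -v "$tool" > /dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
    missing=$((missing + 1))
  fi
done
echo "$missing tool(s) missing"
```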

Deploy the Kubernetes Master

Run the following on the Master node only.

# --apiserver-advertise-address is this node's IP; the service and pod CIDRs
# have no special requirements beyond not overlapping each other or the node network
kubeadm init \
  --apiserver-advertise-address=47.109.20.97 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.18.0 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16

kubectl delete pods --grace-period=0 --force e60aa31cb0775eaa51e938ac66011481
    
mkdir -p /etc/systemd/system/docker.service.d
echo -e '[Service]\nEnvironment="HTTP_PROXY=http://192.168.1.100:1080"' > /etc/systemd/system/docker.service.d/http-proxy.conf
echo -e '[Service]\nEnvironment="HTTPS_PROXY=http://192.168.1.100:1080"' > /etc/systemd/system/docker.service.d/https-proxy.conf

systemctl daemon-reload && systemctl restart docker
     
docker info | grep -i proxy

 kubeadm init

Error encountered

[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.18.19: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.18.19: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.18.19: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.18.19: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.4.3-0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.6.7: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

# List the required images and their versions
[root@iZ2vccmt2pk4prlsi1i6avZ ~]# kubeadm config images list
I0603 16:53:53.636567   22619 version.go:252] remote version is much newer: v1.21.1; falling back to: stable-1.18
W0603 16:53:54.704686   22619 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
k8s.gcr.io/kube-apiserver:v1.18.0 
k8s.gcr.io/kube-controller-manager:v1.18.0 
k8s.gcr.io/kube-scheduler:v1.18.0 
k8s.gcr.io/kube-proxy:v1.18.0 
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7


Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.18.0
docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.18.0
docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.18.0
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.18.0
docker pull mirrorgooglecontainers/pause:3.2
docker pull mirrorgooglecontainers/etcd-amd64:3.4.3-0
docker pull coredns/coredns:1.6.7



docker tag mirrorgooglecontainers/kube-proxy-amd64:v1.18.0 k8s.gcr.io/kube-proxy:v1.18.0
docker tag mirrorgooglecontainers/kube-apiserver-amd64:v1.18.0 k8s.gcr.io/kube-apiserver:v1.18.0
docker tag mirrorgooglecontainers/kube-controller-manager-amd64:v1.18.0 k8s.gcr.io/kube-controller-manager:v1.18.0
docker tag mirrorgooglecontainers/kube-scheduler-amd64:v1.18.0 k8s.gcr.io/kube-scheduler:v1.18.0
docker tag mirrorgooglecontainers/etcd-amd64:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0
docker tag coredns/coredns:1.6.7 k8s.gcr.io/coredns:1.6.7
docker tag mirrorgooglecontainers/pause:3.2 k8s.gcr.io/pause:3.2
# Write a helper script instead of tagging by hand
vim pull_k8s_images.sh

#!/bin/bash
set -o errexit
set -o nounset
set -o pipefail

KUBE_VERSION=v1.18.19
KUBE_PAUSE_VERSION=3.2
ETCD_VERSION=3.4.3-0
DNS_VERSION=1.6.7

GCR_URL=k8s.gcr.io
# mirrorgooglecontainers/alleyj
DOCKERHUB_URL=mirrorgooglecontainers

images=(
kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${DNS_VERSION}
)

for imageName in "${images[@]}" ; do
  docker pull "$DOCKERHUB_URL/$imageName"
  docker tag "$DOCKERHUB_URL/$imageName" "$GCR_URL/$imageName"
  docker rmi "$DOCKERHUB_URL/$imageName"
done
# Make it executable
chmod +x ./pull_k8s_images.sh
# Run it
./pull_k8s_images.sh

kubeadm init --kubernetes-version=v1.18.19

kubeadm init --image-repository=registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.18.19

Error encountered

error execution phase kubeconfig/admin: a kubeconfig file "/etc/kubernetes/admin.conf" exists already but has got the wrong API Server URL
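This error means a kubeconfig from an earlier `kubeadm init` is still in `/etc/kubernetes/`. One hedged way to clear it is to move the stale `admin.conf` aside before re-running init (`kubeadm reset` also removes it). Sketched against a temp directory so the commands are safe to dry-run; on a real master, `KUBE_DIR` would be `/etc/kubernetes`:

```shell
# Move a stale admin.conf out of the way before re-running kubeadm init.
# KUBE_DIR is a temp dir here to simulate a real /etc/kubernetes.
KUBE_DIR=$(mktemp -d)
touch "$KUBE_DIR/admin.conf"   # simulate the leftover file
if [ -f "$KUBE_DIR/admin.conf" ]; then
  mv "$KUBE_DIR/admin.conf" "$KUBE_DIR/admin.conf.bak"
  status="moved stale admin.conf aside"
else
  status="no stale admin.conf"
fi
echo "$status"
```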

Error encountered

Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message
kubeadm init --image-repository=registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.18.0
kubeadm init --kubernetes-version=v1.18.0 --pod-network-cidr=10.244.0.0/16


mkdir -p /etc/cni/net.d
vi /etc/cni/net.d/10-flannel.conflist

{
  "name": "cbr0",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}


systemctl daemon-reload

systemctl restart kubelet
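kubelet only picks up CNI configs from `/etc/cni/net.d/`, so before restarting it can help to confirm the conflist actually landed there. A small guarded check (`CNI_DIR` is overridable so it can be dry-run against any directory):

```shell
# Confirm a .conflist file exists where kubelet looks for CNI configs.
CNI_DIR=${CNI_DIR:-/etc/cni/net.d}
if ls "$CNI_DIR"/*.conflist > /dev/null 2>&1; then
  found=yes
else
  found=no
fi
echo "conflist present in $CNI_DIR: $found"
```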

Error encountered

	[ERROR Port-10250]: Port 10250 is in use

Solution:

# kubeadm reset
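Before (or after) the reset, it can help to see what is actually holding port 10250; it is usually a kubelet left over from a previous init. A hedged check, falling back to `netstat` since `ss` may be absent on minimal images:

```shell
# Show what is listening on the kubelet port, if anything.
PORT=10250
if command -v ss > /dev/null 2>&1; then
  holder=$(ss -lntp 2>/dev/null | grep ":$PORT " || true)
else
  holder=$(netstat -lntp 2>/dev/null | grep ":$PORT " || true)
fi
if [ -n "$holder" ]; then
  echo "port $PORT in use: $holder"
else
  echo "port $PORT is free"
fi
```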

# Then run init again
kubeadm init --image-repository=registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.21.1

Reference: Error ImagePull failed to pull image k8s.gcr.io - 李帆1998 - 博客园 (cnblogs.com)

Error encountered

[root@k8smaster ~]# ./pull_k8s_images.sh
Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
[root@k8smaster ~]# vim /etc/docker/daemon.json


{
  "registry-mirrors": ["https://pee6w651.mirror.aliyuncs.com"]
}


[root@k8smaster ~]# systemctl daemon-reload 
[root@k8smaster ~]# systemctl restart docker 




Problem encountered:

[root@k8smaster ~]#  kubeadm init --ignore-preflight-errors=all
[init] Using Kubernetes version: v1.21.1
[preflight] Running pre-flight checks
	[WARNING FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
	[WARNING FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
	[WARNING FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
	[WARNING FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
	[WARNING KubeletVersion]: Kubelet version "1.18.0" is lower than kubeadm can support. Please upgrade kubelet
	[WARNING Port-10250]: Port 10250 is in use
[preflight] Pulling images required for setting up a Kubernetes cluster

It still would not come up and kept reporting errors.

Using the kubectl tool:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
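With the kubeconfig copied into place, kubectl should be able to reach the API server. A guarded check (hedged, since kubectl may not be installed, or the cluster may be unreachable, wherever this runs):

```shell
# Verify kubectl can talk to the cluster using the copied kubeconfig.
if command -v kubectl > /dev/null 2>&1; then
  kubectl get nodes && status="cluster reachable" || status="no cluster reachable"
else
  status="kubectl not on PATH"
fi
echo "$status"
```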

Container orchestration

  • With its "image" mechanism, Docker creatively solved the fundamental problem of application packaging, which drove the rapid adoption and real-world uptake of container technology.
  • A container by itself only provides the low-level machinery for hosting and running an application; container orchestration is where the real value is produced.

In short, container orchestration is the automated placement, coordination, and management of containerized applications. It is mainly responsible for:

  • Service discovery
  • Load balancing
  • Automatic scaling
  • Zero-downtime deployment
  • Configuration and storage management
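Since the title covers DaemonSets, a minimal example manifest may be useful: a DaemonSet schedules exactly one Pod onto every node, which is the typical shape for node agents (log collectors, monitoring daemons). The `node-agent` name and `busybox` image below are hypothetical, not from the original notes; the manifest is written to a temp file here and would be applied with `kubectl apply -f` on a working cluster:

```shell
# Write a minimal DaemonSet manifest (one Pod per node) to a temp file.
MANIFEST=$(mktemp)
cat > "$MANIFEST" << 'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
      - name: agent
        image: busybox:1.28
        command: ["sh", "-c", "sleep 3600"]
EOF
echo "manifest written to $MANIFEST"
# kubectl apply -f "$MANIFEST"   # on a cluster with a working network plugin
```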
