K8s Pod Resource Management and Building a Harbor Private Image Registry (with image pull operations and troubleshooting along the way)


A Pod is the smallest unit that Kubernetes manages.

A Pod can hold multiple containers, although in real production environments it usually holds just one.


Characteristics:

1. The smallest deployment unit
2. A collection of one or more containers
3. Containers in a Pod share the network namespace
4. Pods are ephemeral
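
To make point 3 concrete, here is a minimal sketch of a two-container Pod (the Pod name and images are only illustrative, not taken from this cluster); the busybox container can reach nginx over 127.0.0.1 precisely because the two containers share the Pod's network namespace:

apiVersion: v1
kind: Pod
metadata:
  name: shared-netns-demo
spec:
  containers:
  - name: web
    image: nginx:1.14
    ports:
    - containerPort: 80
  - name: probe
    image: busybox:1.28
    # wget against localhost reaches the nginx container,
    # since both containers sit in one network namespace
    command: ['sh', '-c', 'while true; do wget -qO- http://127.0.0.1; sleep 10; done']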


Pod container categories:

1: infrastructure container (the pause container; a transparent process the user never sees)

Maintains the network namespace for the entire Pod

Operations on the node
`Check the container's network settings`
[root@node1 ~]# cat /opt/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true --v=4 --hostname-override=192.168.18.148 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet.config --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"    # note: the network (pause) image is downloaded from Aliyun

`Created every time a Pod is created; it is paired with the Pod and transparent to the user`
[root@node1 ~]# docker ps
CONTAINER ID        IMAGE                                                                 COMMAND                  CREATED             STATUS              PORTS               NAMES
......(multiple lines omitted here)
54d9e6ec3c02        registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0   "/pause"
# the network component is automatically loaded and provided to the Pod
`Conclusion: when a basic container is created, a network (pause) container is always created along with it`

2: initContainers (init containers)

When a Pod is created, its initContainers always run. In older versions their start order was not strictly distinguished (during system loading, the smaller the PID, the higher the priority and the earlier the start). As the platform matured, startup became strictly sequential: each init container must run to completion before the next one is loaded, and only after all init containers have finished can the business containers behind them start running normally.


3: container, the business containers (started in parallel)

Official documentation: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/

Example:

Init containers in use

This example defines a simple Pod that has two init containers. The first waits for myservice, and the second waits for mydb. Once both init containers complete, the Pod runs the app container from its spec section.

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox:1.28
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', 'until nslookup myservice; do echo waiting for myservice; sleep 2; done;']
  - name: init-mydb
    image: busybox:1.28
    command: ['sh', '-c', 'until nslookup mydb; do echo waiting for mydb; sleep 2; done;']
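
For the two init containers above ever to finish, Services named myservice and mydb must exist; the same official page defines them along these lines (the ports come from that example):

---
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
---
apiVersion: v1
kind: Service
metadata:
  name: mydb
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9377
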
Image pull policy (imagePullPolicy)

IfNotPresent: the default; the image is pulled only when it is not already present on the host

Always: the image is re-pulled every time a Pod is created

Never: the Pod never pulls the image on its own

(Note that when a container uses the :latest tag and no policy is set, the effective default becomes Always.)

Official documentation: https://kubernetes.io/docs/concepts/containers/images

Example:

Verify by creating a pod that uses a private image, e.g.:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: private-image-test-1
spec:
  containers:
    - name: uses-private-image
      image: $PRIVATE_IMAGE_NAME
      imagePullPolicy: Always
      command: [ "echo", "SUCCESS" ]
EOF
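
Since the container only runs echo, a successful pull can be confirmed from the Pod's log, which should print the echoed text (a sketch):

kubectl logs private-image-test-1
SUCCESS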
Operate on master1
[root@master1 ~]# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
my-nginx-d55b94fd-kc2gl             1/1     Running   0          40h
my-nginx-d55b94fd-tkr42             1/1     Running   0          40h
nginx-6c94d899fd-8pf48              1/1     Running   0          2d15h
nginx-deployment-5477945587-f5dsm   1/1     Running   0          2d14h
nginx-deployment-5477945587-hmgd2   1/1     Running   0          2d14h
nginx-deployment-5477945587-pl2hn   1/1     Running   0          2d14h

[root@master1 ~]# kubectl edit deployment/my-nginx
......(multiple lines omitted here)
    spec:
      containers:
      - image: nginx:1.15.4
        imagePullPolicy: IfNotPresent
        name: nginx
        ports:
        - containerPort: 80
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30

[root@master1 ~]# cd demo/
[root@master1 demo]# vim pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: nginx
      image: nginx
      imagePullPolicy: Always
      command: [ "echo", "SUCCESS" ]
[root@master1 demo]# kubectl create -f pod1.yaml    # create the Pod
pod/mypod created
The Pod now reports a CrashLoopBackOff status: it gets created and then shuts down again right away.
`The failed state is caused by the startup command conflicting with the image: "echo" prints its text and exits immediately, so kubelet keeps restarting the container in a loop.`
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: nginx
      image: nginx:1.14     # also change the version to nginx:1.14
      imagePullPolicy: Always
# delete the final line: command: [ "echo", "SUCCESS" ]
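
If you do want to keep a startup command, make it long-running so the container does not exit right away; a sketch modeled on the official busybox example earlier:

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: nginx
      image: nginx:1.14
      imagePullPolicy: Always
      # echo once, then keep the process alive so the Pod stays Running
      command: ['sh', '-c', 'echo SUCCESS && sleep 3600']

Note that this overrides the image's own entrypoint, so nginx itself would not serve traffic here.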

`Delete the existing resource`
[root@master1 demo]# kubectl delete -f pod1.yaml
pod "mypod" deleted

`Recreate the resource`
[root@master1 demo]# kubectl apply -f pod1.yaml
pod/mypod created
[root@master1 demo]# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
mypod                               1/1     Running   0          3m26s

`Check which node the Pod was assigned to`
[root@master1 demo]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE     IP            NODE           NOMINATED NODE
mypod         1/1     Running   0          4m45s   172.17.40.5   192.168.18.145   <none>
# the 172.17.40.x segment here maps to node2's address, 192.168.18.145

`On node2, check that the application was deployed to the expected node`
[root@node2 ~]# curl -I 172.17.40.5
HTTP/1.1 200 OK
Server: nginx/1.14.2
Date: Sat, 15 Feb 2020 04:11:53 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 04 Dec 2018 14:44:49 GMT
Connection: keep-alive
ETag: "5c0692e1-264"
Accept-Ranges: bytes

Building a Harbor private registry

Now bring up one more VM: CentOS 7-2, 192.168.18.134 (the NIC can be given a static IP)

`Deploy the Docker engine`
[root@harbor ~]# yum install yum-utils device-mapper-persistent-data lvm2 -y
[root@harbor ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@harbor ~]# yum install -y docker-ce
[root@harbor ~]# systemctl stop firewalld.service
[root@harbor ~]# setenforce 0
[root@harbor ~]# systemctl start docker.service
[root@harbor ~]# systemctl enable docker.service

`Check that the relevant processes are up`
[root@harbor ~]# ps aux | grep docker
root       4913  0.8  3.6 565612 68884 ?        Ssl  12:23   0:00 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
root       5095  0.0  0.0 112676   984 pts/1    R+   12:23   0:00 grep --color=auto docker

`Configure a registry mirror (image acceleration)`
[root@harbor ~]# tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://w1ogxqvl.mirror.aliyuncs.com"]
}
EOF
[root@harbor ~]# systemctl daemon-reload
[root@harbor ~]# systemctl restart docker

`Network tuning`
[root@harbor ~]# echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
[root@harbor ~]# service network restart
Restarting network (via systemctl):                        [  OK  ]
[root@harbor ~]# systemctl restart docker
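
`Confirm the kernel parameter took effect (it should report 1)`
[root@harbor ~]# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1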
----------

[root@harbor ~]# mkdir /aaa
[root@harbor ~]# mount.cifs //192.168.0.105/rpm /aaa
Password for root@//192.168.0.105/rpm:
[root@harbor ~]# cd /aaa/docker/
[root@harbor docker]# cp docker-compose /usr/local/bin/
[root@harbor docker]# cd /usr/local/bin/
[root@harbor bin]# ls
docker-compose
[root@harbor bin]# docker-compose -v
docker-compose version 1.21.1, build 5a3f1a3
[root@harbor bin]# cd /aaa/docker/
[root@harbor docker]# tar zxvf harbor-offline-installer-v1.2.2.tgz -C /usr/local/
[root@harbor docker]# cd /usr/local/harbor/
[root@harbor harbor]# ls
common                     docker-compose.yml     harbor.v1.2.2.tar.gz  NOTICE
docker-compose.clair.yml   harbor_1_1_0_template  install.sh            prepare
docker-compose.notary.yml  harbor.cfg             LICENSE               upgrade

`Configure the Harbor parameters file`
[root@harbor harbor]# vim harbor.cfg
5 hostname = 192.168.18.134     # line 5: change to this machine's own IP address
59 harbor_admin_password = Harbor12345      # the default account's password; don't forget it, you need it to log in
# when done editing, press Esc to leave insert mode and type :wq to save and quit
[root@harbor harbor]# ./install.sh
......(multiple lines omitted here)
Creating harbor-log ... done
Creating harbor-adminserver ... done
Creating harbor-db          ... done
Creating registry           ... done
Creating harbor-ui          ... done
Creating nginx              ... done
Creating harbor-jobservice  ... done
✔ ----Harbor has been installed and started successfully.----
Now you should be able to visit the admin portal at http://192.168.18.134.
For more details, please visit https://github.com/vmware/harbor .
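
Harbor 1.x runs its components under docker-compose, so a quick health check is to list them from the install directory; every container should report an Up state (output omitted here):

[root@harbor harbor]# docker-compose ps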

Step 1: Log in to the Harbor private registry

In the host machine's browser, enter 192.168.18.134, type the default account admin and password Harbor12345, and click Log In


Step 2: Create a new project and make it private

On the Projects page, click "+ Project" to add a new project, enter the project name, and click Create; then click the three small dots to the left of the new project and set the project to private



Configure both node machines to connect to the private registry (note the trailing comma that has to be added)

`On node2`
[root@node2 ~]# vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://w1ogxqvl.mirror.aliyuncs.com"],     #末尾要有,
  "insecure-registries":["192.168.18.134"]                          #添加这行
}
[root@node2 ~]# systemctl restart docker

`On node1`
[root@node1 ~]# vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://w1ogxqvl.mirror.aliyuncs.com"],     #末尾要有,
  "insecure-registries":["192.168.18.134"]                          #添加这行
}
[root@node1 ~]# systemctl restart docker
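
`Optionally confirm the daemon picked up the setting on each node (192.168.18.134 should be listed)`
[root@node1 ~]# docker info | grep -A 2 'Insecure Registries'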

Step 3: Log in to the Harbor private registry from a node

`On node2:`
[root@node2 ~]# docker login 192.168.18.134
Username: admin     # enter the account: admin
Password:           # enter the password: Harbor12345
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded     # the login succeeded

`Pull the tomcat image, tag it, and push it:`
[root@node2 ~]# docker pull tomcat
......(multiple lines omitted here)
Status: Downloaded newer image for tomcat:latest
docker.io/library/tomcat:latest
[root@node2 ~]# docker images
REPOSITORY                                                        TAG                 IMAGE ID            CREATED             SIZE
tomcat                                                            latest              aeea3708743f        3 days ago          529MB
[root@node2 ~]# docker tag tomcat 192.168.18.134/project/tomcat     # tag the image for the private registry
[root@node2 ~]# docker push 192.168.18.134/project/tomcat           # push the image
The pushed tomcat image now shows up in the Harbor web UI.



Problem: if we try to use the other node, node1, to pull the tomcat image from the private registry, we get an error saying access is denied (in other words, a login is required):

[root@node1 ~]# docker pull 192.168.18.134/project/tomcat
Using default tag: latest
Error response from daemon: pull access denied for 192.168.18.134/project/tomcat, repository does not exist or may require 'docker login': denied: requested access to the resource is denied       # error: missing credentials for the registry

`Pull a tomcat image on node1 (straight from Docker Hub)`
[root@node1 ~]# docker pull tomcat:8.0.52
[root@node1 ~]# docker images
REPOSITORY                                                        TAG                 IMAGE ID            CREATED             SIZE
tomcat                                                            8.0.52              b4b762737ed4        19 months ago       356MB

Step 4: Operate on master1

[root@master1 demo]# vim tomcat01.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-tomcat
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: my-tomcat
    spec:
      containers:
      - name: my-tomcat
        image: docker.io/tomcat:8.0.52
        ports:
        - containerPort: 8080    # tomcat listens on 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-tomcat
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: my-tomcat

`Create`
[root@master1 demo]# kubectl create -f tomcat01.yaml
deployment.extensions/my-tomcat created
service/my-tomcat created
`Inspect the resources`
[root@master1 demo]# kubectl get pods,deploy,svc
NAME                                    READY   STATUS    RESTARTS   AGE
pod/my-nginx-d55b94fd-kc2gl             1/1     Running   1          2d
pod/my-nginx-d55b94fd-tkr42             1/1     Running   1          2d
`pod/my-tomcat-57667b9d9-8bkns`         1/1     Running   0          84s
`pod/my-tomcat-57667b9d9-kcddv`         1/1     Running   0          84s
pod/mypod                               1/1     Running   1          8h
pod/nginx-6c94d899fd-8pf48              1/1     Running   1          3d
pod/nginx-deployment-5477945587-f5dsm   1/1     Running   1          2d23h
pod/nginx-deployment-5477945587-hmgd2   1/1     Running   1          2d23h
pod/nginx-deployment-5477945587-pl2hn   1/1     Running   1          2d23h

NAME                                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/my-nginx           2         2         2            2           2d
`deployment.extensions/my-tomcat`        2         2         2            2           84s
deployment.extensions/nginx              1         1         1            1           8d
deployment.extensions/nginx-deployment   3         3         3            3           2d23h

NAME                       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
service/kubernetes         ClusterIP   10.0.0.1     <none>        443/TCP          10d
service/my-nginx-service   NodePort    10.0.0.210   <none>        80:40377/TCP     2d
`service/my-tomcat          NodePort    10.0.0.86    <none>        8080:41860/TCP   84s`
service/nginx-service      NodePort    10.0.0.242   <none>        80:40422/TCP     3d10h
# internal port 8080, external port 41860

[root@master1 demo]# kubectl get ep
NAME               ENDPOINTS                                 AGE
kubernetes         192.168.18.128:6443,192.168.18.132:6443   10d
my-nginx-service   172.17.32.4:80,172.17.40.3:80             2d
`my-tomcat          172.17.32.6:8080,172.17.40.6:8080         5m29s`
nginx-service      172.17.40.5:80                            3d10h
# my-tomcat has been scheduled onto the two nodes
Verification: in the host machine's browser, open the two node addresses plus the exposed NodePort, 192.168.18.148:41860 and 192.168.18.145:41860, and check that both serve the Tomcat homepage


`After verifying access succeeds, delete these resources first; later we will recreate them from the image in the private registry`
[root@master1 demo]# kubectl delete -f tomcat01.yaml
deployment.extensions "my-tomcat" deleted
service "my-tomcat" deleted

Troubleshooting:

`If a resource gets stuck in the Terminating state and cannot be deleted`
[root@localhost demo]# kubectl get pods
NAME                              READY   STATUS        RESTARTS   AGE
my-tomcat-57667b9d9-8bkns         1/1     `Terminating`   0          84s
my-tomcat-57667b9d9-kcddv         1/1     `Terminating`   0          84s

# in this situation you can force-delete it
`Syntax: kubectl delete pod [pod name] --force --grace-period=0 -n [namespace]`

[root@localhost demo]# kubectl delete pod my-tomcat-57667b9d9-8bkns --force --grace-period=0 -n default
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "my-tomcat-57667b9d9-8bkns" force deleted

[root@localhost demo]# kubectl delete pod my-tomcat-57667b9d9-kcddv --force --grace-period=0 -n default
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "my-tomcat-57667b9d9-kcddv" force deleted

[root@localhost demo]# kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
mypod                               1/1     Running   1          8h
nginx-6c94d899fd-8pf48              1/1     Running   1          3d
nginx-deployment-5477945587-f5dsm   1/1     Running   1          2d23h
nginx-deployment-5477945587-hmgd2   1/1     Running   1          2d23h
nginx-deployment-5477945587-pl2hn   1/1     Running   1          2d23h

Step 5: On node2 (the node that previously logged in to the Harbor registry)

First we need to delete the project/tomcat image we pushed to the private registry earlier (in the Harbor web UI).


The image tagged earlier on node2 also needs to be deleted:
[root@node2 ~]# docker images
REPOSITORY                                                        TAG                 IMAGE ID            CREATED             SIZE
192.168.18.134/project/tomcat                                     latest              aeea3708743f        3 days ago          529MB

[root@node2 ~]# docker rmi 192.168.18.134/project/tomcat
Untagged: 192.168.18.134/project/tomcat:latest
Untagged: 192.168.18.134/project/tomcat@sha256:8ffa1b72bf611ac305523ed5bd6329afd051c7211fbe5f0b5c46ea5fb1adba46

`Tag the image`
[root@node2 ~]# docker tag tomcat:8.0.52 192.168.18.134/project/tomcat
`Push the image to Harbor`
[root@node2 ~]# docker push 192.168.18.134/project/tomcat
# the newly pushed image can now be seen in the private registry

`Inspect the login credentials`
[root@node2 ~]# cat .docker/config.json
{
        "auths": {
                "192.168.18.134": {     #访问的IP地址
                        "auth": "YWRtaW46SGFyYm9yMTIzNDU="      #验证
                }
        },
        "HttpHeaders": {                #头部信息
                "User-Agent": "Docker-Client/19.03.5 (linux)"
        }
}
`Generate the credentials as one unwrapped base64 line`
[root@node2 ~]# cat .docker/config.json | base64 -w 0
ewoJImF1dGhzIjogewoJCSIxOTIuMTY4LjE4LjEzNCI6IHsKCQkJImF1dGgiOiAiWVdSdGFXNDZTR0Z5WW05eU1USXpORFU9IgoJCX0KCX0sCgkiSHR0cEhlYWRlcnMiOiB7CgkJIlVzZXItQWdlbnQiOiAiRG9ja2VyLUNsaWVudC8xOS4wMy41IChsaW51eCkiCgl9Cn0=   
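
The auth value above is nothing more than base64 of username:password, which you can confirm by decoding it:

[root@node2 ~]# echo YWRtaW46SGFyYm9yMTIzNDU= | base64 -d
admin:Harbor12345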

Special note: the image's pull count is 0 at this point. In a moment we will create resources from the image in the private registry; the pull will have to download the image, so the count should change.


Step 6: Create the secret component's YAML on master1

[root@master1 demo]# vim registry-pull-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: registry-pull-secret
data:
  .dockerconfigjson: ewoJImF1dGhzIjogewoJCSIxOTIuMTY4LjE4LjEzNCI6IHsKCQkJImF1dGgiOiAiWVdSdGFXNDZTR0Z5WW05eU1USXpORFU9IgoJCX0KCX0sCgkiSHR0cEhlYWRlcnMiOiB7CgkJIlVzZXItQWdlbnQiOiAiRG9ja2VyLUNsaWVudC8xOS4wMy41IChsaW51eCkiCgl9Cn0=
type: kubernetes.io/dockerconfigjson
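
As an aside, the same Secret can be generated without hand-pasting base64 by letting kubectl build it from the credentials directly (a sketch using the account from this article):

[root@master1 demo]# kubectl create secret docker-registry registry-pull-secret \
    --docker-server=192.168.18.134 --docker-username=admin --docker-password=Harbor12345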

`Create the secret resource`
[root@master1 demo]# kubectl create -f registry-pull-secret.yaml
secret/registry-pull-secret created
`Inspect the secret resource`
[root@master1 demo]# kubectl get secret
NAME                   TYPE                                  DATA   AGE
default-token-pbr9p    kubernetes.io/service-account-token   3      10d
`registry-pull-secret   kubernetes.io/dockerconfigjson        1      25s`

[root@master1 demo]# vim tomcat01.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-tomcat
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: my-tomcat
    spec:
      imagePullSecrets:             # the image pull credentials
      - name: registry-pull-secret  # the secret's name
      containers:
      - name: my-tomcat
        image: 192.168.18.134/project/tomcat    # change the image source to the private registry
        ports:
        - containerPort: 8080    # tomcat listens on 8080
......(remaining lines omitted)
# when done editing, press Esc and type :wq to save and quit
`Create the tomcat01 resources`
[root@master1 demo]# kubectl create -f tomcat01.yaml
deployment.extensions/my-tomcat created
service/my-tomcat created

[root@master1 demo]# kubectl get pods,deploy,svc,ep
NAME                                    READY   STATUS    RESTARTS   AGE
pod/my-nginx-d55b94fd-kc2gl             1/1     Running   1          2d1h
pod/my-nginx-d55b94fd-tkr42             1/1     Running   1          2d1h
`pod/my-tomcat-7c5b6db486-bzjlv`        1/1     Running   0          56s
`pod/my-tomcat-7c5b6db486-kw8m4`        1/1     Running   0          56s
pod/mypod                               1/1     Running   1          9h
pod/nginx-6c94d899fd-8pf48              1/1     Running   1          3d1h
pod/nginx-deployment-5477945587-f5dsm   1/1     Running   1          3d
pod/nginx-deployment-5477945587-hmgd2   1/1     Running   1          3d
pod/nginx-deployment-5477945587-pl2hn   1/1     Running   1          3d

NAME                                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/my-nginx           2         2         2            2          2d1h
`deployment.extensions/my-tomcat`        2         2         2            2           56s
deployment.extensions/nginx              1         1         1            1           8d
deployment.extensions/nginx-deployment   3         3         3            3           3d

NAME                       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
service/kubernetes         ClusterIP   10.0.0.1     <none>        443/TCP          10d
service/my-nginx-service   NodePort    10.0.0.210   <none>        80:40377/TCP     2d1h
`service/my-tomcat`        NodePort    10.0.0.235   <none>        8080:43654/TCP   56s
service/nginx-service      NodePort    10.0.0.242   <none>        80:40422/TCP     3d11h
# the external port is 43654
NAME                         ENDPOINTS                                 AGE
endpoints/kubernetes         192.168.18.128:6443,192.168.18.132:6443   10d
endpoints/my-nginx-service   172.17.32.4:80,172.17.40.3:80             2d1h
`endpoints/my-tomcat`        172.17.32.6:8080,172.17.40.6:8080         56s
endpoints/nginx-service      172.17.40.5:80                            3d11h

Next we verify that, given the resources loaded without any problem, the images really did come from our Harbor private registry.

The thing to watch is the pull count of the image in the private registry.


Result: the pull count has changed from 0 to 2, which shows that the two resource replicas pulled their image from the private registry!

Then verify again in the host machine's browser that the two node addresses plus the exposed port, 192.168.18.148:43654 and 192.168.18.145:43654, still serve the Tomcat homepage.



This experiment built a Harbor private registry and used it to create Pod resources!
