# Cloud Native Certified Kubernetes Administrator (CKA)
# Exploring the Kubernetes Cluster via the Command Line
We have been given a Kubernetes cluster to inspect. In order to better
understand the layout and the structure of this cluster, we must run the
appropriate commands.
Log in to the Kube Master server using the credentials on the lab page
(either in your local terminal, using the Instant Terminal feature,
or using the public IP), and work through the objectives listed.
## List all the nodes in the cluster.
Use the following command to list the nodes in your cluster:
```bash
kubectl get nodes
```
We should see three nodes: one master and two workers.
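For reference, the output should resemble the following (node names, ages, and exact versions will differ in your lab):
```bash
kubectl get nodes
# NAME      STATUS   ROLES    AGE   VERSION
# master    Ready    master   10d   v1.13.5
# worker1   Ready    <none>   10d   v1.13.5
# worker2   Ready    <none>   10d   v1.13.5
```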
## List all the pods in all namespaces.
Use the following command to list the pods in all namespaces:
```bash
kubectl get pods --all-namespaces
```
## List all the namespaces in the cluster.
Use the following command to list all the namespaces in the cluster:
```bash
kubectl get namespaces
```
Here, we should see four namespaces: default, kube-public, kube-system, and web.
## Check to see if there are any pods running in the default namespace.
Use the following command to list the pods in the default namespace:
```bash
kubectl get pods
```
We should see that there aren't any pods in the default namespace.
## Find the IP address of the API server running on the master node.
Use the following command to find the IP address of the API server:
```bash
kubectl get pods --all-namespaces -o wide
```
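The wide output adds an IP column. Since the API server pod runs in the kube-system namespace, you can narrow the listing to find it quickly (the pod name is suffixed with the master's hostname, which will differ in your lab):
```bash
kubectl get pods -n kube-system -o wide | grep kube-apiserver
```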
## See if there are any deployments in this cluster.
Use the following command to check for any deployments in the cluster:
```bash
kubectl get deployments
```
We should see there aren't any deployments in the cluster.
## Find the label applied to the etcd pod on the master node.
Use the following command to view the label on the etcd pod:
```bash
kubectl get pods --all-namespaces --show-labels -o wide
```
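To isolate the etcd pod rather than scanning the full listing, you can filter for it; on a kubeadm-built cluster, the labels typically look like component=etcd,tier=control-plane:
```bash
kubectl get pods -n kube-system --show-labels | grep etcd
```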
# Installing and Testing the Components of a Kubernetes Cluster
We have been given three nodes, in which we must install the components
necessary to build a running Kubernetes cluster. Once the cluster has been built
and we have verified all nodes are in the ready status, we need to start testing
deployments, pods, services, and port forwarding, as well as executing commands from a pod.
Log in to all three nodes (the controller/master and workers) using the
credentials on the lab page (either in your local terminal, using the Instant
Terminal feature, or using the public IPs), and work through the objectives
listed.
## Get the Docker gpg key, and add it to your repository.
1. In all three terminals, run the following command to get the Docker gpg key:
```bash
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
```
2. In all three terminals, add it to your repository:
```bash
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
```
## Get the Kubernetes gpg key, and add it to your repository.
1. In all three terminals, run the following command to get the Kubernetes gpg key:
```bash
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
```
2. In all three terminals, add it to your repository:
```bash
cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
```
3. In all three terminals, update the packages:
```bash
sudo apt-get update -y
```
## Install Docker, kubelet, kubeadm, and kubectl.
1. In all three terminals, run the following command to install Docker, kubelet,
kubeadm, and kubectl:
```bash
sudo apt-get install -y docker-ce=18.06.1~ce~3-0~ubuntu kubelet=1.13.5-00 \
  kubeadm=1.13.5-00 kubectl=1.13.5-00
```
2. In the master node terminal only, initialize the Kubernetes cluster:
```bash
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
```
## Set up local kubeconfig.
1. In the master node terminal, run the following commands to set up local kubeconfig:
```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
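At this point, only the master is initialized. Before all three nodes can report Ready, the workers must join the cluster and a pod network must be installed. A minimal sketch of those remaining steps, assuming the flannel manifest location in use around this Kubernetes release (verify the URL before applying); the real join command, with its token and CA hash, is printed at the end of the kubeadm init output:
```bash
# On the master: install the flannel pod network, which matches the
# 10.244.0.0/16 CIDR passed to kubeadm init above.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# On each worker: join the cluster using the command kubeadm init printed.
sudo kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>

# Back on the master: all three nodes should eventually report Ready.
kubectl get nodes
```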
# Upgrading the Kubernetes Cluster Using kubeadm
We have been given a three-node cluster that is in need of an upgrade. In this
hands-on lab, we must perform the upgrade to all of the cluster components,
including kubeadm, kube-controller-manager, kube-scheduler, kubelet, and kubectl.
Log in to all three servers using the credentials on the lab page
(either in your local terminal, using the Instant Terminal feature, or using
the public IPs), and work through the objectives listed.
## Get the latest version of kubeadm.
In the terminal where you're logged in to the Master node, use the following
commands to create a variable and get the latest version of kubeadm:
```bash
export VERSION=$(curl -sSL https://dl.k8s.io/release/stable.txt)
export VERSION=v1.13.5   # pin to the lab's target version (overrides the value above)
export ARCH=amd64
curl -sSL https://dl.k8s.io/release/${VERSION}/bin/linux/${ARCH}/kubeadm > kubeadm
```
## Install kubeadm and verify it has been installed correctly.
Still in the Master node terminal, run the following commands to install
kubeadm and verify the version:
```bash
sudo install -o root -g root -m 0755 ./kubeadm /usr/bin/kubeadm
sudo kubeadm version
```
## Plan the upgrade in order to check for errors.
Still in the Master node terminal, use the following command to plan the upgrade:
```bash
sudo kubeadm upgrade plan
```
## Perform the upgrade of the kube-scheduler and kube-controller-manager.
Still in the Master node terminal, use this command to apply the upgrade
(also in the output of upgrade plan):
```bash
sudo kubeadm upgrade apply v1.13.5
```
## Get the latest version of kubelet.
Now, in each node terminal, use the following commands to get the latest
version of kubelet on each node:
```bash
export VERSION=$(curl -sSL https://dl.k8s.io/release/stable.txt)
export VERSION=v1.13.5   # pin to the lab's target version (overrides the value above)
export ARCH=amd64
curl -sSL https://dl.k8s.io/release/${VERSION}/bin/linux/${ARCH}/kubelet > kubelet
```
## Install kubelet on each node and restart the kubelet service.
In each node terminal, use these commands to install kubelet and restart the
kubelet service:
```bash
sudo install -o root -g root -m 0755 ./kubelet /usr/bin/kubelet
sudo systemctl restart kubelet.service
```
## Verify the kubelet was installed correctly.
Use the following command to verify the kubelet was installed correctly:
```bash
kubectl get nodes
```
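All three nodes should report Ready, with the new version in the VERSION column. You can also confirm the installed binary directly on each node:
```bash
kubelet --version
# Kubernetes v1.13.5
```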
## Get the latest version of kubectl.
In each node terminal, use the following command to get the latest version of
kubectl:
```bash
curl -sSL https://dl.k8s.io/release/${VERSION}/bin/linux/${ARCH}/kubectl > kubectl
```
## Install the latest version of kubectl.
In each node terminal, use the following command to install the latest version
of kubectl:
```bash
sudo install -o root -g root -m 0755 ./kubectl /usr/bin/kubectl
```
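To confirm the client upgrade, compare the client and server versions (the --short flag produces this condensed form on kubectl of this era; it was removed in much later releases):
```bash
kubectl version --short
# Client Version: v1.13.5
# Server Version: v1.13.5
```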
# Creating a Service and Discovering DNS Names in Kubernetes
We have been given a three-node cluster. Within that cluster, we must perform
the following tasks in order to create a service and resolve the DNS names for
that service. We will create the necessary Kubernetes resources in order to
perform this DNS query.
To adequately complete this hands-on lab, we must have a working deployment,
a working service, and be able to record the DNS name of the service within our
Kubernetes cluster.
Log in to the Kube Master server using the credentials on the lab page
(either in your local terminal, using the Instant Terminal feature, or
using the public IP), and work through the objectives listed.
## Create an nginx deployment, and verify it was successful.
1. Use this command to create an nginx deployment:
```bash
kubectl run nginx --image=nginx
```
2. Use this command to verify the deployment was successful:
```bash
kubectl get deployments
```
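Note that on the Kubernetes version this lab targets, kubectl run generates a Deployment. On v1.18 and later, kubectl run creates a bare pod instead, so on a newer cluster you would create the deployment explicitly:
```bash
kubectl create deployment nginx --image=nginx
```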
## Create a service, and verify the service was successful.
1. Use this command to create a service:
```bash
kubectl expose deployment nginx --port 80 --type NodePort
```
2. Use this command to verify the service was created:
```bash
kubectl get services
```
## Create a pod that will allow you to query DNS, and verify it’s been created.
1. Using an editor of your choice (e.g., Vim, via `vim busybox.yaml`),
enter the following YAML to create the busybox pod spec:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - image: busybox:1.28.4
    command:
    - sleep
    - "3600"
    name: busybox
  restartPolicy: Always
```
2. Use the following command to create the busybox pod:
```bash
kubectl create -f busybox.yaml
```
3. Use the following command to verify the pod was created successfully:
```bash
kubectl get pods
```
## Perform a DNS query to the service.
1. Use the following command to query the DNS name of the nginx service:
```bash
kubectl exec busybox -- nslookup nginx
```
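The answer section of the lookup contains the fully qualified name we record in the next step (the server and service IPs will differ in your cluster):
```bash
kubectl exec busybox -- nslookup nginx
# Server:    10.96.0.10
# Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
#
# Name:      nginx
# Address 1: 10.104.30.77 nginx.default.svc.cluster.local
```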
## Record the DNS name.
1. Record the DNS name of the service, which follows this pattern (for this lab, it is nginx.default.svc.cluster.local):
```bash
<service-name>.default.svc.cluster.local
```
# Scheduling Pods with Taints and Tolerations in Kubernetes
In this hands-on lab, we have been given a three-node cluster. Within that
cluster, we must perform the following tasks to taint the production node in
order to repel work. We will create the necessary taint to mark one of the nodes as prod. Then we will deploy two pods, one to each environment; one pod spec will contain the toleration for the taint.
Log in to the Kube Master server using the credentials on the lab page
(either in your local terminal, using the Instant Terminal feature, or using
the public IP), and work through the objectives listed.
## Taint one of the worker nodes to repel work.
1. In the terminal, run the following command:
```bash
kubectl get nodes
```
This will list out the nodes, which we'll need for the following tasks.
2. Use the following command to taint the node:
```bash
kubectl taint node <node_name> node-type=prod:NoSchedule
```
Here, `<node_name>` is one of the worker node names from the output of `kubectl get nodes`.
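To confirm the taint took effect, describe the node and check its Taints field:
```bash
kubectl describe node <node_name> | grep -i taint
# Taints:  node-type=prod:NoSchedule
```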
## Schedule a pod to the dev environment
1. Using an editor of your choice (e.g., Vim, via `vim dev-pod.yaml`),
enter the following YAML to specify a pod that will be scheduled to the dev
environment:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dev-pod
  labels:
    app: busybox
spec:
  containers:
  - name: dev
    image: busybox
    command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600']
```
2. Use the following command to create the pod:
```bash
kubectl create -f dev-pod.yaml
```
## Schedule a pod to the prod environment
1. Use the following YAML (called prod-deployment.yaml) to specify a pod that
will be scheduled to the prod environment:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prod
  template:
    metadata:
      labels:
        app: prod
    spec:
      containers:
      - args:
        - sleep
        - "3600"
        image: busybox
        name: main
      tolerations:
      - key: node-type
        operator: Equal
        value: prod
        effect: NoSchedule
```
2. Use the following command to create the pod:
```bash
kubectl create -f prod-deployment.yaml
```
## Verify each pod has been scheduled to the correct environment
1. Use the following command to verify the pods have been scheduled:
```bash
kubectl get pods -o wide
```
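Here, dev-pod must land on the untainted worker, since it carries no toleration for the node-type=prod:NoSchedule taint. The prod pod's toleration permits, but does not force, scheduling onto the tainted node, so it may appear on either worker. A rough sketch of the expected output (pod names, IPs, and node names will differ):
```bash
kubectl get pods -o wide
# NAME                    READY   STATUS    RESTARTS   AGE   IP           NODE
# dev-pod                 1/1     Running   0          1m    10.244.1.5   <untainted-worker>
# prod-xxxxxxxxxx-xxxxx   1/1     Running   0          1m    10.244.2.6   <tainted-worker>
```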
# Performing a Rolling Update of an Application in Kubernetes
In this hands-on lab, we have been given a three-node cluster. Within that
cluster, we must deploy our application and then successfully update the
application to a new version without causing any downtime.
Log in to the Kube Master server using the credentials on the lab page
(either in your local terminal, using the Instant Terminal feature, or using
the public IPs), and work through the objectives listed.
## Create and roll out version 1 of the application, and verify a successful deployment.
1. Use the following YAML named kubeserve-deployment.yaml to create your
deployment:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubeserve
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kubeserve
  template:
    metadata:
      name: kubeserve
      labels:
        app: kubeserve
    spec:
      containers:
      - image: linuxacademycontent/kubeserve:v1
        name: app
```
2. Create the deployment:
```bash
kubectl apply -f kubeserve-deployment.yaml --record
```
3. Verify the deployment was successful:
```bash
kubectl rollout status deployments kubeserve
```
4. Verify the app is at the correct version:
```bash
kubectl describe deployment kubeserve
```
## Scale up the application to create high availability.
1. Scale up your application to five replicas:
```bash
kubectl scale deployment kubeserve --replicas=5
```
2. Verify the additional replicas have been created:
```bash
kubectl get pods
```
## Create a service, so users can access the application.
1. Create a service for your deployment:
```bash
kubectl expose deployment kubeserve --port 80 --target-port 80 --type NodePort
```
2. Verify the service is present, and collect the cluster IP:
```bash
kubectl get services
```
3. Verify the service is responding:
```bash
curl http://<ip-address-of-the-service>
```
## Perform a rolling update to version 2 of the application, and verify its success.
1. Start another terminal session to the same Kube Master server. There, use
this curl loop command to see the version change as you perform the rolling
update:
```bash
while true; do curl http://<ip-address-of-the-service>; done
```
2. Perform the update in the original terminal session (while the curl loop is
running in the new terminal session):
```bash
kubectl set image deployments/kubeserve app=linuxacademycontent/kubeserve:v2 --v=6
```
3. View the additional ReplicaSet created during the update:
```bash
kubectl get replicasets
```
4. Verify all pods are up and running:
```bash
kubectl get pods
```
5. View the rollout history:
```bash
kubectl rollout history deployment kubeserve
```
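Because the deployment was created with --record, each revision in the history is annotated with the command that produced it. Had version 2 misbehaved, the same machinery would let us roll back without downtime:
```bash
kubectl rollout undo deployment kubeserve --to-revision=1
```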
# Creating Persistent Storage for Pods in Kubernetes
In this hands-on lab, to decouple our storage from our pods, we will create a
persistent volume to mount for use by our pods. We will deploy a mongodb image
that will contain a MongoDB database. We will first create the persistent
volume, then create the pod YAML for deploying the pod to mount the volume.
We will then delete the pod and create a new pod, which will access that same
volume.
Log in to the Kube Master server using the credentials on the lab page (either
in your local terminal, using the Instant Terminal feature, or using the public
IP), and work through the objectives listed.
## Create a PersistentVolume.
1. Use the following YAML spec for the PersistentVolume named mongodb-pv.yaml:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongodb-pv
spec:
  storageClassName: local-storage
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
```
2. Then, create the PersistentVolume:
```bash
kubectl apply -f mongodb-pv.yaml
```
## Create a PersistentVolumeClaim.
1. Use the following YAML spec for the PersistentVolumeClaim named
mongodb-pvc.yaml:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-pvc
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```
2. Then, create the PersistentVolumeClaim:
```bash
kubectl apply -f mongodb-pvc.yaml
```
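Before moving on, you can confirm the claim bound to the volume created earlier; the STATUS column should read Bound (capacity and age will vary):
```bash
kubectl get pvc mongodb-pvc
# NAME          STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS    AGE
# mongodb-pvc   Bound    mongodb-pv   1Gi        RWO            local-storage   10s
```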
## Create a pod from the mongo image, with a volume mounted at the path /data/db.
1. Use the following YAML spec for the pod named mongodb-pod.yaml:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mongodb
spec:
  containers:
  - image: mongo
    name: mongodb
    volumeMounts:
    - name: mongodb-data
      mountPath: /data/db
    ports:
    - containerPort: 27017
      protocol: TCP
  volumes:
  - name: mongodb-data
    persistentVolumeClaim:
      claimName: mongodb-pvc
```
2. Then, create the pod:
```bash
kubectl apply -f mongodb-pod.yaml
```
3. Verify the pod was created:
```bash
kubectl get pods
```
## Access the node and view the data within the volume.
1. Run the following command:
```bash
kubectl get nodes
```
2. Connect to the worker node (get the <node_hostname> from the NAME column of
the above output), using the same password as the Kube Master:
```bash
ssh <node_hostname>
```
3. Switch to the /mnt/data directory:
```bash
cd /mnt/data
```
4. List the contents of the directory:
```bash
ls
```
## Delete the pod and create a new pod with the same YAML spec.
1. Exit out of the worker node:
```bash
exit
```
2. Delete the pod:
```bash
kubectl delete pod mongodb
```
3. Create a new pod:
```bash
kubectl apply -f mongodb-pod.yaml
```
## Verify the data still resides on the volume.
1. Log in to the worker node again:
```bash
ssh <node_hostname>
```
2. Switch to the /mnt/data directory:
```bash
cd /mnt/data
```
3. List the contents of the directory:
```bash
ls
```
# Creating a ClusterRole to Access a PV in Kubernetes
We have been given access to a three-node cluster. Within that cluster, a PV
has already been provisioned. We will need to make sure we can access the PV
directly from a pod in our cluster. By default, pods cannot access PVs
directly, so we will need to create a ClusterRole and test the access after
it's been created. Every ClusterRole requires a ClusterRoleBinding to bind the
role to a user, service account, or group. After we have created the
ClusterRole and ClusterRoleBinding, we will try to access the PV directly
from a pod.
Log in to the Kube Master server using the credentials on the lab page (either
in your local terminal, using the Instant Terminal feature, or using the public
IP), and work through the objectives listed.
## View the Persistent Volume.
1. Use the following command to view the Persistent Volume within the cluster:
```bash
kubectl get pv
```
## Create a ClusterRole.
1. Use the following command to create the ClusterRole:
```bash
kubectl create clusterrole pv-reader --verb=get,list \
--resource=persistentvolumes
```
## Create a ClusterRoleBinding.
1. Use the following command to create the ClusterRoleBinding:
```bash
kubectl create clusterrolebinding pv-test --clusterrole=pv-reader \
--serviceaccount=web:default
```
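To check that the binding took effect before creating the pod, ask the API server whether the default service account in the web namespace can now list PersistentVolumes:
```bash
kubectl auth can-i list persistentvolumes \
  --as=system:serviceaccount:web:default
# yes
```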
## Create a pod to access the PV.
1. Use the following YAML to create a pod that will proxy the connection and
allow you to curl the address:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: curlpod
  namespace: web
spec:
  containers:
  - image: tutum/curl
    command: ["sleep", "9999999"]
    name: main
  - image: linuxacademycontent/kubectl-proxy
    name: proxy
  restartPolicy: Always
```
2. Use the following command to create the pod:
```bash
kubectl apply -f curlpod.yaml
```
## Request access to the PV from the pod.
1. Use the following command to open a shell inside the pod:
```bash
kubectl exec -it curlpod -n web -- sh
```
2. Use the following command to curl the PV resource:
```bash
curl localhost:8001/api/v1/persistentvolumes
```
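On success, the API returns a JSON PersistentVolumeList containing the pre-provisioned PV; without the ClusterRoleBinding, the same request would instead come back with a 403 Forbidden error. Abridged expected output:
```bash
curl localhost:8001/api/v1/persistentvolumes
# {
#   "kind": "PersistentVolumeList",
#   "apiVersion": "v1",
#   ...
# }
```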