k8s Examples

Posted by 小怪獣55


1. Manually adjusting the pod count

kubectl scale scales the number of pods running in the k8s environment out (increase) or in (decrease).

# Check the current pod count
root@k8s-master:/usr/local/haproxy_exporter# kubectl get deployment -n linux36
NAME READY UP-TO-DATE AVAILABLE AGE
linux36-nginx-deployment 1/1 1 1 21h
linux36-tomcat-app1-deployment 1/1 1 1 21h

# View the command help
root@k8s-master:/usr/local/haproxy_exporter# kubectl --help | grep scale
scale Set a new size for a Deployment, ReplicaSet, Replication Controller, or Job
autoscale Auto-scale a Deployment, ReplicaSet, or ReplicationController

# Scale out / scale in
root@k8s-master:/usr/local/haproxy_exporter# kubectl scale deployment/linux36-tomcat-app1-deployment --replicas=2 -n linux36
deployment.extensions/linux36-tomcat-app1-deployment scaled

# Verify
root@k8s-master:/usr/local/haproxy_exporter# kubectl get deployment -n linux36
NAME READY UP-TO-DATE AVAILABLE AGE
linux36-nginx-deployment 1/1 1 1 21h
linux36-tomcat-app1-deployment 2/2 2 2 21h
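
kubectl scale also accepts a precondition, so the change is only applied when the current replica count matches an expected value (a minimal sketch reusing the deployment above):

# only scale to 3 replicas if the deployment currently has exactly 2
kubectl scale deployment/linux36-tomcat-app1-deployment --current-replicas=2 --replicas=3 -n linux36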

2. HPA: automatically scaling the pod count

kubectl autoscale automatically controls the number of pods running in the k8s cluster (horizontal autoscaling); the pod count range and the trigger conditions must be configured in advance.

Starting with version 1.1, k8s added a controller named HPA (Horizontal Pod Autoscaler),
which automatically scales pods in or out based on pod resource (CPU/Memory) utilization.
Early versions could only use CPU utilization as the trigger condition, based on the Heapster component.
Since k8s 1.11, data collection is done by Metrics Server; the collected data is exposed through aggregated APIs
(Aggregated API) such as metrics.k8s.io, custom.metrics.k8s.io and external.metrics.k8s.io,
and is then available for the HPA controller to query, so that pods can be scaled in or out based on the utilization of a given resource.
The controller manager queries the metrics resource usage every 15s by default (configurable via --horizontal-pod-autoscaler-sync-period).
#Three metrics types are supported:
->Predefined metrics (e.g. pod CPU), calculated as a utilization percentage
->Custom pod metrics, calculated as raw values
->Custom object metrics

#Two metrics query methods are supported:
->Heapster
->Custom REST API

#Multiple metrics are supported
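
Once Metrics Server is running (see 2.1 below), the aggregated metrics API can be queried directly to confirm that HPA will have data to work with:

# query the resource metrics API exposed through the kube-apiserver aggregation layer
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/linux36/pods"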

2.1. Prepare metrics-server

Use metrics-server as the data source for HPA.

Clone the code:

git clone https://github.com/kubernetes-incubator/metrics-server.git
cd metrics-server/

Image (either pull from upstream or load a local tarball, then tag and push to the local harbor):

docker pull k8s.gcr.io/metrics-server-amd64:v0.3.3

docker load -i metrics-server-amd64_v0.3.3.tar.gz
docker tag 1a76c5318f6d harbor.gesila.com/k8s/metrics-server-amd64:v0.3.3
docker push harbor.gesila.com/k8s/metrics-server-amd64:v0.3.3
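
A quick check that the image tag is present locally (output depends on the local docker daemon):

docker images | grep metrics-server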

2.2. YAML file

Modify the image source:

metrics-server-deployment.yaml

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        #image: k8s.gcr.io/metrics-server-amd64:v0.3.0
        image: harbor.gesila.com/k8s/metrics-server-amd64:v0.3.3
        imagePullPolicy: IfNotPresent
        command:
        - /metrics-server
        - --metric-resolution=30s
        - --kubelet-insecure-tls
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp
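
The image line can also be swapped non-interactively before applying, for example with sed (a sketch; the path follows the repo layout used in 2.3, and the source tag should match whatever the file actually contains):

sed -i 's#k8s.gcr.io/metrics-server-amd64:v0.3.0#harbor.gesila.com/k8s/metrics-server-amd64:v0.3.3#' deploy/1.8+/metrics-server-deployment.yaml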

2.3. Create the metrics-server service

root@k8s-master:~/metrics-server-master# kubectl apply -f deploy/1.8+/
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.extensions/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created

kubectl get pods  -n kube-system
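
After the metrics-server pod is Running, the API registration and the collected data can be verified (kubectl top needs a minute or two of samples before it returns values):

# the APIService registered by metrics-server should report Available=True
kubectl get apiservice v1beta1.metrics.k8s.io

# node/pod resource usage served by metrics-server
kubectl top nodes
kubectl top pods -n kube-system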

2.4. Modify the controller-manager startup parameters

kube-controller-manager --help | grep horizontal-pod-autoscaler-sync-period

vim /etc/systemd/system/kube-controller-manager.service
----------------------------------------------------------------------------
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/usr/bin/kube-controller-manager \
--address=127.0.0.1 \
--master=http://127.0.0.1:8080 \
--allocate-node-cidrs=true \
--service-cluster-ip-range=10.20.0.0/16 \
--cluster-cidr=172.31.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
--root-ca-file=/etc/kubernetes/ssl/ca.pem \
--horizontal-pod-autoscaler-use-rest-clients=true \
--leader-elect=true \
--horizontal-pod-autoscaler-sync-period=10s \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target

--horizontal-pod-autoscaler-use-rest-clients=true makes the HPA controller read metrics through the aggregated API (metrics-server) instead of Heapster; --horizontal-pod-autoscaler-sync-period=10s sets the data collection interval.

systemctl daemon-reload && systemctl restart kube-controller-manager
ps -ef |grep kube-controller-manager

2.5. Configure scaling via the command line

root@k8s-master:~/metrics-server-master# kubectl get pods -n linux36
NAME READY STATUS RESTARTS AGE
linux36-nginx-deployment-598cb57658-7725v 1/1 Running 0 3h39m
linux36-tomcat-app1-deployment-74c7768479-877fm 1/1 Running 1 27h
root@k8s-master:~/metrics-server-master# kubectl get deployment -n linux36
NAME READY UP-TO-DATE AVAILABLE AGE
linux36-nginx-deployment 1/1 1 1 27h
linux36-tomcat-app1-deployment 1/1 1 1 27h
root@k8s-master:~/metrics-server-master# kubectl autoscale deployment/linux36-nginx-deployment --min=1 --max=3 --cpu-percent=80 -n linux36
horizontalpodautoscaler.autoscaling/linux36-nginx-deployment autoscaled

#Verification:
kubectl describe deployment/linux36-nginx-deployment -n linux36
--------------------------------------------
DESIRED      the desired number of replicas expected to end up in READY state
CURRENT      the current total number of replicas
UP-TO-DATE   the number of replicas that have completed the update
AVAILABLE    the number of replicas currently available
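
kubectl autoscale creates a HorizontalPodAutoscaler object named after the target deployment; it can be inspected directly, and its YAML form corresponds to the hand-written definition in 2.6:

kubectl get hpa -n linux36
kubectl get hpa linux36-nginx-deployment -n linux36 -o yaml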

2.6. Define the scaling configuration in a YAML file

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    app: linux36-tomcat-app1-deployment-label
  name: linux36-tomcat-app1-deployment
  namespace: linux36
spec:
  replicas: 1
  selector:
    matchLabels:
      app: linux36-tomcat-app1-selector
  template:
    metadata:
      labels:
        app: linux36-tomcat-app1-selector
    spec:
      containers:
      - name: linux36-tomcat-app1-container
        image: harbor.magedu.net/linux36/tomcat-app1:v1
        #command: ["/apps/tomcat/bin/run_tomcat.sh"]
        #imagePullPolicy: IfNotPresent
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
        resources:
          limits:
            cpu: 2
            memory: "2048Mi"
          requests:
            cpu: 500m
            memory: "1024Mi"
        volumeMounts:
        - name: linux36-images
          mountPath: /data/tomcat/webapps/myapp/images
          readOnly: false
        - name: linux36-static
          mountPath: /data/tomcat/webapps/myapp/static
          readOnly: false
      volumes:
      - name: linux36-images
        nfs:
          server: 192.168.47.47
          path: /data/k8sdata/linux36/images
      - name: linux36-static
        nfs:
          server: 192.168.47.47
          path: /data/k8sdata/linux36/static
      #nodeSelector:          # goes after the containers section, at the pod spec level
      #  project: linux36     # node label to match


---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: linux36-tomcat-app1-service-label
  name: linux36-tomcat-app1-service
  namespace: linux36
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 30003
  selector:
    app: linux36-tomcat-app1-selector

---
apiVersion: autoscaling/v2beta1              # API version
kind: HorizontalPodAutoscaler                # object type
metadata:                                    # object metadata
  namespace: linux36                         # namespace the object is created in
  name: linux36-tomcat-app1-podautoscaler    # object name
  labels:                                    # custom labels
    app: linux36-tomcat-app1                 # custom label name
    version: v2beta1                         # custom label recording the api version
spec:                                        # object spec
  scaleTargetRef:                            # target to scale: Deployment, ReplicationController or ReplicaSet
    apiVersion: apps/v1                      # API version of the target (HorizontalPodAutoscaler.spec.scaleTargetRef.apiVersion)
    kind: Deployment                         # target object kind is Deployment
    name: linux36-tomcat-app1-deployment     # name of the target deployment
  minReplicas: 2                             # minimum number of pods
  maxReplicas: 5                             # maximum number of pods
  metrics:                                   # metrics used for scaling decisions
  - type: Resource                           # resource metric
    resource:
      name: cpu                              # resource name: cpu
      targetAverageUtilization: 80           # target CPU utilization (%)
  - type: Resource                           # resource metric
    resource:
      name: memory                           # resource name: memory
      targetAverageValue: 200Mi              # target average memory usage
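
Applying the manifest creates or updates all three objects (Deployment, Service, HPA) in one step; the file name below is only illustrative:

kubectl apply -f linux36-tomcat-app1.yaml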

2.7. Verify the HPA

kubectl get hpa -n linux36
kubectl describe hpa linux36-nginx-deployment -n linux36
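
To watch an HPA react, some load can be generated against the scaled pods, for example with a throwaway busybox pod (a sketch: the target URL is an assumption, point it at the service fronting whichever deployment the HPA targets):

# hypothetical target URL -- replace with the real service of the autoscaled deployment
kubectl run load-generator --image=busybox:1.28 --restart=Never -n linux36 -- /bin/sh -c "while true; do wget -q -O- http://linux36-tomcat-app1-service/myapp/; done"

# watch current/target metrics and the replica count change
kubectl get hpa -n linux36 -w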

3. Dynamically modify resources with kubectl edit

Used when a configuration needs to be changed temporarily and take effect immediately.

root@k8s-master:/usr/local/haproxy_exporter# kubectl get deployment -n linux36
NAME READY UP-TO-DATE AVAILABLE AGE
linux36-nginx-deployment 1/1 1 1 21h
linux36-tomcat-app1-deployment 1/1 1 1 21h


# Modify the replica count / image address
kubectl edit deployment linux36-nginx-deployment -n linux36
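
kubectl edit opens the live object in an editor; for scripted, non-interactive changes the same kind of edit can be done with kubectl patch (a sketch changing only the replica count):

kubectl patch deployment linux36-nginx-deployment -n linux36 -p '{"spec":{"replicas":2}}'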


# Verify that the replica count matches what was set with edit
root@k8s-master:/usr/local/haproxy_exporter# kubectl get deployment -n linux36
NAME READY UP-TO-DATE AVAILABLE AGE
linux36-nginx-deployment 2/2 2 2 21h
linux36-tomcat-app1-deployment 1/1 1 1 21h

root@k8s-master:/usr/local/haproxy_exporter# kubectl get pods -n linux36
NAME READY STATUS RESTARTS AGE
linux36-nginx-deployment-6d858d49d-2l6pd 1/1 Running 1 21h
linux36-nginx-deployment-6d858d49d-rdhbg 1/1 Running 0 31s
linux36-tomcat-app1-deployment-74c7768479-877fm 1/1 Running 1 21h
