Hands-on Operations with Kubernetes (k8s): Service
1. Introduction to Service
- A Service can be seen as the external access point for a group of Pods that provide the same service. With a Service, applications can easily get service discovery and load balancing.
- By default a Service only provides layer-4 load balancing; it has no layer-7 capability (that can be added with Ingress).
- Service types (a NodePort example is sketched after this list):
  ClusterIP: the default. k8s automatically assigns the Service a virtual IP that is only reachable from inside the cluster.
  NodePort: exposes the Service on a specified port of every Node; a request to any NodeIP:nodePort is routed to the ClusterIP.
  LoadBalancer: builds on NodePort and, with the help of a cloud provider, creates an external load balancer that forwards requests to NodeIP:NodePort; this mode is only usable on cloud platforms.
  ExternalName: forwards the Service to a given domain name via a DNS CNAME record (set with spec.externalName).
- A Service is implemented jointly by the kube-proxy component and iptables.
- When kube-proxy implements Services through iptables, it has to maintain a large number of iptables rules on the host; with many Pods, constantly refreshing these rules consumes a lot of CPU.
- Services in IPVS mode allow a k8s cluster to support a much larger number of Pods.
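For illustration, a NodePort Service declared in YAML might look like the sketch below; the name, label and nodePort value are illustrative only and are not taken from this cluster:

apiVersion: v1
kind: Service
metadata:
  name: myapp-nodeport          # illustrative name
spec:
  type: NodePort
  selector:
    app: nginx                  # must match the labels of the target Pods
  ports:
  - protocol: TCP
    port: 80                    # port on the ClusterIP
    targetPort: 80              # port on the container
    nodePort: 30080             # optional; must be in the 30000-32767 range by default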
First start the Harbor registry, then export the kubeconfig variable and check the cluster state with kubectl:
[root@server1 ~]# cd harbor/
[root@server1 harbor]# docker-compose start
[root@server2 ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
[root@server2 ~]# kubectl get pod -n kube-system
2. Enable IPVS mode for kube-proxy
Make sure the yum repository is available; a local repository is used here:
[root@server2 k8s]# cd /etc/yum.repos.d/
[root@server2 yum.repos.d]# ls
docker.repo dvd.repo k8s.repo redhat.repo
[root@server2 yum.repos.d]# vim k8s.repo
[root@server2 yum.repos.d]# cat k8s.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=0 ## set to 0 so this repo is not enabled; the local repository is used for installation instead
gpgcheck=0
Install the ipvsadm package on every node with yum install -y ipvsadm. After the installation:
[root@server2 ~]# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
[root@server2 ~]# ipvsadm -ln    ## list the IPVS rules; IPVS is a kernel feature
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
[root@server2 ~]# lsmod | grep ip_vs
ip_vs_sh 12688 0
ip_vs_wrr 12697 0
ip_vs_rr 12600 0
ip_vs 145497 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack 133095 10 ip_vs,nf_nat,nf_nat_ipv4,nf_nat_ipv6,xt_conntrack,nf_nat_masquerade_ipv4,nf_nat_masquerade_ipv6,nf_conntrack_netlink,nf_conntrack_ipv4,nf_conntrack_ipv6
libcrc32c 12644 4 xfs,ip_vs,nf_nat,nf_conntrack
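On these nodes the ip_vs modules are already loaded, as the lsmod output shows. If they were missing, they could be loaded manually before switching kube-proxy to IPVS mode; a minimal sketch:

modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack        # connection tracking, required by IPVS NAT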
[root@server2 ~]# kubectl -n kube-system get cm    ## list the ConfigMaps
NAME DATA AGE
coredns 1 25h
extension-apiserver-authentication 6 25h
kube-flannel-cfg 2 24h
kube-proxy 2 25h
kube-root-ca.crt 1 25h
kubeadm-config 2 25h
kubelet-config-1.21 1 25h
[root@server2 ~]# kubectl -n kube-system edit cm kube-proxy    ## edit the configuration and set mode to ipvs; when left empty the default is iptables
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      strictARP: false
      syncPeriod: 0s
      tcpFinTimeout: 0s
      tcpTimeout: 0s
      udpTimeout: 0s
    kind: KubeProxyConfiguration
    metricsBindAddress: ""
    mode: "ipvs"
configmap/kube-proxy edited
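Instead of editing the ConfigMap interactively, the same change could be made non-interactively; a sketch, assuming the field is still at its default value mode: "":

kubectl -n kube-system get cm kube-proxy -o yaml | sed 's/mode: ""/mode: "ipvs"/' | kubectl apply -f -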
After changing the configuration it has to be reloaded. Because kube-proxy is managed by a controller (a DaemonSet), it is enough to delete the existing Pods: the controller recreates them, and the new Pods read the updated configuration.
kube-proxy then uses the Linux IPVS kernel module and schedules a Service's Pods with rr (round-robin).
[root@server2 ~]# kubectl -n kube-system get daemonsets.apps
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-flannel-ds 3 3 3 3 3 <none> 24h
kube-proxy 3 3 3 3 3 kubernetes.io/os=linux 25h
[root@server2 ~]# kubectl -n kube-system get pod | grep kube-proxy | awk '{system("kubectl -n kube-system delete pod "$1"")}'
pod "kube-proxy-866lg" deleted
pod "kube-proxy-hxgbt" deleted
pod "kube-proxy-jrc9z" deleted
[root@server2 ~]# ipvsadm -ln
##after the restart, the IPVS rules are visible on every node; 10.96.0.10 is the CLUSTER-IP of kube-dns, and 10.244.0.8 / 10.244.0.9 are the addresses of the DNS Pods
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.96.0.1:443 rr
-> 172.25.25.2:6443 Masq 1 0 0
TCP 10.96.0.10:53 rr
-> 10.244.0.8:53 Masq 1 0 0
-> 10.244.0.9:53 Masq 1 0 0
TCP 10.96.0.10:9153 rr
-> 10.244.0.8:9153 Masq 1 0 0
-> 10.244.0.9:9153 Masq 1 0 0
UDP 10.96.0.10:53 rr
-> 10.244.0.8:53 Masq 1 0 0
-> 10.244.0.9:53 Masq 1 0 0
[root@server2 ~]# kubectl -n kube-system get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 8d
[root@server2 ~]# kubectl -n kube-system get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-85ffb569d4-85kp7 1/1 Running 3 8d 10.244.0.9 server2 <none> <none>
coredns-85ffb569d4-bd579 1/1 Running 3 8d 10.244.0.8 server2 <none> <none>
etcd-server2 1/1 Running 3 8d 172.25.25.2 server2 <none> <none>
kube-apiserver-server2 1/1 Running 3 8d 172.25.25.2 server2 <none> <none>
kube-controller-manager-server2 1/1 Running 3 8d 172.25.25.2 server2 <none> <none>
kube-flannel-ds-f8qhr 1/1 Running 2 8d 172.25.25.4 server4 <none> <none>
kube-flannel-ds-hvfwp 1/1 Running 2 8d 172.25.25.3 server3 <none> <none>
kube-flannel-ds-mppbp 1/1 Running 3 8d 172.25.25.2 server2 <none> <none>
kube-proxy-6f78h 1/1 Running 0 4m10s 172.25.25.2 server2 <none> <none>
kube-proxy-7jvkr 1/1 Running 0 4m12s 172.25.25.4 server4 <none> <none>
kube-proxy-9d5s7 1/1 Running 0 4m5s 172.25.25.3 server3 <none> <none>
kube-scheduler-server2 1/1 Running 3 8d 172.25.25.2 server2 <none> <none>
In IPVS mode, after a Service is created kube-proxy adds a virtual interface, kube-ipvs0, on the host and assigns the Service IPs to it.
[root@server2 ~]# ip addr show kube-ipvs0
10: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
link/ether 52:54:5e:c0:51:56 brd ff:ff:ff:ff:ff:ff
inet 10.96.0.10/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.96.0.1/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
Create a new Deployment to observe the effect:
[root@server2 k8s]# kubectl get all
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 8d
[root@server2 k8s]# ls
cronjob.yaml daemonset.yaml deployment.yaml job.yaml pod.yaml rs.yaml svc.yaml
[root@server2 k8s]# vim deployment.yaml
[root@server2 k8s]# kubectl apply -f deployment.yaml
deployment.apps/deployment-example created
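The deployment.yaml applied above is not reproduced in the capture; a minimal sketch consistent with the app=nginx label, the three replicas and the myapp responses seen later (the image name is an assumption):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-example
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: myapp
        image: myapp:v1          # assumed image; the "Hello MyApp" responses below suggest the common myapp demo image
        ports:
        - containerPort: 80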
[root@server2 k8s]# kubectl get pod
NAME READY STATUS RESTARTS AGE
deployment-example-5b768f7647-9wlvc 1/1 Running 0 4s
deployment-example-5b768f7647-j6bvs 1/1 Running 0 4s
deployment-example-5b768f7647-ntmk7 1/1 Running 0 4s
[root@server2 k8s]# kubectl get pod --show-labels
NAME READY STATUS RESTARTS AGE LABELS
deployment-example-5b768f7647-9wlvc 1/1 Running 0 52s app=nginx,pod-template-hash=5b768f7647
deployment-example-5b768f7647-j6bvs 1/1 Running 0 52s app=nginx,pod-template-hash=5b768f7647
deployment-example-5b768f7647-ntmk7 1/1 Running 0 52s app=nginx,pod-template-hash=5b768f7647
[root@server2 k8s]# ipvsadm -ln
##the Pods exist, but nothing has been added to IPVS yet because no Service has been created for them
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.96.0.1:443 rr
-> 172.25.25.2:6443 Masq 1 0 0
TCP 10.96.0.10:53 rr
-> 10.244.0.8:53 Masq 1 0 0
-> 10.244.0.9:53 Masq 1 0 0
TCP 10.96.0.10:9153 rr
-> 10.244.0.8:9153 Masq 1 0 0
-> 10.244.0.9:9153 Masq 1 0 0
UDP 10.96.0.10:53 rr
-> 10.244.0.8:53 Masq 1 0 0
-> 10.244.0.9:53 Masq 1 0 0
Create the Service with a command and then export it as a YAML file:
[root@server2 k8s]# kubectl expose deployment deployment-example --port=80 --target-port=80
service/deployment-example exposed
[root@server2 k8s]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
deployment-example ClusterIP 10.105.194.76 <none> 80/TCP 8s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 8d
[root@server2 k8s]# kubectl describe svc deployment-example
Name: deployment-example
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=nginx
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.105.194.76
IPs: 10.105.194.76
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints: 10.244.1.26:80,10.244.2.33:80,10.244.2.34:80
Session Affinity: None
Events: <none>
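The Endpoints line shows that the Service simply tracks whichever Pods match the app=nginx selector; the same list can be queried directly, for example:

kubectl get endpoints deployment-example

If the Deployment were scaled up or down, this Endpoints list (and the IPVS backends shown below) would change accordingly.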
[root@server2 k8s]# kubectl get svc deployment-example -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2021-06-12T13:30:52Z"
  name: deployment-example
  namespace: default
  resourceVersion: "60216"
  uid: 7729b22e-4e26-4e6e-afa1-7c4e0a37e019
spec:
  clusterIP: 10.105.194.76
  clusterIPs:
  - 10.105.194.76
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
[root@server2 k8s]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.96.0.1:443 rr
-> 172.25.25.2:6443 Masq 1 0 0
TCP 10.96.0.10:53 rr
-> 10.244.0.8:53 Masq 1 0 0
-> 10.244.0.9:53 Masq 1 0 0
TCP 10.96.0.10:9153 rr
-> 10.244.0.8:9153 Masq 1 0 0
-> 10.244.0.9:9153 Masq 1 0 0
TCP 10.105.194.76:80 rr
##the new Service now has three Pod backends
-> 10.244.1.26:80 Masq 1 0 0
-> 10.244.2.33:80 Masq 1 0 0
-> 10.244.2.34:80 Masq 1 0 0
UDP 10.96.0.10:53 rr
-> 10.244.0.8:53 Masq 1 0 0
-> 10.244.0.9:53 Masq 1 0 0
Testing now, requests are load-balanced across the three backend Pods:
[root@server2 k8s]# curl 10.105.194.76
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@server2 k8s]# curl 10.105.194.76/hostname.html
deployment-example-5b768f7647-j6bvs
[root@server2 k8s]# curl 10.105.194.76/hostname.html
deployment-example-5b768f7647-9wlvc
[root@server2 k8s]# curl 10.105.194.76/hostname.html
deployment-example-5b768f7647-ntmk7
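To see the round-robin distribution more clearly, the request can be repeated in a loop; a sketch:

for i in $(seq 6); do curl -s 10.105.194.76/hostname.html; done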
[root@server2 k8s]# ipvsadm -ln
After the test, this command shows how many times each backend was scheduled (the connection counters).
When the Service is deleted with kubectl delete svc deployment-example, its entries disappear from the IPVS table as well.
Besides generating the YAML from a command as shown above, the YAML file can also be written directly:
[root@server2 k8s]# vim svc.yaml
[root@server2 k8s]# cat svc.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
[root@server2 k8s]# kubectl apply -f svc.yaml
service/myservice created
[root@server2 k8s]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 8d
myservice ClusterIP 10.104.41.30 <none> 80/TCP 5s
[root@server2 k8s]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.96.0.1:443 rr
-> 172.25.25.2:6443 Masq 1 0 0
TCP 10.96.0.10:53 rr
-> 10.244.0.8:53 Masq 1 0 0
-> 10.244.0.9:53 Masq 1 0 0
TCP 10.96.0.10:9153 rr