k8s monitoring components: Heapster deployment

Posted by 小怪獣55

For building a k8s cluster with Ansible, see: https://blog.51cto.com/taowenwu/5222088

heapster: data collection

influxdb: data storage

grafana: web dashboard

1. Prepare the images and configuration files

mkdir -p /etc/ansible/manifests/dns/kube-dns/heapster
cd /etc/ansible/manifests/dns/kube-dns/heapster


2. Load the images and push them to the local Harbor registry

docker load -i heapster-amd64_v1.5.1.tar && \
docker load -i heapster-grafana-amd64-v4.4.3.tar && \
docker load -i heapster-influxdb-amd64_v1.3.3.tar

docker tag 4129aa919411 harbor.gesila.com/k8s/heapster-amd64:v1.5.1 && \
docker tag 8cb3de219af7 harbor.gesila.com/k8s/heapster-grafana-amd64:v4.4.3 && \
docker tag 1315f002663c harbor.gesila.com/k8s/heapster-influxdb-amd64:v1.3.3

docker push harbor.gesila.com/k8s/heapster-amd64:v1.5.1 && \
docker push harbor.gesila.com/k8s/heapster-grafana-amd64:v4.4.3 && \
docker push harbor.gesila.com/k8s/heapster-influxdb-amd64:v1.3.3
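The tag commands above pair each image ID with its Harbor reference by hand. As a hypothetical convenience (the helper name is made up, not from this post), the target reference can be derived from the archive filename, assuming the `name_version.tar` convention used by `heapster-amd64_v1.5.1.tar`:

```shell
#!/bin/sh
# Hypothetical helper: derive "registry/name:version" from an archive named
# "name_vX.Y.Z.tar", for use with `docker tag` and `docker push`.
REGISTRY=harbor.gesila.com/k8s

tag_from_archive() {
    f=${1%.tar}                         # strip the .tar suffix
    echo "${REGISTRY}/${f%_*}:${f##*_}" # split name and version on the last underscore
}

tag_from_archive heapster-amd64_v1.5.1.tar
# prints: harbor.gesila.com/k8s/heapster-amd64:v1.5.1
```

Note that the grafana archive in this post (`heapster-grafana-amd64-v4.4.3.tar`) uses a hyphen rather than an underscore before the version, so it would not match this convention without renaming.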

3. Update the image references in the configuration files

vim grafana.yaml
- name: grafana
  image: harbor.gesila.com/k8s/heapster-grafana-amd64:v4.4.3

vim heapster.yaml
- name: heapster
  image: harbor.gesila.com/k8s/heapster-amd64:v1.5.1

vim influxdb.yaml
- name: influxdb
  image: harbor.gesila.com/k8s/heapster-influxdb-amd64:v1.3.3
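Editing each file in vim works, but the same rewrite can be scripted. A sketch using GNU sed on a scratch copy (the file content below is a made-up one-line sample; in practice the sed line would be run against grafana.yaml, heapster.yaml and influxdb.yaml in the manifest directory):

```shell
#!/bin/sh
# Demo on a scratch file so nothing in the real manifest directory is touched.
cat > /tmp/grafana-demo.yaml <<'EOF'
        image: k8s.gcr.io/heapster-grafana-amd64:v4.4.3
EOF

# Rewrite the registry prefix, keeping the image name and tag (GNU sed -i).
sed -i 's#image: .*/heapster#image: harbor.gesila.com/k8s/heapster#' /tmp/grafana-demo.yaml

grep 'image:' /tmp/grafana-demo.yaml
# prints:         image: harbor.gesila.com/k8s/heapster-grafana-amd64:v4.4.3
```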

4. Apply the manifests

root@k8s-master:/etc/ansible/manifests/dns/kube-dns/heapster# kubectl apply -f .
deployment.extensions/monitoring-grafana created
service/monitoring-grafana created
serviceaccount/heapster created
clusterrolebinding.rbac.authorization.k8s.io/heapster created
deployment.extensions/heapster created
service/heapster created
deployment.extensions/monitoring-influxdb created
service/monitoring-influxdb created

5. Verify

root@k8s-master:/etc/ansible/manifests/dns/kube-dns/heapster# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-64dfd5bf4c-q94ls   1/1     Running   0          3h18m
calico-node-9ccdk                          2/2     Running   2          23h
calico-node-k297m                          2/2     Running   4          23h
calico-node-w6m6p                          2/2     Running   10         23h
heapster-7689489d99-dlz5s                  1/1     Running   0          20s
kube-dns-5744cc9dff-rxgjk                  3/3     Running   0          45m
kubernetes-dashboard-7b5f5b777c-s7djw      1/1     Running   0          3h18m
monitoring-grafana-6949dd99d6-4dcdq        1/1     Running   0          21s
monitoring-influxdb-7cb4988b9c-ktqgz       1/1     Running   0          21s
root@k8s-master:/etc/ansible/manifests/dns/kube-dns/heapster# kubectl cluster-info
Kubernetes master is running at https://192.168.47.49:6443
KubeDNS is running at https://192.168.47.49:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
kubernetes-dashboard is running at https://192.168.47.49:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
monitoring-grafana is running at https://192.168.47.49:6443/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
monitoring-influxdb is running at https://192.168.47.49:6443/api/v1/namespaces/kube-system/services/monitoring-influxdb/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
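The cluster-info URLs above all follow the API server's service-proxy pattern. A small sketch that builds such a path (the helper name is made up; the namespace and service values match the output above):

```shell
#!/bin/sh
# Hypothetical helper: build the API-server proxy path for a ClusterIP service,
# matching the URL scheme shown by `kubectl cluster-info` above.
proxy_path() {  # usage: proxy_path <namespace> <service>
    echo "/api/v1/namespaces/$1/services/$2/proxy"
}

proxy_path kube-system monitoring-grafana
# prints: /api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
```

Appending that path to the API server address (https://192.168.47.49:6443 in this cluster) gives the Grafana URL shown in the cluster-info output.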


6. Manifest files

6.1.grafana.yaml

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      containers:
      - name: grafana
        image: harbor.gesila.com/k8s/heapster-grafana-amd64:v4.4.3
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /var
          name: grafana-storage
        env:
        - name: INFLUXDB_HOST
          value: monitoring-influxdb
        - name: GF_SERVER_HTTP_PORT
          value: "3000"
        # The following env variables are required to make Grafana accessible via
        # the kubernetes api-server proxy. On production clusters, we recommend
        # removing these env variables, setup auth for grafana, and expose the grafana
        # service using a LoadBalancer or a public IP.
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          # If you're only using the API Server proxy, set this value instead:
          value: /api/v1/namespaces/kube-system/services/monitoring-grafana/proxy/
          #value: /
      volumes:
      - name: grafana-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: kube-system
spec:
  # In a production setup, we recommend accessing Grafana through an external Loadbalancer
  # or through a public IP.
  # type: LoadBalancer
  # You could also use NodePort to expose the service at a randomly-generated port
  # type: NodePort
  ports:
  - port: 80
    targetPort: 3000
  selector:
    k8s-app: grafana
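The comments in the Service above suggest NodePort as an alternative to the API server proxy. A hedged sketch of that variant (the nodePort value 30030 is an arbitrary choice from the default 30000-32767 NodePort range, not something from this post):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 3000
    nodePort: 30030   # arbitrary example port in the default 30000-32767 range
  selector:
    k8s-app: grafana
```

With this variant Grafana is reachable at http://<any-node-ip>:30030 directly, without going through the API server.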

6.2.heapster.yaml

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heapster
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: heapster
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: system:heapster
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
    spec:
      serviceAccountName: heapster
      containers:
      - name: heapster
        image: harbor.gesila.com/k8s/heapster-amd64:v1.5.1
        imagePullPolicy: Always
        command:
        - /heapster
        - --source=kubernetes:https://kubernetes.default
        - --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    #kubernetes.io/cluster-service: "true"
    kubernetes.io/name: Heapster
  name: heapster
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 8082
  selector:
    k8s-app: heapster

6.3.influxdb.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-influxdb
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: influxdb
    spec:
      containers:
      - name: influxdb
        image: harbor.gesila.com/k8s/heapster-influxdb-amd64:v1.3.3
        volumeMounts:
        - mountPath: /data
          name: influxdb-storage
      volumes:
      - name: influxdb-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: monitoring-influxdb
  name: monitoring-influxdb
  namespace: kube-system
spec:
  ports:
  - port: 8086
    targetPort: 8086
  selector:
    k8s-app: influxdb
