k8s 1.5.4: mounting a GlusterFS volume
A collection of volume examples:
https://github.com/kubernetes/kubernetes/tree/master/examples/volumes
http://www.dockerinfo.net/2926.html
http://dockone.io/article/2087
https://www.kubernetes.org.cn/1146.html
https://kubernetes.io/docs/user-guide/volumes/
Installing and deploying the k8s cluster
http://jerrymin.blog.51cto.com/3002256/1898243
Deploying RCs, Services, and Pods in the k8s cluster
http://jerrymin.blog.51cto.com/3002256/1900260
Deploying the kubernetes-dashboard and kube-dns cluster components
http://jerrymin.blog.51cto.com/3002256/1900508
Deploying the heapster cluster monitoring component
http://jerrymin.blog.51cto.com/3002256/1904460
Deploying the reverse-proxy load-balancing component for the k8s cluster
http://jerrymin.blog.51cto.com/3002256/1904463
Mounting an NFS volume in the k8s cluster
http://jerrymin.blog.51cto.com/3002256/1906778
GlusterFS reference documentation
https://github.com/kubernetes/kubernetes/tree/master/examples/volumes/glusterfs
http://gluster.readthedocs.io/en/latest/Administrator%20Guide/
http://blog.gluster.org/2016/03/persistent-volume-and-claim-in-openshift-and-kubernetes-using-glusterfs-volume-plugin/
https://docs.openshift.org/latest/install_config/persistent_storage/persistent_storage_glusterfs.html
The example assumes that you have already set up a Glusterfs server cluster and the Glusterfs client package is installed on all Kubernetes nodes.
You first need to install and deploy the Gluster cluster environment on the nodes; see http://www.linuxidc.com/Linux/2017-02/140517.htm for reference.
1. Deploying the GlusterFS server and client environment
Installing GlusterFS on CentOS is very simple.
Install GlusterFS on all three nodes:
yum install centos-release-gluster
yum install -y glusterfs glusterfs-server glusterfs-fuse glusterfs-rdma
Configure the GlusterFS cluster:
Start glusterd:
systemctl start glusterd.service
systemctl enable glusterd.service
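A quick sanity check that the daemon is up on each node (a minimal sketch):
systemctl status glusterd.service   # should report active (running)
glusterfs --version                 # confirm the installed version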
Configuration:
[root@k8s-master glusterfs]# cd /usr/local/kubernetes/examples/volumes/glusterfs
[root@k8s-master glusterfs]# ls
glusterfs-endpoints.json glusterfs-pod.json glusterfs-service.json README.md
[root@k8s-master glusterfs]# gluster peer probe k8s-master
peer probe: success. Probe on localhost not needed
[root@k8s-master glusterfs]# gluster peer probe k8s-node1
peer probe: success.
[root@k8s-master glusterfs]# gluster peer probe k8s-node2
peer probe: success.
Check the cluster status:
[root@k8s-master glusterfs]# gluster peer status
Number of Peers: 2
Hostname: k8s-node1
Uuid: 4853baab-e8fb-41ad-9a93-bfb5f0d55692
State: Peer in Cluster (Connected)
Hostname: k8s-node2
Uuid: 2c9dea85-2305-4989-a74a-970f7eb08093
State: Peer in Cluster (Connected)
Create the data storage directories on all three nodes:
[root@k8s-master glusterfs]# mkdir -p /data/gluster/data
[root@k8s-node1 ~]# mkdir -p /data/gluster/data
[root@k8s-node2 ~]# mkdir -p /data/gluster/data
Create the volume. The default volume type, a distributed (DHT) volume, hashes each file to a single brick for storage; here we pass replica 3 to create a replicated volume instead, so every brick holds a full copy of the data (a comparison sketch follows the command output below).
[root@k8s-master glusterfs]# gluster volume create glusterfsdata replica 3 k8s-master:/data/gluster/data k8s-node1:/data/gluster/data k8s-node2:/data/gluster/data force
volume create: glusterfsdata: success: please start the volume to access data
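For comparison, omitting "replica 3" creates the default distributed (DHT) volume mentioned above, where each file is hashed to exactly one brick. A hypothetical alternative, not what this walkthrough uses:
# distribute files across the three bricks instead of replicating them
gluster volume create glusterfsdata k8s-master:/data/gluster/data k8s-node1:/data/gluster/data k8s-node2:/data/gluster/data force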
View the volume info:
[root@k8s-master glusterfs]# gluster volume info
Volume Name: glusterfsdata
Type: Replicate
Volume ID: 100d1f33-fb0d-48c3-9a93-d08c2e2dabb3
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: k8s-master:/data/gluster/data
Brick2: k8s-node1:/data/gluster/data
Brick3: k8s-node2:/data/gluster/data
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
Start the volume:
[root@k8s-master glusterfs]# gluster volume start glusterfsdata
volume start: glusterfsdata: success
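After starting, you can confirm that all bricks came up (a minimal check):
gluster volume status glusterfsdata   # each brick should show Online: Y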
Test from a client:
[root@k8s-master glusterfs]# mount -t glusterfs k8s-master:glusterfsdata /mnt
[root@k8s-master glusterfs]# df -h|grep gluster
k8s-master:glusterfsdata 422G 934M 421G 1% /mnt
[root@k8s-master mnt]# echo glusterfs > glusterfs
[root@k8s-master mnt]# cat glusterfs
glusterfs
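Before handing the volume over to Kubernetes, unmount the manual test mount (assuming nothing else is using /mnt):
umount /mnt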
2. Mounting GlusterFS in the k8s cluster
Note: change the endpoints in glusterfs-endpoints.json to the IPs of nodes in the GlusterFS cluster; the sample file lists two addresses, so fill in the IPs of two nodes.
[root@k8s-master glusterfs]# vim glusterfs-endpoints.json
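After editing, the file should look roughly like this (a sketch following the upstream example; 172.17.3.7 and 172.17.3.8 are this cluster's GlusterFS node IPs, and port 1 is only a placeholder required by the Endpoints schema):
{
  "kind": "Endpoints",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-cluster"
  },
  "subsets": [
    {
      "addresses": [{ "ip": "172.17.3.7" }],
      "ports": [{ "port": 1 }]
    },
    {
      "addresses": [{ "ip": "172.17.3.8" }],
      "ports": [{ "port": 1 }]
    }
  ]
}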
[root@k8s-master glusterfs]# kubectl create -f glusterfs-endpoints.json
endpoints "glusterfs-cluster" created
[root@k8s-master glusterfs]# kubectl get ep |grep glusterfs
glusterfs-cluster 172.17.3.7:1,172.17.3.8:1 1m
[root@k8s-master glusterfs]# kubectl create -f glusterfs-service.json
service "glusterfs-cluster" created
Note: in glusterfs-pod.json, make sure the glusterfs volume points at the volume created above, i.e. path is glusterfsdata:
"volumes": [
{
"name": "glusterfsvol",
"glusterfs": {
"endpoints": "glusterfs-cluster",
"path": "glusterfsdata",
"readOnly": true
}
}
]
[root@k8s-master glusterfs]# kubectl create -f glusterfs-pod.json
pod "glusterfs" created
[root@k8s-master glusterfs]# kubectl get pod -o wide |grep glus
glusterfs 1/1 Running 0 4m 10.1.39.8 k8s-node1
[root@k8s-node1 ~]# mount | grep gluster
172.17.3.7:glusterfsdata on /var/lib/kubelet/pods/61cd4cec-0955-11e7-a8c3-c81f66d97bc3/volumes/kubernetes.io~glusterfs/glusterfsvol type fuse.glusterfs (ro,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
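The same check can be run from inside the pod (a sketch; the pod name glusterfs comes from the output above, and it assumes the image ships a mount binary):
kubectl exec glusterfs -- mount | grep glusterfs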
As you can see, the example on GitHub is not very intuitive and targets an older version, but the procedure is clear. As in the NFS post, you can instead mount GlusterFS into an nginx site, which makes the test more tangible. That upgraded test follows.
Create a PV and a PVC
[root@k8s-master glusterfs]# cat glusterfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-default-volume
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: "glusterfs-cluster"
    path: "glusterfsdata"
    readOnly: false
[root@k8s-master glusterfs]# cat glusterfs-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 8Gi
[root@k8s-master glusterfs]# kubectl create -f glusterfs-pv.yaml
persistentvolume "gluster-default-volume" created
[root@k8s-master glusterfs]# kubectl create -f glusterfs-pvc.yaml
persistentvolumeclaim "glusterfs-claim" created
[root@k8s-master glusterfs]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
glusterfs-claim Bound gluster-default-volume 8Gi RWX 2m
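The PV side should show the bound claim as well (a quick check):
kubectl get pv   # gluster-default-volume should show STATUS Bound and CLAIM default/glusterfs-claim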
Create an nginx site that mounts glusterfs-claim:
[root@k8s-master glusterfs]# kubectl create -f glusterfs-web-rc.yaml
replicationcontroller "glusterfs-web" created
[root@k8s-master glusterfs]# kubectl create -f glusterfs-web-service.yaml
service "glusterfs-web" created
The config files are as follows:
[root@k8s-master glusterfs]# cat glusterfs-web-rc.yaml
# This pod mounts the glusterfs volume claim into /usr/share/nginx/html and
# serves a simple web page.
apiVersion: v1
kind: ReplicationController
metadata:
  name: glusterfs-web
spec:
  replicas: 2
  selector:
    role: glusterfs-web-frontend
  template:
    metadata:
      labels:
        role: glusterfs-web-frontend
    spec:
      containers:
      - name: glusterfsweb
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - name: glusterfsweb
          containerPort: 80
        volumeMounts:
        # name must match the volume name below
        - name: gluster-default-volume
          mountPath: "/usr/share/nginx/html"
      volumes:
      - name: gluster-default-volume
        persistentVolumeClaim:
          claimName: glusterfs-claim
[root@k8s-master glusterfs]# cat glusterfs-web-service.yaml
kind: Service
apiVersion: v1
metadata:
  name: glusterfs-web
spec:
  ports:
    - port: 80
  selector:
    role: glusterfs-web-frontend
Verification:
[root@k8s-master glusterfs]# kubectl get pods -o wide |grep glusterfs-web
glusterfs-web-280mz 1/1 Running 0 1m 10.1.39.12 k8s-node1
glusterfs-web-f952d 1/1 Running 0 1m 10.1.15.10 k8s-node2
[root@k8s-master glusterfs]# kubectl exec -ti glusterfs-web-280mz -- bash
root@glusterfs-web-280mz:/# df -h |grep glusterfs
172.17.3.7:glusterfsdata 422G 954M 421G 1% /usr/share/nginx/html
root@glusterfs-web-280mz:/# cd /usr/share/nginx/html/
root@glusterfs-web-280mz:/usr/share/nginx/html# ls
glusterfs
root@glusterfs-web-280mz:/usr/share/nginx/html# cat glusterfs
glusterfs
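Finally, the page can be fetched through the service to confirm end-to-end serving (a sketch; substitute the actual CLUSTER-IP reported for glusterfs-web):
kubectl get svc glusterfs-web        # note the CLUSTER-IP
curl http://<cluster-ip>/glusterfs   # should print: glusterfs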
This post originally appeared on the "jerrymin" blog; please retain the source: http://jerrymin.blog.51cto.com/3002256/1907274