Best Practice of DevOps for Develop Microservice 11 - Cassandra

Posted by 仗剑走云端


StatefulSets make it easier to deploy stateful applications within a clustered environment. In this walkthrough we use NFS as the persistent storage backend.

1 - Prepare Shared NFS FileSystem

[root@oci-k8s-admintools pvdata1]# df -h |grep nfs
/dev/mapper/datavg-lv--nfsdata0   15G   42M   14G   1% /pvdata0
/dev/mapper/datavg-lv--nfsdata3   15G   41M   14G   1% /pvdata3
/dev/mapper/datavg-lv--nfsdata1   15G   42M   14G   1% /pvdata1
/dev/mapper/datavg-lv--nfsdata2   15G   41M   14G   1% /pvdata2
/dev/mapper/datavg-lv--nfsdata4   15G   41M   14G   1% /pvdata4
[root@oci-k8s-admintools pvdata1]# cat /etc/fstab |grep nfs
/dev/datavg/lv-nfsdata0 /pvdata0                 ext4    defaults        0 0
/dev/datavg/lv-nfsdata1 /pvdata1                 ext4    defaults        0 0
/dev/datavg/lv-nfsdata2 /pvdata2                 ext4    defaults        0 0
/dev/datavg/lv-nfsdata3 /pvdata3                 ext4    defaults        0 0
/dev/datavg/lv-nfsdata4 /pvdata4                 ext4    defaults        0 0
[root@oci-k8s-admintools pvdata1]# cat /etc/exports
/pvdata0/ 192.168.0.0/24(rw,sync,fsid=0,no_root_squash)
/pvdata1/ 192.168.0.0/24(rw,sync,fsid=1,no_root_squash)
/pvdata2/ 192.168.0.0/24(rw,sync,fsid=2,no_root_squash)
/pvdata3/ 192.168.0.0/24(rw,sync,fsid=3,no_root_squash)
/pvdata4/ 192.168.0.0/24(rw,sync,fsid=4,no_root_squash)
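
The filesystems above are LVM logical volumes formatted as ext4 and mounted under /pvdata0 through /pvdata4. After adding them to /etc/exports, the NFS server must be running and the export table reloaded. A minimal sketch, assuming the nfs-utils package is installed on this host:

# enable and start the NFS server
$ systemctl enable --now nfs-server

# reload /etc/exports and verify what is exported
$ exportfs -ra
$ showmount -e localhost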

2 - Create 3 Persistent Volumes

Sample:

$ cat pv-nfs-dbdata0-onssd.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-dbdata0
  labels:
    role: storage
    service: pv-nfs-dbdata
    type: nfs
spec:
  # A PV with no storageClassName has no class and can only be bound to PVCs that request no particular class.
  storageClassName: pv-nfsdata-onssd
  #storageClassName: nfs-pvdata-onGCEPD
  #storageClassName: ""
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  nfs:
    # NFS Server IP and Mount Point
    server: 192.168.0.117
    path: '/pvdata0/'

$ kubectl apply -f pv-nfs-dbdata0-onssd.yaml
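
The remaining two PVs, pv-nfs-dbdata1 and pv-nfs-dbdata2, are created the same way. Assuming they differ from the sample only in the PV name and the NFS export path, they can be generated from it with a small shell loop instead of copying the file by hand:

$ for i in 1 2; do
    sed -e "s/pv-nfs-dbdata0/pv-nfs-dbdata${i}/" \
        -e "s#/pvdata0/#/pvdata${i}/#" \
        pv-nfs-dbdata0-onssd.yaml | kubectl apply -f -
  done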

Check PV Status

$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-nfs-dbdata0 10Gi RWX Retain Available pv-nfsdata-onssd 36s
pv-nfs-dbdata1 10Gi RWX Retain Available pv-nfsdata-onssd 31s
pv-nfs-dbdata2 10Gi RWX Retain Available pv-nfsdata-onssd 27s
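
Before these PVs are consumed, it is worth confirming that every worker node can actually mount the NFS exports; the kubelet needs an NFS client (for example the nfs-utils package) installed. A quick manual check from a worker node, assuming /mnt is free to use as a scratch mount point:

$ sudo mount -t nfs 192.168.0.117:/pvdata0 /mnt
$ df -h /mnt
$ sudo umount /mnt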

3 - Create a Headless Service

The following Service is used for DNS lookups between Cassandra Pods and clients within the Kubernetes cluster.

$ cat cassandra.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    app: cassandra
  name: cassandra
spec:
  clusterIP: None
  ports:
  - port: 9042
  selector:
    app: cassandra

$ kubectl apply -f cassandra.yaml

$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cassandra ClusterIP None <none> 9042/TCP 3h49m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6h13m
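
Because clusterIP is None, the Service gets no virtual IP; instead, each StatefulSet Pod receives a stable DNS name of the form cassandra-<ordinal>.cassandra.default.svc.cluster.local. Once the Pods from step 4 are running, this can be verified from a throwaway Pod (a quick sketch, assuming the busybox:1.28 image can be pulled):

$ kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 \
    -- nslookup cassandra-0.cassandra.default.svc.cluster.local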

4 - Create a Cassandra Ring Using a StatefulSet

$ cat cassandra-statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
  labels:
    app: cassandra
spec:
  serviceName: cassandra
  replicas: 2
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      terminationGracePeriodSeconds: 1800
      containers:
      - name: cassandra
        image: gcr.io/google-samples/cassandra:v13
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 7000
          name: intra-node
        - containerPort: 7001
          name: tls-intra-node
        - containerPort: 7199
          name: jmx
        - containerPort: 9042
          name: cql
        resources:
          limits:
            cpu: "500m"
            memory: 1Gi
          requests:
            cpu: "500m"
            memory: 1Gi
        securityContext:
          capabilities:
            add:
              - IPC_LOCK
        lifecycle:
          preStop:
            exec:
              command:
              - /bin/sh
              - -c
              - nodetool drain
        env:
          - name: MAX_HEAP_SIZE
            value: 512M
          - name: HEAP_NEWSIZE
            value: 100M
          - name: CASSANDRA_SEEDS
            value: "cassandra-0.cassandra.default.svc.cluster.local"
          - name: CASSANDRA_CLUSTER_NAME
            value: "K8Demo"
          - name: CASSANDRA_DC
            value: "DC1-K8Demo"
          - name: CASSANDRA_RACK
            value: "Rack1-K8Demo"
          - name: POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP
        readinessProbe:
          exec:
            command:
            - /bin/bash
            - -c
            - /ready-probe.sh
          initialDelaySeconds: 15
          timeoutSeconds: 5
        # These volume mounts are persistent. They are like inline claims,
        # but not exactly because the names need to match exactly one of
        # the stateful pod volumes.
        volumeMounts:
        - name: cassandra-data
          mountPath: /cassandra_data
  # These are converted to volume claims by the controller
  # and mounted at the paths mentioned above.
  # Do not use this in production until it is backed by an SSD GCEPersistentDisk or another SSD-backed persistent disk.
  volumeClaimTemplates:
  - metadata:
      name: cassandra-data
    spec:
      accessModes: [ "ReadWriteMany" ]
      storageClassName: pv-nfsdata-onssd
      resources:
        requests:
          storage: 5Gi

$ kubectl apply -f cassandra-statefulset.yaml

Check Pod status

$ kubectl get pods
NAME READY STATUS RESTARTS AGE
cassandra-0 1/1 Running 0 3m1s
cassandra-1 1/1 Running 0 2m33s
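
The StatefulSet creates the Pods in order: cassandra-1 is started only after cassandra-0 reports Ready via /ready-probe.sh, which is why the ages above differ. The rollout can be watched live with:

$ kubectl get pods -l app=cassandra -w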

Check the PV status again: two of the PVs have changed to Bound, while one is still Available.

$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-nfs-dbdata0 10Gi RWX Retain Bound default/cassandra-data-cassandra-1 pv-nfsdata-onssd 10m
pv-nfs-dbdata1 10Gi RWX Retain Available pv-nfsdata-onssd 9m57s
pv-nfs-dbdata2 10Gi RWX Retain Bound default/cassandra-data-cassandra-0 pv-nfsdata-onssd 9m53s
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
cassandra-data-cassandra-0 Bound pv-nfs-dbdata2 10Gi RWX pv-nfsdata-onssd 3m36s
cassandra-data-cassandra-1 Bound pv-nfs-dbdata0 10Gi RWX pv-nfsdata-onssd 3m8s

$ kubectl exec -it cassandra-0 -- nodetool status
Datacenter: DC1-K8Demo
======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 10.10.1.43 99.6 KiB 32 100.0% fc7dd0f7-63c9-4c4a-b81a-ed67886ddecc Rack1-K8Demo
UN 10.10.2.136 131.59 KiB 32 100.0% c155a9cf-d67f-4881-9e83-9fb4bc0780a6 Rack1-K8Demo
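
Both nodes report UN (Up/Normal), so the ring is healthy. Beyond nodetool, CQL connectivity on port 9042 can be checked directly from one of the Pods (a quick sketch, assuming cqlsh is shipped in the container image):

$ kubectl exec -it cassandra-0 -- cqlsh -e "DESCRIBE CLUSTER"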

5 - Scale Out the Cassandra Application

$ kubectl edit statefulset cassandra

and increase replicas from 2 to 3.
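
The same change can also be made non-interactively, which is easier to script:

$ kubectl scale statefulset cassandra --replicas=3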

$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
cassandra-0 1/1 Running 0 2m49s 10.10.1.44 oci-k8s-node01 <none>
cassandra-1 1/1 Running 0 3m47s 10.10.2.137 oci-k8s-node10 <none>
cassandra-2 1/1 Running 0 92s 10.10.1.45 oci-k8s-node01 <none>

$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-nfs-dbdata0 10Gi RWX Retain Bound default/cassandra-data-cassandra-1 pv-nfsdata-onssd 18m
pv-nfs-dbdata1 10Gi RWX Retain Bound default/cassandra-data-cassandra-2 pv-nfsdata-onssd 18m
pv-nfs-dbdata2 10Gi RWX Retain Bound default/cassandra-data-cassandra-0 pv-nfsdata-onssd 18m
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
cassandra-data-cassandra-0 Bound pv-nfs-dbdata2 10Gi RWX pv-nfsdata-onssd 11m
cassandra-data-cassandra-1 Bound pv-nfs-dbdata0 10Gi RWX pv-nfsdata-onssd 11m
cassandra-data-cassandra-2 Bound pv-nfs-dbdata1 10Gi RWX pv-nfsdata-onssd 6m15s
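
Scaling up only required a spare PV: the new claim cassandra-data-cassandra-2 bound to pv-nfs-dbdata1 automatically. Scaling back down needs one extra step, because Cassandra should stream the departing node's data to the rest of the ring before the Pod is removed. A sketch of the procedure, assuming cassandra-2 (the highest ordinal) is the node to remove:

$ kubectl exec -it cassandra-2 -- nodetool decommission
$ kubectl scale statefulset cassandra --replicas=2

The PVC cassandra-data-cassandra-2 remains Bound after the scale-down and will be reused if the StatefulSet is scaled up again.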


