StatefulSet: stateful cluster deployment for Kubernetes

Pod resources in Kubernetes can be divided into stateful (data-oriented containers) and stateless (service-oriented containers).

In Kubernetes, data is generally persisted by mounting storage volumes. In a multi-node cluster, the best solution is a shared storage system that supports remote access and concurrent reads and writes, such as NFS, GlusterFS, or Ceph; the volume types Kubernetes supports can be inspected with kubectl explain pod.spec.volumes. A typical stateless service simply stores its files in a fixed directory of the storage system, which is mounted when the pod is created. Some services cannot use this model, however, such as a Redis cluster or Elasticsearch, where each instance keeps its own data. Because a pod's IP and hostname change after a restart, ordinary volumes are not suitable for such stateful services.

You can create a StatefulSet resource to run such pods instead of a ReplicaSet. StatefulSet pods are a specially tailored class of application in which each instance is an irreplaceable individual with a stable name and state.

Comparing a StatefulSet with a ReplicaSet or ReplicationController

Pods managed by an RS or RC are like cattle: they are stateless and can be replaced by a brand-new pod at any time. A stateful pod needs a different approach. When a stateful pod dies, its instance must be recreated on another node, but the new instance must have the same name, network identity, and state as the one it replaces. This is how a StatefulSet manages pods.

A StatefulSet ensures that pods keep their identity and state after being rescheduled, and it lets you scale out and in easily. Like an RS, a StatefulSet specifies a desired replica count, which determines how many of these "pets" run at the same time, and it is likewise built on a pod template. Unlike an RS, however, a StatefulSet does not create identical pod replicas: each pod can have its own set of data volumes (persistent state), and pod names are deterministic (fixed) rather than randomly generated for every new pod.

Providing a stable network identity

Pods created by a StatefulSet are named with a zero-based ordinal index, which is reflected in each pod's name and hostname as well as in the fixed storage bound to it.
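Together with a headless service, this ordinal naming gives each pod a stable, predictable DNS name. As a sketch of the naming pattern (the service name headless-svc and the default namespace are the ones used later in this walkthrough):

```shell
# Each StatefulSet pod is resolvable inside the cluster as
#   <pod-name>.<governing-service>.<namespace>.svc.cluster.local
sts=nginx; svc=headless-svc; ns=default
for i in 0 1; do
  echo "${sts}-${i}.${svc}.${ns}.svc.cluster.local"
done
```

Inside the cluster these names can be resolved with nslookup or dig against the cluster DNS.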


Create a StatefulSet service backed by NFS storage

To create PVs backed by NFS, nfs-utils must first be installed on the master and all worker nodes; otherwise mounting will fail.
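On CentOS/RHEL (assumed here, matching the root@k8s-* prompts), that would be something like:

```shell
# Install the NFS client tools on the master and every node;
# on Debian/Ubuntu the package is nfs-common instead
yum install -y nfs-utils
```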

View the directories shared by the NFS server:

[root@k8s-3 ~]# showmount -e 192.168.191.50
Export list for 192.168.191.50:
/data/nfs/04 192.168.191.0/24
/data/nfs/03 192.168.191.0/24
/data/nfs/02 192.168.191.0/24
/data/nfs/01 192.168.191.0/24
/data/nfs    192.168.191.0/24
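For reference, an /etc/exports on the NFS server that would yield the export list above could look like the following (the export options rw,sync,no_root_squash are an assumption; adjust them to your environment):

```
/data/nfs     192.168.191.0/24(rw,sync,no_root_squash)
/data/nfs/01  192.168.191.0/24(rw,sync,no_root_squash)
/data/nfs/02  192.168.191.0/24(rw,sync,no_root_squash)
/data/nfs/03  192.168.191.0/24(rw,sync,no_root_squash)
/data/nfs/04  192.168.191.0/24(rw,sync,no_root_squash)
```

After editing /etc/exports, run exportfs -r on the NFS server to re-export the directories.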
Create the PVs (pv.yaml):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv02
  labels:
    app: pv02
spec:
  storageClassName: nfs
  accessModes: ["ReadWriteMany"]
  capacity:
    storage: 2Mi
  nfs:
    path: /data/nfs/02
    server: zy.nfs.com 
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv03
  labels:
    app: pv03
spec:
  storageClassName: nfs
  accessModes: ["ReadWriteMany"]
  capacity:
    storage: 2Mi
  nfs:
    path: /data/nfs/03
    server: zy.nfs.com
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv04
  labels:
    app: pv04
spec:
  storageClassName: nfs
  accessModes: ["ReadWriteMany"]
  capacity:
    storage: 2Mi
  nfs:
    path: /data/nfs/04
    server: zy.nfs.com

View the PVs:

[root@k8s-3 ~]# kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv02   2Mi        RWX            Retain           Available           nfs                     5h27m
pv03   2Mi        RWX            Retain           Available           nfs                     5h27m
pv04   2Mi        RWX            Retain           Available           nfs                     5h27m

* ACCESS MODES – how the volume can be mounted:

  • ReadWriteOnce (RWO) - the volume can be mounted read-write by a single node

  • ReadOnlyMany (ROX) - the volume can be mounted read-only by many nodes

  • ReadWriteMany (RWX) - the volume can be mounted read-write by many nodes

* RECLAIM POLICY – what happens to the PV after its PVC is deleted:

  • Retain - the PV is kept after the PVC is deleted, and its data is not lost

  • Delete - the PV (and its underlying storage) is deleted automatically after the PVC is deleted

* STORAGECLASS – a custom identifier used to match PVCs to PVs
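The reclaim policy can be set explicitly on each PV via spec.persistentVolumeReclaimPolicy; manually created PVs such as the ones above default to Retain, which is why the output shows Retain even though the field was not set. A sketch of pv02 with the field written out:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv02
spec:
  storageClassName: nfs
  accessModes: ["ReadWriteMany"]
  persistentVolumeReclaimPolicy: Retain   # Retain | Delete | Recycle (deprecated)
  capacity:
    storage: 2Mi
  nfs:
    path: /data/nfs/02
    server: zy.nfs.com
```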

Create a headless service

[root@k8s-3 statefulset]# cat svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: headless-svc
  
spec:
  clusterIP: None
  selector:
    app: sfs
  ports:
  - name: http
    port: 80
    protocol: TCP

View the headless service and note that CLUSTER-IP is None:

[root@k8s-3 ~]# kubectl get svc
NAME           TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
headless-svc   ClusterIP   None         <none>        80/TCP    11m

Create a StatefulSet

[root@k8s-3 statefulset]# cat statefulset.yaml 
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx
spec:
  serviceName: headless-svc   # must match the headless service that governs this StatefulSet
  replicas: 2
  selector:
    matchLabels:
      app: sfs
  template:
    metadata:
      name: sfs
      labels:
        app: sfs
    spec:
      containers:
      - name: sfs
        image: nginx:latest
        ports:
        - name: http
          containerPort: 80
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www   
    spec:
      accessModes: ["ReadWriteMany"]
      storageClassName: nfs
      resources:
        requests:
          storage: 2Mi

After applying the manifest, view the pods and the PV/PVC bindings.

Pods are created one at a time, in order, and named <statefulset-name>-0, -1, -2, …:

[root@k8s-3 ~]# kubectl get pod -w 
NAME      READY   STATUS    RESTARTS   AGE
nginx-0   0/1     Pending   0          0s
nginx-0   0/1     Pending   0          0s
nginx-0   0/1     Pending   0          1s
nginx-0   0/1     ContainerCreating   0          1s
nginx-0   1/1     Running             0          22s
nginx-1   0/1     Pending             0          0s
nginx-1   0/1     Pending             0          0s
nginx-1   0/1     Pending             0          0s
nginx-1   0/1     ContainerCreating   0          0s
nginx-1   1/1     Running             0          25s
#Final results
[root@k8s-3 ~]# kubectl get pod 
NAME      READY   STATUS    RESTARTS   AGE
nginx-0   1/1     Running   0          2m53s
nginx-1   1/1     Running   0          2m31s

PVC creation and PV binding:

[root@k8s-3 ~]# kubectl get pvc -w
NAME          STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
www-nginx-0   Bound     pv04     2Mi        RWX            nfs            16s
www-nginx-1   Pending                                      nfs            0s
www-nginx-1   Pending   pv02     0                         nfs            0s
www-nginx-1   Bound     pv02     2Mi        RWX            nfs            0s
[root@k8s-3 ~]# kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                 STORAGECLASS   REASON   AGE
pv02   2Mi        RWX            Retain           Bound       default/www-nginx-1   nfs                     10m
pv03   2Mi        RWX            Retain           Available                         nfs                     10m
pv04   2Mi        RWX            Retain           Bound       default/www-nginx-0   nfs                     10m


View the relationship between the headless service and its backend pods

[root@k8s-3 ~]# kubectl describe svc headless-svc
Name:              headless-svc
Namespace:         default
Labels:            <none>
Annotations:       kubectl.kubernetes.io/last-applied-configuration:
                     {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"headless-svc","namespace":"default"},"spec":{"clusterIP":"None","...
Selector:          app=sfs
Type:              ClusterIP
IP:                None
Port:              http  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.50:80,10.244.3.86:80
Session Affinity:  None
Events:            <none>

A headless service cannot be reached from outside the cluster because it has no cluster IP. To test from a browser on your own machine, you would have to add the corresponding entries to the Windows hosts file.

That is not configured here, so instead curl the pod endpoints 10.244.1.50:80 and 10.244.3.86:80 directly from a node:

[root@k8s3-1 ~]# curl 10.244.1.50:80
this is 02
[root@k8s3-1 ~]# curl 10.244.3.86:80
this is 04


The index.html files served above were created earlier in the NFS shared directories:

[root@zy nfs]# echo "this is 02" > 02/index.html
[root@zy nfs]# echo "this is 03" > 03/index.html
[root@zy nfs]# echo "this is 04" > 04/index.html


This concludes the service and storage test for this StatefulSet. To scale a StatefulSet out or in, you can watch with kubectl get pod -w: scaling out creates pod-<num+1> (where num is the highest existing ordinal), and scaling in removes pods starting from the highest ordinal. This is not demonstrated here, but it is worth verifying yourself.
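A sketch of the scaling commands (not run in this walkthrough):

```shell
# Scale out: nginx-2 is created only after nginx-1 is Running and Ready
kubectl scale statefulset nginx --replicas=3

# Scale in: pods are removed highest-ordinal first, so nginx-2 goes away
kubectl scale statefulset nginx --replicas=2

# Watch the ordered creation and deletion
kubectl get pod -w
```

Note that scaling in does not delete the PVCs: www-nginx-2 would remain Bound, so its data survives a later scale-out.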


Posted on Mon, 04 May 2020 12:17:43 -0700 by php_guy