Pod Health Check Details (liveness, readiness, rolling update)

Environment introduction

Host     IP address      Services
master   192.168.1.21    k8s + httpd + nginx
node01   192.168.1.22    k8s
node02   192.168.1.23    k8s

This experiment continues from the one at https://blog.51cto.com/14320361/2464655

1. Pod's liveness and readiness probes

The kubelet uses a liveness probe to determine when to restart a container. For example, when an application is running but can make no further progress, the liveness probe catches the deadlock and restarts the container, letting the application keep running despite the bug.
The kubelet uses a readiness probe to determine whether a container is ready to accept traffic. The kubelet considers a Pod ready only when all of its containers are ready. The purpose of this signal is to control which Pods serve as backends for a Service: Pods that are not ready are removed from the Service's load balancer.

Probes support three checking methods:

<1>exec command

A command is executed inside the container. If the command exits with code 0, the application is considered healthy; any other exit code means the application is considered unhealthy.

  livenessProbe:
    exec:
      command:
      - cat
      - /home/laizy/test/hostpath/healthy

<2>tcpSocket

An attempt is made to open a TCP socket connection (IP address:port) to the container. If the connection can be established, the application is considered healthy; otherwise it is considered unhealthy.

  livenessProbe:
    tcpSocket:
      port: 8080

<3>httpGet

An HTTP GET request is sent to the web application in the container. If the returned HTTP status code is between 200 and 399, the application is considered healthy; otherwise it is considered unhealthy. Each HTTP health check accesses the specified URL.

  httpGet:              #if the httpGet check returns a code between 200 and 399, the container is considered healthy
    path: /             #URI path
    port: 80            #port number
    #host: 127.0.0.1    #host address (defaults to the Pod IP)
    scheme: HTTP        #protocol, HTTP or HTTPS
    httpHeaders: []     #custom request headers (a list of name/value pairs)

Parameter Description

initialDelaySeconds: the number of seconds to wait after the container starts before the first probe runs.

periodSeconds: how often the probe runs. Default 10 seconds, minimum 1 second.

timeoutSeconds: the probe timeout. Default 1 second, minimum 1 second.

successThreshold: the minimum number of consecutive successes after a failure for the probe to be considered successful again. Default 1; minimum 1; for liveness it must be 1.
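
Putting these fields together, a minimal sketch (the values are illustrative, not from the original):

  livenessProbe:
    exec:
      command:
      - cat
      - /tmp/test
    initialDelaySeconds: 10   #wait 10 seconds after the container starts before the first probe
    periodSeconds: 5          #probe every 5 seconds
    timeoutSeconds: 1         #each probe must complete within 1 second
    successThreshold: 1       #one success marks the probe healthy again (must be 1 for liveness)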

The result of a probe is one of three values:

Success: the container passed the check.
Failure: the container failed the check.
Unknown: the check could not be performed, so no action is taken.

1. LivenessProbe

(1) Write a liveness yaml file

[root@master ~]# vim liveness.yaml
kind: Pod
apiVersion: v1
metadata:
  name: liveness
  labels:
    test: liveness
spec:
  restartPolicy: OnFailure
  containers:
  - name: liveness
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/test; sleep 60; rm -rf /tmp/test; sleep 300
    livenessProbe:              #liveness probe
      exec:                     #check whether the service works by executing a command
        command:                #the command to run
        - cat
        - /tmp/test
      initialDelaySeconds: 10    #start probing 10 seconds after the container starts
      periodSeconds: 5           #probe every 5 seconds

This manifest configures a single container for the Pod. periodSeconds specifies that the kubelet runs the liveness probe every 5 seconds, and initialDelaySeconds tells the kubelet to wait 10 seconds before the first probe. The probe executes the command cat /tmp/test inside the container. If the command succeeds it returns 0 and the kubelet considers the container alive and healthy; if it returns a non-zero value, the kubelet kills the container and restarts it.
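
The expected timeline, inferred from the args and probe settings above (approximate, not captured output):

# t=0s     the container starts and /tmp/test is created
# t=10s    first probe: cat /tmp/test succeeds
# t=60s    /tmp/test is removed
# t>=65s   probes begin to fail; after 3 consecutive failures (the default
#          failureThreshold) the kubelet kills the container, and it is
#          restarted according to restartPolicy: OnFailure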

(2) Run once

[root@master ~]# kubectl apply -f liveness.yaml 

(3) Check it out

[root@master ~]# kubectl get pod -w

Liveness probing here uses the existence of a file to decide whether the service is healthy. If the file is missing, the probe fails and the kubelet handles the Pod according to the restart policy you set for it.
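
To see the probe failures and the restart count, you can also describe the Pod (standard kubectl commands, not part of the original walkthrough):

[root@master ~]# kubectl describe pod liveness   #the Events section records each failed probe
[root@master ~]# kubectl get pod liveness        #the RESTARTS column grows after each liveness failure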

2. ReadinessProbe

A ReadinessProbe is used in a slightly different scenario. Sometimes an application is temporarily unable to accept requests, for example when the Pod is already Running but the application inside the container has not finished starting. Without a ReadinessProbe, Kubernetes would assume the Pod can handle requests and forward traffic to it before the program is ready to receive them; a ReadinessProbe is used precisely so that Kubernetes does not schedule requests to such a Pod.
ReadinessProbe and LivenessProbe support the same probing methods but handle Pods differently: a failed ReadinessProbe removes the Pod's IP:Port from the corresponding Endpoints list, while a failed LivenessProbe kills the container and acts according to the Pod's restart policy.
A ReadinessProbe checks whether the container is ready; if it is not, Kubernetes does not forward traffic to this Pod.
Like LivenessProbe, ReadinessProbe supports exec, httpGet, and tcpSocket with identical configuration; only the livenessProbe field name changes to readinessProbe.

(1) Write a readiness yaml file

[root@master ~]# vim readiness.yaml 
kind: Pod
apiVersion: v1
metadata:
  name: readiness
  labels:
    test: readiness
spec:
  restartPolicy: Never
  containers:
  - name: readiness
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/test; sleep 60; rm -rf /tmp/test; sleep 300
    readinessProbe:
      exec:
        command:
        - cat
        - /tmp/test
      initialDelaySeconds: 10
      periodSeconds: 5

(2) Run once

[root@master ~]# kubectl apply -f readiness.yaml 

(3) Check it out

[root@master ~]# kubectl get pod -w

3. Summarize liveness and readiness detection

(1) liveness and readiness are two health-check mechanisms. If neither probe is configured, Kubernetes applies the same default behavior to both: it treats the check as successful as long as the container's startup process returns zero.

(2) The two probes are configured in exactly the same way; the difference is the behavior after a failed check.

When a liveness probe fails, the kubelet acts on the container according to the restart policy, which in most cases means restarting it.

When a readiness probe fails, the container is marked unavailable and no longer receives requests forwarded by the Service.

(3) The two probes can be used independently or together. Use liveness to decide whether a restart is needed for self-healing, and readiness to decide whether the container is ready to serve traffic.
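
A minimal sketch of a container carrying both probes at once (illustrative values, not from the original):

    livenessProbe:             #failure => kill the container and apply the restart policy
      exec:
        command:
        - cat
        - /tmp/test
      initialDelaySeconds: 10
      periodSeconds: 5
    readinessProbe:            #failure => remove the Pod from the Service's Endpoints
      exec:
        command:
        - cat
        - /tmp/test
      initialDelaySeconds: 10
      periodSeconds: 5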

2. Applying the probes

1. Use when scaling.

(1) Write a readiness yaml file

[root@master ~]# vim hcscal.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: web
spec:
  replicas: 3
  template: 
    metadata:
      labels:
        run: web
    spec:
      containers:
      - name: web
        image: httpd
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            scheme: HTTP    #protocol used for the probe
            path: /healthy  #path accessed
            port: 80
          initialDelaySeconds: 10
          periodSeconds: 5

---
kind: Service
apiVersion: v1
metadata:
  name: web-svc
spec:
  type: NodePort
  selector:
    run: web
  ports:
  - protocol: TCP
    port: 90
    targetPort: 80
    nodePort: 30321

In this configuration, Pods are created from the httpd image. The periodSeconds field specifies that the kubelet performs a probe every 5 seconds, and the initialDelaySeconds field tells the kubelet to wait 10 seconds before the first one. The probe sends an HTTP GET request to /healthy on port 80 of the container; any status code greater than or equal to 200 and less than 400 indicates success, and any other code indicates failure.

(2) The httpGet probe has the following optional control fields

host: the host name to connect to; defaults to the Pod IP. You can instead set a Host header in the HTTP request headers.
scheme: the protocol used to connect to the host, HTTP by default.
path: the URI to access on the HTTP server.
httpHeaders: custom HTTP request headers; duplicate headers are allowed.
port: the port number or port name to access on the container.
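
A sketch with these optional fields filled in (the header shown is hypothetical, for illustration only):

        readinessProbe:
          httpGet:
            scheme: HTTP
            path: /healthy
            port: 80
            #host: 192.168.1.21      #defaults to the Pod IP when omitted
            httpHeaders:
            - name: X-Custom-Header  #hypothetical header name
              value: check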

(3) Run once

[root@master ~]# kubectl apply -f hcscal.yaml 

(4) Check it out

[root@master ~]# kubectl get pod -w

[root@master ~]# kubectl get pod -o wide

[root@master ~]# kubectl get service -o wide

(5) Access it

[root@master ~]# curl  10.244.1.21/healthy

Since /healthy has not been created in the container yet, this request returns 404 and the readiness probe keeps failing.

(6) Create the probe file in the container's document root

[root@master ~]# kubectl exec web-69d659f974-7s9bc touch /usr/local/apache2/htdocs/healthy

(7) Check it out

[root@master ~]# kubectl get pod -w

2. Use during a rolling update

(1) Write a readiness yaml file

[root@master ~]# vim app.v1.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 10
  template:
    metadata:
      labels:
        run: app
    spec:
      containers:
      - name: app
        image: busybox
        args:
        - /bin/sh
        - -c
        - sleep 10; touch /tmp/healthy; sleep 3000
        readinessProbe:
          exec:
            command:
            - cat
            - /tmp/healthy
          initialDelaySeconds: 10
          periodSeconds: 5

(2) Run it and record version information

[root@master ~]# kubectl apply -f app.v1.yaml --record 
Check it out
[root@master ~]# kubectl rollout history deployment app 
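
Optionally, you can also watch the rollout progress directly (an extra step, not in the original):

[root@master ~]# kubectl rollout status deployment app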

(3) Check it out

[root@master ~]# kubectl get pod -w

3. Upgrade Deployment

(1) Write a readiness yaml file

[root@master ~]# cp app.v1.yaml app.v2.yaml 
[root@master ~]# vim app.v2.yaml 

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 10
  template:
    metadata:
      labels:
        run: app
    spec:
      containers:
      - name: app
        image: busybox
        args:
        - /bin/sh
        - -c
        - sleep 3000        #modified command: /tmp/healthy is never created
        readinessProbe:
          exec:
            command:
            - cat
            - /tmp/healthy
          initialDelaySeconds: 10
          periodSeconds: 5

(2) Run it and record version information

[root@master ~]# kubectl apply -f app.v2.yaml --record 
Check it out
[root@master ~]# kubectl rollout history deployment app 

(3) Check it out

[root@master ~]# kubectl get pod -w
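
Because the v2 command never creates /tmp/healthy, the new Pods never become Ready and the rollout stalls. Two commands to confirm this (hedged extras; <pod-name> is a placeholder for one of the new Pods):

[root@master ~]# kubectl get deployment app        #READY stays below 10 while the rollout is blocked
[root@master ~]# kubectl describe pod <pod-name>   #Events show the failing "cat /tmp/healthy" probe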

(4) Upgrade the deployment again

<1>Write a readiness yaml file
[root@master ~]# cp app.v1.yaml app.v3.yaml 
[root@master ~]# vim app.v3.yaml 

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 10
  template:
    metadata:
      labels:
        run: app
    spec:
      containers:
      - name: app
        image: busybox
        args:
        - /bin/sh
        - -c
        - sleep 3000        #modified command; this version also removes the readinessProbe
<2>Run it and record version information
[root@master ~]# kubectl apply -f app.v3.yaml --record 
Check it out
[root@master ~]# kubectl rollout history deployment app 

<3>Check it out
[root@master ~]# kubectl get pod -w

4. Roll back to the v2 version

[root@master ~]# kubectl rollout undo deployment app --to-revision=2

Check it out

[root@master ~]# kubectl get pod

(1) Write a readiness yaml file

[root@master ~]# vim app.v2.yaml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app
spec:
  strategy:
    rollingUpdate:
      maxSurge: 2
      maxUnavailable: 2
  replicas: 10
  template:
    metadata:
      labels:
        run: app
    spec:
      containers:
      - name: app
        image: busybox
        args:
        - /bin/sh
        - -c
        - sleep 3000
        readinessProbe:
          exec:
            command:
            - cat
            - /tmp/healthy
          initialDelaySeconds: 10
          periodSeconds: 5

Parameter introduction

minReadySeconds:

Kubernetes waits this many seconds after a new Pod becomes ready before continuing the upgrade.
If this value is not set, Kubernetes assumes the container can serve traffic as soon as it has started.
Leaving it unset can, in extreme cases, cause the service to malfunction during the upgrade.

maxSurge:

The maximum number of extra Pods that may be created during the upgrade.
For example, maxSurge=1 with replicas=5 means Kubernetes starts one new Pod before deleting an old one, so there are at most 5+1 Pods during the entire upgrade.

maxUnavailable:

The maximum number of Pods that may be unavailable during the upgrade.
maxSurge and maxUnavailable cannot both be zero.
For example, maxUnavailable=1 means at most one Pod is out of service at any time throughout the upgrade.
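
A worked example for the manifest above, reasoning from the parameters rather than captured output. With replicas=10, maxSurge=2, and maxUnavailable=2:

# at most 10 + 2 = 12 Pods may exist at any moment during the rollout
# at least 10 - 2 = 8 Pods must remain available
# because the new Pods never pass the readiness probe (nothing creates /tmp/healthy),
# the rollout stalls at 8 old Ready Pods plus 4 new not-Ready Pods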

(2) Run it and record version information

[root@master ~]# kubectl apply -f app.v2.yaml --record 
Check it out
[root@master ~]# kubectl rollout history deployment app 

(3) Check it out

[root@master ~]# kubectl get pod -w

3. Small Experiments

1) Write a Deployment resource object with two replicas using the nginx image. Use a readiness probe to check whether the custom file /test exists, starting the probe 10 seconds after the container starts, with an interval of 10 seconds.

(1) Write a readiness yaml file
[root@master yaml]# vim nginx.yaml

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: web
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: web
    spec:
      containers:
      - name: readiness
        image: 192.168.1.21:5000/nginx:v1
        readinessProbe:
          exec:
            command:
            - cat
            - /usr/share/nginx/html/test
          initialDelaySeconds: 10
          periodSeconds: 10

(2) Run it and record version information

[root@master ~]# kubectl apply -f nginx.yaml --record 
Check it out
[root@master ~]# kubectl rollout history deployment web 

(3) Check it out

[root@master ~]# kubectl get pod -w

2) Of the two running Pods, enter one and create the file /test.

[root@master yaml]# kubectl exec -it web-864c7cf7fc-gpxq4  /bin/bash
root@web-68444bff8-xm22z:/# touch /usr/share/nginx/html/test

Check it out

[root@master yaml]# kubectl get pod -w

3) Create a Service resource object associated with the above Deployment. After it runs, review the Service details to confirm which backend Pods the Endpoints load-balance across.

(1) Write yaml file for service

[root@master yaml]# vim nginx-svc.yaml
kind: Service
apiVersion: v1
metadata:
  name: web-svc
spec:
  type: NodePort
  selector:
    run: web
  ports:
  - protocol: TCP
    port: 90
    targetPort: 80
    nodePort: 30321

(2) Execute once

[root@master yaml]# kubectl apply -f nginx-svc.yaml 
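
To confirm which Pods back the Service, inspect its Endpoints (standard commands; only Pods whose readiness probe passes are listed):

[root@master yaml]# kubectl describe svc web-svc    #the Endpoints field lists only Ready Pods
[root@master yaml]# kubectl get endpoints web-svc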

(3) Change the page in each of the two Pods

Check out the pod
[root@master yaml]# kubectl get pod -o wide
Change Page
[root@master yaml]# kubectl exec -it  web-864c7cf7fc-gpxq4  /bin/bash
root@web-864c7cf7fc-gpxq4:/# echo "123">/usr/share/nginx/html/test
root@web-864c7cf7fc-gpxq4:/# exit

[root@master yaml]# kubectl exec -it  web-864c7cf7fc-pcrs9   /bin/bash
root@web-864c7cf7fc-pcrs9:/# echo "321">/usr/share/nginx/html/test
root@web-864c7cf7fc-pcrs9:/# exit

4) After observing the status, write the /test file in the other Pod as well, then check the load balancing across the Endpoints behind the Service.

(1) Check out the service

[root@master yaml]# kubectl get service

(2) Access it

[root@master ~]# curl 192.168.1.21:30321/test

5) Re-create the Deployment resource using the httpGet probe method, then summarize and compare the two readiness probe methods.

(1) Modify the deployment yaml file

[root@master yaml]# vim nginx.yaml 
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: web
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: web
    spec:
      containers:
      - name: readiness
        image: 192.168.1.21:5000/nginx:v1
        readinessProbe:
          httpGet:
            scheme: HTTP
            path: /test       #URL path served by nginx (the file is at /usr/share/nginx/html/test)
            port: 80
          initialDelaySeconds: 10
          periodSeconds: 10

(2) Execute once

[root@master yaml]# kubectl apply -f nginx.yaml 

(3) Check out the pod

[root@master yaml]# kubectl get pod -w

maxSurge: this parameter controls how far the total number of replicas may exceed the desired count during a rolling update. It can be an absolute number or a percentage and defaults to 1, which is why three Pods appear here (2 replicas + 1 surge).

(4) Access it

[root@master yaml]# curl 192.168.1.21:30321/test

6) Summarize the similarities and differences between liveness and readiness probes, and their usage scenarios.

<1>Core differences between readiness and liveness

Readiness and liveness mean what their names say: readiness is whether the Pod can accept traffic, and liveness is whether it is alive. If a readiness probe fails, the Pod's IP is removed from the Endpoints of every Service that selects the Pod, which means none of those Services will forward requests to it. If a liveness probe fails, the container is killed directly, and if the restart policy is Always it is then restarted.

<2>What counts as a readiness/liveness probe failure?

In fact, k8s provides three probe methods:

httpGet: a returned status code in the 200-399 range means success; anything else is failure.
tcpSocket: success if the specified TCP port is open (the same kind of check as telnet).
exec: executes a command inside the container; an exit code of 0 means success.
Each of these can be defined under readiness or liveness. For example, an httpGet defined under readiness means that if a GET on the given path returns a status code outside 200-399, the Pod is removed from the Endpoints of every Service that includes it; the same check defined under liveness kills the container instead.

<3>Readiness and liveness usage scenarios

For example, if you want an HTTP service's container to be restarted whenever it has access problems, define a liveness probe with httpGet. If, on the other hand, you do not want a restart when there is a problem, but only want requests to stop reaching the Pod, configure a readiness probe.

Note that liveness itself does not restart the Pod; whether the container is restarted is controlled by the Pod's restart policy.
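
For reference, the restart policy lives at the Pod spec level (a sketch; Always is the default for Deployment Pods):

spec:
  restartPolicy: Always    #Always | OnFailure | Never; applied after a liveness failure kills the container
  containers:
  - name: app
    image: busybox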

Reference resources:
https://www.jianshu.com/p/16a375199cf2
