Kubernetes namespace creation, version upgrade, and rollback (rollback to a specified version)

Create a private registry.

#Run a registry container
[root@master ~]# docker run -tid --name registry -p 5000:5000 --restart always registry:latest 
#Configure the following on every node that needs to use the private registry:
[root@master ~]# vim /usr/lib/systemd/system/docker.service 
ExecStart=/usr/bin/dockerd -H unix:// --insecure-registry 192.168.20.6:5000
#The option added above specifies the listening address and port of the private registry
[root@master ~]# systemctl daemon-reload 
[root@master ~]# systemctl restart docker
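
Alternatively, instead of editing the systemd unit file, the same setting can live in the Docker daemon's configuration file `/etc/docker/daemon.json` (a sketch; the address must match your registry, and docker still needs `systemctl restart docker` afterwards):

```json
{
  "insecure-registries": ["192.168.20.6:5000"]
}
```

This survives package upgrades that replace the unit file, which editing `docker.service` directly does not.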

1) On the master node, define a custom image based on the nginx image: the default page for version 1 shows Version:v1, version 2 shows Version:v2, and version 3 shows Version:v3.

[root@master test]# vim Dockerfile       #Write the Dockerfile
FROM nginx
ADD index.html /usr/share/nginx/html/
[root@master test]# echo "Version:v1" > index.html   #Create the index page for version 1
[root@master test]# docker build -t 192.168.20.6:5000/ljz:v1 .  #Build image version 1
#Build image version 2
[root@master test]# echo "Version:v2" > index.html
[root@master test]# docker build -t 192.168.20.6:5000/ljz:v2 .
#Build image version 3
[root@master test]# echo "Version:v3" > index.html
[root@master test]# docker build -t 192.168.20.6:5000/ljz:v3 .
#Push the three images to the private registry
[root@master test]# docker push 192.168.20.6:5000/ljz:v1
[root@master test]# docker push 192.168.20.6:5000/ljz:v2
[root@master test]# docker push 192.168.20.6:5000/ljz:v3

2) Create a Namespace. All subsequent operations take place in this Namespace.

[root@master test]# vim ns.yaml      #Write yaml file
apiVersion: v1
kind: Namespace
metadata:
  name: lvjianzhao
[root@master test]# kubectl apply -f ns.yaml     #Apply the yaml file
namespace/lvjianzhao created
[root@master test]# kubectl get ns lvjianzhao     #View the created namespace.
NAME         STATUS   AGE
lvjianzhao   Active   11s

Create a Deployment resource object with image version v1.

[root@master test]# vim lvjianzhao.yaml      #Write yaml file

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: lvjianzhao
  namespace: lvjianzhao
spec:
  revisionHistoryLimit: 5           #How many old revisions to keep; this field is documented under "revisionHistoryLimit <integer>" in the output of kubectl explain deploy.spec
  replicas: 2
  template:
    metadata:
      labels:
        name: lvjianzhaoa
    spec:
      containers:
      - name: lvjianzhao
        image: 192.168.20.6:5000/ljz:v1     #Image version is 1
        ports:
        - containerPort: 80
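
Note that `extensions/v1beta1` was removed for Deployments in Kubernetes v1.16; on newer clusters the same object needs `apiVersion: apps/v1` plus an explicit selector (a sketch, everything else unchanged):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lvjianzhao
  namespace: lvjianzhao
spec:
  revisionHistoryLimit: 5
  replicas: 2
  selector:                 # apps/v1 requires an explicit selector
    matchLabels:
      name: lvjianzhaoa     # must match the template labels below
  template:
    metadata:
      labels:
        name: lvjianzhaoa
    spec:
      containers:
      - name: lvjianzhao
        image: 192.168.20.6:5000/ljz:v1
        ports:
        - containerPort: 80
```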

[root@master test]# kubectl apply -f lvjianzhao.yaml  --record  #Apply the yaml file; --record stores the command in the revision history
[root@master test]# kubectl get pod  #View the pods created by the yaml file above
No resources found.
#Because the yaml file assigns the Deployment to its own namespace, its pods do not
#show up in the default namespace -- they are running, just not visible here
[root@master test]# kubectl get pod -n lvjianzhao    #Add the "-n" option to specify the namespace and the pods appear
NAME                         READY   STATUS    RESTARTS   AGE
lvjianzhao-865d4b6b6-2mlcj   1/1     Running   0          101s
lvjianzhao-865d4b6b6-7kbnb   1/1     Running   0          101s
[root@master test]# kubectl rollout history deployment -n lvjianzhao lvjianzhao 
#View the revision history of the Deployment named lvjianzhao in the lvjianzhao namespace
deployment.extensions/lvjianzhao 
REVISION  CHANGE-CAUSE
1         kubectl apply --filename=lvjianzhao.yaml --record=true
#You can see that there is currently only one revision

3) create a Service resource object and associate it with the above Deployment resource object.

[root@master test]# vim ljz-svc.yaml    #Create yaml file of service

apiVersion: v1
kind: Service
metadata:
  name: lvjianzhao-service
  namespace: lvjianzhao
spec:
  type: NodePort
  selector:
    name: lvjianzhaoa
  ports:
  - name: lvjianzhao-port
    port: 8080    #The Service's own port
    targetPort: 80     #The container port inside the pod
    nodePort: 31111          #The port exposed on every node (must fall in the default NodePort range, 30000-32767)
[root@master test]# kubectl apply -f ljz-svc.yaml   #Execute yaml file
service/lvjianzhao-service created
[root@master test]# kubectl get svc         #As before, without specifying a namespace the new Service is not listed
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   4d1h
[root@master test]# kubectl get svc -n lvjianzhao   #Use the "-n" option to list Services in the specified namespace
NAME                 TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
lvjianzhao-service   NodePort   10.104.119.94   <none>        8080:31111/TCP   111s

Note: the Service must be created in the same namespace as the Deployment, otherwise its selector cannot match the Deployment's pods.

Now a client can access port 31111 on any node in the k8s cluster to reach the service provided by the pods.

If you need to modify a pod's web page on the fly, first look up the pod's name, then exec into it directly from the master node, as follows:

[root@master httpd-web]# kubectl get pod -o wide       #View the name of the pod
NAME                              READY   STATUS    RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES
httpd-devploy1-6f987c9764-5g92w   1/1     Running   0          8m35s   10.244.1.5   node01   <none>           <none>
httpd-devploy1-6f987c9764-wvgft   1/1     Running   0     
[root@master httpd-web]# kubectl exec -it httpd-devploy1-6f987c9764-5g92w /bin/bash    #Exec into the pod by specifying its name

Now upgrade from version 1 to version 2, then to version 3, and finally roll back to the specified version 1.

[root@master test]# sed -i 's/ljz:v1/ljz:v2/' lvjianzhao.yaml   #Switch the manifest to version 2
[root@master test]# kubectl apply -f lvjianzhao.yaml    #Apply the change
deployment.extensions/lvjianzhao configured
[root@master test]# curl 127.0.0.1:31111    #Access validation
Version:v2
[root@master test]# kubectl rollout history deployment -n lvjianzhao lvjianzhao 
#View historical version again
deployment.extensions/lvjianzhao 
REVISION  CHANGE-CAUSE
1         kubectl apply --filename=lvjianzhao.yaml --record=true
2         <none>
#Now there are two revisions in the history.
#Upgrade again and verify
[root@master test]# sed -i 's/ljz:v2/ljz:v3/' lvjianzhao.yaml 
[root@master test]# kubectl apply -f lvjianzhao.yaml    
[root@master test]# curl 127.0.0.1:31111     #It's version 3 now
Version:v3
[root@master test]# kubectl rollout history deployment -n lvjianzhao lvjianzhao 
#View historical version information
deployment.extensions/lvjianzhao 
REVISION  CHANGE-CAUSE
1         kubectl apply --filename=lvjianzhao.yaml --record=true
2         <none>
3         <none>
#Now perform the rollback operation:
[root@master test]# kubectl rollout undo deployment -n lvjianzhao lvjianzhao --to-revision=1   
#Roll back to revision 1; the namespace must be specified, and "--to-revision" selects which revision to roll back to
deployment.extensions/lvjianzhao rolled back
[root@master test]# curl 127.0.0.1:31111   #Verification
Version:v1
[root@master test]# kubectl rollout history deployment -n lvjianzhao lvjianzhao 
#Check the history again: revision 1 has been replaced by revision 4.
deployment.extensions/lvjianzhao 
REVISION  CHANGE-CAUSE
2         <none>
3         <none>
4         kubectl apply --filename=lvjianzhao.yaml --record=true
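
The two in-place tag edits above (v1 to v2, v2 to v3) follow one pattern; the step can be captured in a small helper (a sketch only -- the script name bump-tag.sh is made up here, and it assumes the image tag always has the form ljz:vN, as in this article):

```shell
#!/bin/bash
# bump-tag.sh (hypothetical helper): switch the image tag in the manifest.
# Usage: ./bump-tag.sh <new-version> [manifest]
NEW_VERSION="$1"                          # e.g. v2
MANIFEST="${2:-lvjianzhao.yaml}"          # defaults to the manifest used in this article
# Replace whatever ljz:vN tag is currently in the file with the requested one
sed -i "s|ljz:v[0-9]*|ljz:${NEW_VERSION}|" "$MANIFEST"
grep 'image:' "$MANIFEST"                 # show the resulting image line
```

After the edit you would apply the manifest exactly as before: `kubectl apply -f lvjianzhao.yaml`.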

Posted on Sat, 09 Nov 2019 08:04:56 -0800 by sitorush