How k8s resources are created: the command line & resource manifests (YAML)

Command-line resource creation: basic commands

//Create a deployment resource object (a pod controller):
[root@master ~]# kubectl run test --image=httpd --port=80 --replicas=2
//Delete the controller:
[root@master ~]# kubectl delete deployment test
//View deployment resource objects:
[root@master ~]# kubectl get deployments
//See which node each pod runs on:
[root@master ~]# kubectl get pod -o wide
//View service resource objects:
[root@master ~]# kubectl get svc
//View the details of a service:
[root@master ~]# kubectl describe svc test
//Delete a resource object:
[root@master ~]# kubectl delete service test
//View the details of a deployment:
[root@master ~]# kubectl describe deployment test-web
//View the details of a pod:
[root@master ~]# kubectl describe pod test-web-8697566669-52tq
//View ReplicaSets (the replica sets created and managed for the deployment by the controller manager):
[root@master ~]# kubectl get replicasets
//Edit a resource object (services, pods, namespaces, etc. can also be edited):
[root@master ~]# kubectl edit deployment test-web

//Export a resource definition in JSON format and redirect it to a file (use -o yaml to get YAML output instead):
[root@master ~]# kubectl get deployment test-web -o json > test2.yaml

Service scaling and expansion

Method 1: The command line:

1) Create a deployment resource object:
[root@master ~]# kubectl run test --image=httpd --port=80 --replicas=2
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/test created
//The reminder shown above is expected: this form of kubectl run is deprecated and in future versions will create a bare pod instead of a deployment; kubectl create deployment is the recommended replacement.
[root@master ~]# kubectl get deployments -o wide
NAME   READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES   SELECTOR
test   2/2     2            2           68s   test         httpd    run=test

2) Scale out (increase the number of replicas)

//Scale the deployment out to four replicas:
[root@master ~]# kubectl scale deployment test --replicas=4
deployment.extensions/test scaled

//Check that the scale-out succeeded:
[root@master ~]# kubectl get deployments -o wide
NAME   READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES   SELECTOR
test   4/4     4            4           3m40s   test         httpd    run=test
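Conceptually, the scale command only changes the desired replica count; the deployment controller then reconciles the current state toward that number. A toy shell sketch of that reconciliation loop (illustrative only, not real kubectl):

```shell
# Toy reconciliation loop: the controller creates pods until the
# current count matches the desired count set by "kubectl scale"
desired=4
current=2
while [ "$current" -lt "$desired" ]; do
  current=$((current + 1))
  echo "created a replica, now $current/$desired"
done
echo "reconciled: $current/$desired"
```

Scaling in works the same way in reverse: the controller deletes pods until the current count matches the lower desired count.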

3) Scale in (same command as scaling out, just a lower replica count)

//Scale the deployment in to three replicas:
[root@master ~]# kubectl scale deployment test --replicas=3
deployment.extensions/test scaled
[root@master ~]# kubectl get deployments -o wide
NAME   READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES   SELECTOR
test   3/3     3            3           7m14s   test         httpd    run=test

Method 2: edit the deployment in place with kubectl edit

//Change the replica count of this deployment to four (edit the replicas field, then save and quit as in vim):
[root@master ~]# kubectl edit deployment test

//Check the replica count again; the scale-out succeeded:
[root@master ~]# kubectl get deployments -o wide
NAME   READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES   SELECTOR
test   4/4     4            4           11m   test         httpd    run=test

Updating and rolling back a service

1) Set up a private registry and upload custom images

The process of setting up a private registry is omitted here; see the earlier blog post.

//Tag the images:
[root@master ~]# docker tag nginx:latest 172.16.1.30:5000/nginx:v1.0
[root@master ~]# docker tag nginx:latest 172.16.1.30:5000/nginx:v2.0
[root@master ~]# docker tag nginx:latest 172.16.1.30:5000/nginx:v3.0
//Push the images:
[root@master ~]# docker push 172.16.1.30:5000/nginx:v1.0 
[root@master ~]# docker push 172.16.1.30:5000/nginx:v2.0 
[root@master ~]# docker push 172.16.1.30:5000/nginx:v3.0 

2) Create a deployment:
[root@master ~]# kubectl run mynginx --image=172.16.1.30:5000/nginx:v1.0 --replicas=4

//View the image version information:
[root@master ~]# kubectl get deployments -o wide
NAME      READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES                        SELECTOR
mynginx   4/4     4            4           4m51s   mynginx      172.16.1.30:5000/nginx:v1.0   run=mynginx

## If a pod is not working properly, troubleshooting ideas:

1. Use the describe command to view the details:
[root@master ~]# kubectl describe pod bdqn-web-7ff466c8f5-p6wcw
2. Check the kubelet log for clues:
[root@master ~]# cat /var/log/messages | grep kubelet
3. Check the container's own log output:
[root@master ~]# kubectl logs bdqn-web-7ff466c8f5-p6wcw

Updating the image version of the service

//Update the image to nginx:v2.0
[root@master ~]# kubectl set image deployment mynginx mynginx=172.16.1.30:5000/nginx:v2.0
deployment.extensions/mynginx image updated
//Check that the update succeeded:
[root@master ~]# kubectl get deployments -o wide
NAME      READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                        SELECTOR
mynginx   4/4     4            4           11m   mynginx      172.16.1.30:5000/nginx:v2.0   run=mynginx

You can see that the image has been updated successfully.
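The update above is performed as a rolling update: pods running the old image are gradually replaced with pods running the new one. How aggressively they are replaced can be tuned in the deployment spec; a hypothetical fragment (the values shown are examples, not from the original):

```yaml
# Hypothetical fragment of a deployment spec tuning the rolling update
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most 1 extra pod above the desired count during the update
      maxUnavailable: 1    # at most 1 pod may be unavailable during the update
```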

Method 2: the image can also be changed with kubectl edit

//Update the image version to nginx:v3.0 (change the image field):
[root@master ~]# kubectl edit deployment mynginx

//Save and exit (as in the vim editor), then view the image version:
[root@master ~]# kubectl get deployments -o wide
NAME      READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                        SELECTOR
mynginx   4/4     4            4           16m   mynginx      172.16.1.30:5000/nginx:v3.0   run=mynginx

Rolling back the image

//Perform a rollback:
[root@master ~]# kubectl rollout undo deployment mynginx
deployment.extensions/mynginx rolled back
[root@master ~]# kubectl get deployments -o wide  #The rollback succeeded
NAME      READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                        SELECTOR
mynginx   4/4     4            4           18m   mynginx      172.16.1.30:5000/nginx:v2.0   run=mynginx
//Perform the rollback a second time:
[root@master ~]# kubectl rollout undo deployment mynginx
deployment.extensions/mynginx rolled back
//View the image version after the second rollback:
[root@master ~]# kubectl get deployments -o wide
NAME      READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                        SELECTOR
mynginx   4/4     4            4           20m   mynginx      172.16.1.30:5000/nginx:v3.0   run=mynginx

As with a docker swarm cluster, the default rollback in k8s only toggles between the two most recent revisions: each plain rollout undo returns to the previous revision, so running it twice brings you back to where you started.
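That toggle behaviour can be modelled in a few lines of shell (a sketch only, no real kubectl involved):

```shell
# Model of repeated "kubectl rollout undo" with no --to-revision:
# it always swaps the current revision with the previous one
current="v3.0"
previous="v2.0"

undo() {
  tmp="$current"
  current="$previous"
  previous="$tmp"
}

undo; echo "after 1st undo: $current"   # after 1st undo: v2.0
undo; echo "after 2nd undo: $current"   # after 2nd undo: v3.0
```

To jump further back, kubectl rollout history deployment mynginx lists the recorded revisions and kubectl rollout undo deployment mynginx --to-revision=N restores a specific one.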

Creating resources from a resource manifest (YAML)

To write a resource manifest, you must know and remember the following required top-level fields:

  • apiVersion: the API version of the resource
  • kind: the type of resource object to be created
  • metadata: metadata about the object; the name field is required
  • spec: describes the desired state; the containers and image fields are mandatory (containers ------> image)
  • status: the current state of the pod (generated automatically as the pod runs)
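A quick local sanity check for those required top-level fields, sketched in shell against a throwaway manifest (the file path and content here are illustrative):

```shell
# Write a minimal demo manifest, then confirm each required
# top-level field appears at the start of a line
cat > /tmp/demo-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: demo
    image: nginx
EOF

missing=0
for field in apiVersion kind metadata spec; do
  grep -q "^$field:" /tmp/demo-pod.yaml || { echo "missing: $field"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all required top-level fields present"
```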
1. List all API versions (each group/version has its own features):
[root@master ~]# kubectl api-versions

admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
apps/v1beta1
apps/v1beta2
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
autoscaling/v2beta1
autoscaling/v2beta2
batch/v1
batch/v1beta1
certificates.k8s.io/v1beta1
coordination.k8s.io/v1
coordination.k8s.io/v1beta1
events.k8s.io/v1beta1
extensions/v1beta1
networking.k8s.io/v1
networking.k8s.io/v1beta1
node.k8s.io/v1beta1
policy/v1beta1
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1
scheduling.k8s.io/v1
scheduling.k8s.io/v1beta1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1

2. A tool that helps you write YAML files: kubectl explain
Get familiar with it; it is very useful.

//For example, to see which fields a pod takes, with their versions and descriptions:
[root@master ~]# kubectl explain pod

//See how the metadata field of a deployment is written:
[root@master ~]# kubectl explain deploy.metadata  #Some resource object names can be abbreviated (deploy = deployment)


It lists the corresponding subfields, such as name, namespace, and so on.
3. Write a simple YAML file to deploy nginx
Tip: be careful with formatting (indentation) when writing YAML files.
[root@master yaml]# vim nginx.yaml  #Note: the file name should end with .yaml

kind: Deployment        #The type is Deployment
apiVersion: extensions/v1beta1   #The corresponding API version is extensions/v1beta1
metadata:
  name: nginx-deploy     #Name of the resource object
spec:
  replicas: 2                   #Number of replicas
  template:                  #Pod template
    metadata:
      labels:                  #Labels in the template, used later to associate the service
        app: web-server
    spec:
      containers:
      - name: nginx        #Container name (custom)
        image: nginx     #Image to use
//Run the yaml file (there are two ways):
[root@master yaml]# kubectl apply -f nginx.yaml    #This method is recommended
deployment.extensions/nginx-deploy created

Or:
[root@master yaml]# kubectl create -f nginx.yaml

//See if the pod ran successfully:
[root@master yaml]# kubectl  get pod 
NAME                            READY   STATUS    RESTARTS   AGE
nginx-deploy-56558c8dc7-pjdkk   1/1     Running   0          117s
nginx-deploy-56558c8dc7-rxbpb   1/1     Running   0          117s
//If you need to delete a resource object through a yaml file:
[root@master yaml]# kubectl  delete -f  nginx.yaml 
deployment.extensions "nginx-deploy" deleted

This method is common and saves deleting the pods one by one by hand.
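The difference between the two ways of running a manifest can be modelled in shell (a sketch, not real kubectl): create fails if the object already exists, while apply creates it the first time and updates it afterwards, which is why apply is recommended.

```shell
# Toy model of "kubectl create -f" vs "kubectl apply -f"
store=""   # pretend cluster state

create() {
  if [ "$store" = "$1" ]; then
    echo "Error: \"$1\" already exists"
    return 1
  fi
  store="$1"
  echo "$1 created"
}

apply() {
  if [ "$store" = "$1" ]; then
    echo "$1 configured"     # object exists -> update in place
  else
    store="$1"
    echo "$1 created"
  fi
}

apply nginx-deploy            # nginx-deploy created
apply nginx-deploy            # nginx-deploy configured
create nginx-deploy || true   # Error: "nginx-deploy" already exists
```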

4. Create a service resource object to associate with the deployment above
The primary purpose of a service is to provide a unified access interface for a set of pods. The k8s cluster maintains the mapping between a service and its endpoints.
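How the service picks its pods can be sketched in shell: a pod is selected when its labels contain every key=value pair in the service's selector (the labels and selector below are illustrative):

```shell
# Sketch of label-selector matching, not real kubectl
pod_labels="app=web-server run=test"
selector="app=web-server"

match=yes
for kv in $selector; do
  case " $pod_labels " in
    *" $kv "*) ;;        # this key=value is present on the pod
    *) match=no ;;       # a selector entry is missing -> pod not selected
  esac
done
echo "pod selected: $match"   # pod selected: yes
```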

//Write a yaml file for the service:
[root@master yaml]# vim nginx-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: NodePort  #Define the service type as NodePort
  selector:          #Label selector, used to associate the deployment
    app: web-server     #This label must match the deployment's label, or the service cannot be associated with it
  ports:         #Define the ports
  - protocol: TCP     #The protocol is TCP
    port: 8080            #Port on the cluster IP
    targetPort: 80    #Port inside the container
    nodePort: 30000    #Port exposed to the outside world

//Execute yaml file:
[root@master yaml]# kubectl apply -f nginx-svc.yaml
service/nginx-svc created

//View the service information:
[root@master yaml]# kubectl get svc

Explanation:
A service defaults to the ClusterIP type. A cluster IP is reachable only from inside the cluster; any node in the cluster can access the service through this IP address.
The NodePort type additionally exposes a port on every node, so the service can be reached from outside the cluster via any node's IP address plus the mapped port.
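One detail worth checking when choosing a nodePort: by default the apiserver only accepts NodePorts in the range 30000-32767 (configurable with its --service-node-port-range flag). A quick shell check:

```shell
# Verify a chosen nodePort sits inside the default allowed range
nodePort=30000
if [ "$nodePort" -ge 30000 ] && [ "$nodePort" -le 32767 ]; then
  echo "nodePort $nodePort is inside the default range"
  in_range=yes
else
  echo "nodePort $nodePort is outside the default range"
  in_range=no
fi
```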

//Test access to the nginx page (for example with curl against a node's IP on port 30000):

Making the master node schedulable

We know from the k8s architecture that the master node does not run workloads by default. What should we do if the master is needed to run pods as well?

1) Remove the master's taint so pods can be scheduled onto it:
[root@master yaml]# kubectl taint node master node-role.kubernetes.io/master-
node/master untainted
2) Modify the yaml file above, changing the replica count to 3:
[root@master yaml]# vim nginx.yaml 
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nginx-deploy
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: web-server
    spec:
      containers:
      - name: nginx
        image: nginx
3) Re-apply the yaml file:
[root@master yaml]# kubectl apply -f nginx.yaml
deployment.extensions/nginx-deploy configured
4) Verify that a pod is now assigned to master (by default none would be):
[root@master yaml]# kubectl get pod -o wide


You can see that the pods are distributed evenly across the cluster's nodes, including master.

Restore the master node to non-schedulable

[root@master yaml]# kubectl taint node master node-role.kubernetes.io/master="":NoSchedule
node/master tainted
//Re-run the yaml file and check whether any pod is still assigned to master:
[root@master yaml]# kubectl delete -f nginx.yaml
deployment.extensions "nginx-deploy" deleted
[root@master yaml]# kubectl apply -f  nginx.yaml 
deployment.extensions/nginx-deploy created

Note: the pods above are newly created (the deployment was deleted and re-applied). As in docker swarm, a NoSchedule taint does not evict a pod that is already running on master; it only prevents new pods from being scheduled there, so the new pods all land on other nodes.

Specifying which node a pod runs on

We know that pods are assigned to nodes by the kube-scheduler component, but what if you want a pod to run on a particular node?
As in docker swarm, we can do this by labeling nodes.

1) Define a label:
[root@master yaml]# kubectl label nodes node01 test=123  #The label is custom
node/node01 labeled
//To delete a label, append "-" to its key (here removing an old disktype label):
[root@master yaml]# kubectl label nodes node01 disktype-
2) Verify the node labels and display them:
[root@master yaml]# kubectl get nodes --show-labels


On node01 you can see the label we just defined.

3) Modify the yaml file:
[root@master yaml]# vim nginx.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nginx-deploy
spec:
  replicas: 6
  template:
    metadata:
      labels:
        app: web-server
    spec:
      containers:
      - name: nginx
        image: nginx
      nodeSelector:     #Add a node selector
        test: '123'        #The label defined earlier; values that look like numbers must be quoted
//Re-execute the yaml file:
[root@master yaml]# kubectl apply -f  nginx.yaml 
deployment.extensions/nginx-deploy configured
4) Check that the pods run on the specified node01:
[root@master yaml]# kubectl get pod -o wide


Note: the pods that were running on other nodes are replaced; the new pods all run on node01.

_________


Posted on Mon, 02 Dec 2019 04:15:34 -0800 by freddykhalid