K8s in-depth understanding

Introduction to the Ingress controller

1. Before Ingress, Pods could only be exposed externally as NodeIP:NodePort. This has a drawback: ports on a node cannot be reused. For example, if one service occupies port 80, no other service can use that port.
2. NodePort is a layer-4 proxy; it cannot parse layer-7 HTTP or distinguish traffic by domain name.
3. To solve this, we use a resource controller called an Ingress controller to provide a unified entry point. It works at layer 7.
4. Although nginx/haproxy can achieve a similar effect, a traditional deployment cannot dynamically discover newly created resources, so the configuration file would have to be edited and reloaded by hand.
5. For k8s, the main Ingress controllers are ingress-nginx and Traefik.
6. ingress-nginx == nginx + Go --> runs as a Deployment.
7. Traefik ships with a web UI.

Install and deploy Traefik

1.traefik_dp.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      tolerations:
      - operator: "Exists"
      nodeSelector:
        kubernetes.io/hostname: node1 
      containers:
      - image: traefik:v1.7.17
        name: traefik-ingress-lb
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: admin
          containerPort: 8080
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO

2.traefik_rbac.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: kube-system

3.traefik_svc.yaml
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      port: 80
      name: web
    - protocol: TCP
      port: 8080
      name: admin
  type: NodePort

4.Apply the resource configurations
kubectl create -f ./

5.View and access
kubectl -n kube-system get svc 

Creating an Ingress rule for the Traefik web UI

1.Analogy with nginx:
upstream traefik-ui {
    server traefik-ingress-service:8080;
}

server {
    location / { 
        proxy_pass http://traefik-ui;
        include proxy_params;
    }
}


2.The equivalent Ingress manifest:
apiVersion: extensions/v1beta1
kind: Ingress
metadata: 
  name: traefik-ui
  namespace: kube-system
spec:
  rules:
    - host: traefik.ui.com 
      http:
        paths:
          - path: /
            backend:
              serviceName: traefik-ingress-service 
              servicePort: 8080

3.Access test:
traefik.ui.com

Ingress experiment

1.Experiment goal
//Before using Ingress, services are reachable only via IP + port:
tomcat 8080
nginx  8090

//After using Ingress, they can be accessed directly by domain name:
traefik.nginx.com:80   -->  nginx  8090
traefik.tomcat.com:80  -->  tomcat 8080

2.Create the Pods and Services
mysql-dp.yaml  
mysql-svc.yaml 
tomcat-dp.yaml  
tomcat-svc.yaml

nginx-dp.yaml  
nginx-svc-clusterip.yaml  
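Of these files, nginx-svc-clusterip.yaml is referenced but never shown. A minimal sketch under stated assumptions: the Service name must match the serviceName used by the Ingress below (nginx-service), the targetPort matches the nginx port 8090 from the experiment goal, and the `app: nginx` selector label is a hypothetical label from nginx-dp.yaml:

```yaml
# Sketch of nginx-svc-clusterip.yaml (assumptions noted above)
apiVersion: v1
kind: Service
metadata:
  name: nginx-service     # must match serviceName in the Ingress rule
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: nginx            # assumed Pod label from nginx-dp.yaml
  ports:
    - protocol: TCP
      port: 80            # Service port the Ingress forwards to
      targetPort: 8090    # nginx container port from the experiment goal
```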

3.Write the Ingress resource manifests and apply them
cat >nginx-ingress.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata: 
  name: traefik-nginx
  namespace: default 
spec:
  rules:
    - host: traefik.nginx.com 
      http:
        paths:
          - path: /
            backend:
              serviceName: nginx-service 
              servicePort: 80
EOF

cat >tomcat-ingress.yaml<<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata: 
  name: traefik-tomcat
  namespace: default 
spec:
  rules:
    - host: traefik.tomcat.com 
      http:
        paths:
          - path: /
            backend:
              serviceName: myweb
              servicePort: 8080
EOF

kubectl apply -f nginx-ingress.yaml 
kubectl apply -f tomcat-ingress.yaml 

4.View created resources
kubectl get svc
kubectl get ingresses
kubectl describe ingresses traefik-nginx
kubectl describe ingresses traefik-tomcat

5.Access test
traefik.nginx.com
traefik.tomcat.com
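The test domains are not in DNS, so the client needs hosts entries pointing at the node where Traefik's hostPort 80 lives. A sketch, assuming 10.0.0.12 is node1's IP (hypothetical, substitute your node's address); it builds the lines in a snippet file that you would append to /etc/hosts on the client:

```shell
# NODE_IP is an assumption: the IP of the node running Traefik (hostPort 80).
NODE_IP=10.0.0.12
for h in traefik.ui.com traefik.nginx.com traefik.tomcat.com; do
  echo "$NODE_IP $h"
done > hosts.snippet
# Append hosts.snippet to /etc/hosts on the client machine, then curl the domains.
cat hosts.snippet
```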

Data persistence

Volume introduction

A Volume is a shared directory in a Pod that can be accessed by multiple containers
 A Volume's life cycle is tied to its Pod, not to any individual container
 Kubernetes supports many types of volumes, and a Pod can use any number of volumes at the same time
 Volume types include:
- emptyDir: created when the Pod is scheduled and allocated automatically by K8s; the data is wiped when the Pod is removed. Used for temporary space, etc.
- hostPath: mounts a host directory into the Pod. Used for persistent data.
- nfs: mounts the corresponding NFS share.

EmptyDir experiment

cat >emptyDir.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: busybox-empty
spec:
  containers:
  - name: busybox-pod
    image: busybox
    volumeMounts:
    - mountPath: /data/busybox/
      name: cache-volume
    command: ["/bin/sh","-c","while true;do echo $(date) >> /data/busybox/index.html;sleep 3;done"]
  volumes:
  - name: cache-volume
    emptyDir: {}
EOF
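The "shared directory" property of emptyDir is easier to see with two containers in one Pod. A sketch with illustrative names (busybox-shared, writer, reader are not from the original lab): the writer loops as above, and the reader tails the same file through the shared mount:

```yaml
# Sketch: two containers sharing one emptyDir volume (names are illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: busybox-shared
spec:
  containers:
  - name: writer
    image: busybox
    command: ["/bin/sh","-c","while true;do echo $(date) >> /data/index.html;sleep 3;done"]
    volumeMounts:
    - mountPath: /data/
      name: cache-volume
  - name: reader
    image: busybox
    # tail -F waits for the file the writer creates, showing the mount is shared
    command: ["/bin/sh","-c","tail -F /data/index.html"]
    volumeMounts:
    - mountPath: /data/
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
```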

hostPath experiment

1.Problems found:
- The host directory must already exist before mounting (with the default type)
- The Pod is not pinned to a particular Node, so the data ends up scattered across nodes

2.The hostPath type field
https://kubernetes.io/docs/concepts/storage/volumes/#hostpath

DirectoryOrCreate  Automatically create directory if it doesn't exist
Directory      Directory must exist
FileOrCreate       Create if file does not exist
File           File must exist

3.Schedule the Pod onto a chosen Node
//Method 1: specify the Node name directly
apiVersion: v1
kind: Pod
metadata:
  name: busybox-nodename
spec:
  nodeName: node2
  containers:
  - name: busybox-pod
    image: busybox
    volumeMounts:
    - mountPath: /data/pod/
      name: hostpath-volume
    command: ["/bin/sh","-c","while true;do echo $(date) >> /data/pod/index.html;sleep 3;done"]
  volumes:
  - name: hostpath-volume
    hostPath:
      path: /data/node/
      type: DirectoryOrCreate 


//Method 2: select a Node according to the Node label
kubectl label nodes node3 disktype=SSD

apiVersion: v1
kind: Pod
metadata:
  name: busybox-nodeselector  # renamed so it does not clash with the Method-1 Pod
spec:
  nodeSelector:
    disktype: SSD
  containers:
  - name: busybox-pod
    image: busybox
    volumeMounts:
    - mountPath: /data/pod/
      name: hostpath-volume
    command: ["/bin/sh","-c","while true;do echo $(date) >> /data/pod/index.html;sleep 3;done"]
  volumes:
  - name: hostpath-volume
    hostPath:
      path: /data/node/
      type: DirectoryOrCreate 


4.Experiment: write a MySQL Deployment with hostPath persistence
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-dp
  namespace: default
spec:
  selector:
    matchLabels:
      app: mysql 
  replicas: 1
  template: 
    metadata:
      name: mysql-pod
      namespace: default
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql-pod
        image: mysql:5.7 
        ports:
        - name: mysql-port
          containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "123456" 
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: mysql-volume
      volumes:
      - name: mysql-volume
        hostPath:
          path: /data/mysql
          type: DirectoryOrCreate 
      nodeSelector:
        disktype: SSD

PV and PVC

1.Install NFS on the master node
yum install nfs-utils -y
mkdir /data/nfs-volume -p
vim /etc/exports
/data/nfs-volume 10.0.0.0/24(rw,async,no_root_squash,no_all_squash)
systemctl start rpcbind
systemctl start nfs
showmount -e 127.0.0.1

2.Install NFS on all worker nodes
yum install nfs-utils.x86_64 -y
showmount -e 10.0.0.11

3.Write and create the nfs-pv resource
cat >nfs-pv.yaml <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv01
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /data/nfs-volume/mysql
    server: 10.0.0.11
EOF

kubectl create -f nfs-pv.yaml
kubectl get persistentvolume
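One easy-to-miss detail: step 1 only created /data/nfs-volume, but this PV's nfs.path points at the mysql subdirectory, which must also exist on the NFS server or the kubelet's mount will fail. A sketch, written against a scratch root so it is runnable anywhere (on the real server NFS_ROOT would be empty, i.e. paths under /):

```shell
# Ensure the PV's nfs.path (/data/nfs-volume/mysql) exists on the NFS server.
# NFS_ROOT is a scratch prefix for illustration; in the lab it would be "".
NFS_ROOT=${NFS_ROOT:-./scratch}
mkdir -p "$NFS_ROOT/data/nfs-volume/mysql"
ls -d "$NFS_ROOT/data/nfs-volume/mysql"
```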

4.Create mysql-pvc
cat >mysql-pvc.yaml <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc 
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs
EOF
kubectl create -f mysql-pvc.yaml
kubectl get pvc

5.Create mysql-deployment
cat >mysql-dp.yaml <<EOF
apiVersion: apps/v1
kind: Deployment 
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "123456"
        volumeMounts:
        - name: mysql-pvc
          mountPath: /var/lib/mysql
        - name: mysql-log
          mountPath: /var/log/mysql
      volumes:
      - name: mysql-pvc
        persistentVolumeClaim:
          claimName: mysql-pvc
      - name: mysql-log
        hostPath:
          path: /var/log/mysql
      nodeSelector:
        disktype: SSD
EOF

kubectl create -f mysql-dp.yaml
kubectl get pod -o wide 

6.Test method
1.Create nfs-pv
2.Create mysql-pvc
3.Create mysql-deployment and mount mysql-pvc
4.Log in to the MySQL Pod and create a database
5.Delete the Pod; because the Deployment specifies a replica count, a new Pod is created automatically
6.Log in to the new Pod and check whether the database created earlier is still visible
7.If it is, the data has been persisted

7.accessModes field description
ReadWriteOnce  read-write by a single node
ReadOnlyMany   read-only by many nodes
ReadWriteMany  read-write by many nodes
resources: the resource request, such as requiring at least 5Gi

8.volumeName exact matching
#capacity limits the storage space
#persistentVolumeReclaimPolicy controls what happens to a PV after its claim is released:
#Retain: the data on the PV is kept after unbinding
#Recycle: the data on the PV is scrubbed
#Delete: the PV itself is deleted after unbinding
//Note: a PV providing the storage a Pod needs must already exist when the user claims it;
//it is not provisioned automatically on demand. In early k8s versions a bound PV
//could even be deleted, which made the data unsafe.
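The volumeName note above has no manifest to go with it. A sketch of a PVC pinned to the pv01 defined earlier via spec.volumeName, bypassing class/capacity-based matching (the claim name is illustrative):

```yaml
# Sketch: bind this claim to one specific PV by name instead of letting the
# controller match on storageClassName/capacity/accessModes alone.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc-pinned   # illustrative name
spec:
  volumeName: pv01         # exact PV to bind to (pv01 from nfs-pv.yaml)
  storageClassName: nfs
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```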

configMap resources

1. Why use configMap?
To decouple configuration files from Pods

2. How are configuration files stored in a configMap?
As key-value pairs
key:value
 For a file: the file name is the key, the file content is the value

3. Configuration types supported by configMap
  Key-value pairs defined directly
  Key-value pairs created from files

4. configMap creation methods
  Command line
  Resource manifest

5. How is a configMap delivered to a Pod?
  As environment variables
  As a mounted volume

6. Create configMap from the command line
kubectl create configmap --help

kubectl create configmap nginx-config --from-literal=nginx_port=80 --from-literal=server_name=nginx.cookzhang.com

kubectl get cm
kubectl describe cm nginx-config 
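The created object can also be dumped with `kubectl get cm nginx-config -o yaml`; roughly (system metadata trimmed), the literals are stored as plain keys under data:

```yaml
# Approximate shape of: kubectl get cm nginx-config -o yaml (metadata trimmed)
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  nginx_port: "80"
  server_name: nginx.cookzhang.com
```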


7. Reference a configMap as Pod environment variables
kubectl explain pod.spec.containers.env.valueFrom.configMapKeyRef

cat >nginx-cm.yaml <<EOF
apiVersion: v1
kind: Pod
metadata: 
  name: nginx-cm
spec:
  containers:
  - name: nginx-pod
    image: nginx:1.14.0
    ports:
    - name: http 
      containerPort: 80
    env:
    - name: NGINX_PORT
      valueFrom:
        configMapKeyRef:
          name: nginx-config
          key: nginx_port
    - name: SERVER_NAME
      valueFrom:
        configMapKeyRef:
          name: nginx-config
          key: server_name 
EOF
kubectl create -f nginx-cm.yaml

8. Verify that the Pod received the variables
[root@node1 ~/confimap]# kubectl exec -it nginx-cm /bin/bash
root@nginx-cm:~# echo ${NGINX_PORT}
80
root@nginx-cm:~# echo ${SERVER_NAME}
nginx.cookzhang.com
root@nginx-cm:~# printenv |egrep "NGINX_PORT|SERVER_NAME"
NGINX_PORT=80
SERVER_NAME=nginx.cookzhang.com

Be careful:
If you change a configMap that was passed in as environment variables, the change will not take effect in the Pod,
 because variables are injected only when the Pod is created; once the Pod exists, its environment variables do not change
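When every key of a configMap should become a variable, envFrom imports the whole map in one block instead of one configMapKeyRef per key; the same caveat applies, since values are still captured at Pod creation. A sketch of the container spec fragment:

```yaml
# Sketch: import all keys of nginx-config as environment variables at once.
# Keys become variable names as-is (nginx_port, server_name).
spec:
  containers:
  - name: nginx-pod
    image: nginx:1.14.0
    envFrom:
    - configMapRef:
        name: nginx-config
```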


9. Create a configMap from a file
 Create the configuration file:
cat >www.conf <<EOF
server {
        listen       80;
        server_name  www.cookzy.com;
        location / {
            root   /usr/share/nginx/html/www;
            index  index.html index.htm;
        }
    }
EOF

To create a configMap resource:
kubectl create configmap nginx-www --from-file=www.conf=./www.conf 

View cm resources
kubectl get cm
kubectl describe cm nginx-www

Write a Pod that consumes the configMap via a volume mount
cat >nginx-cm-volume.yaml <<EOF
apiVersion: v1
kind: Pod
metadata: 
  name: nginx-cm
spec:
  containers:
  - name: nginx-pod
    image: nginx:1.14.0
    ports:
    - name: http 
      containerPort: 80

    volumeMounts:
    - name: nginx-www
      mountPath: /etc/nginx/conf.d/

  volumes:
  - name: nginx-www
    configMap:
     name: nginx-www
     items: 
     - key: www.conf
       path: www.conf
EOF

Test:
1. Enter the container to view the file
kubectl exec -it nginx-cm /bin/bash
cat /etc/nginx/conf.d/www.conf 
2. Modify configMap dynamically
kubectl edit cm nginx-www

3. Enter the container again to see if the configuration will update automatically
cat /etc/nginx/conf.d/www.conf 
nginx -T

Security authentication and RBAC

The API Server is the only entry point for access control

Operations against the k8s platform go through three security-related stages
 1. Authentication
  HTTP token authentication
  SSL authentication: kubectl uses mutual certificate authentication
 2. Authorization
  RBAC: role-based access control
 3. Admission control
  Further supplements the authorization mechanism; usually applied to create, delete, and proxy operations

k8s API accounts fall into two categories
  1. UserAccount: real human users
  2. ServiceAccount: by default, every Pod carries ServiceAccount credentials

RBAC is role-based access control
  It defines what permissions an account has
  
Take Traefik as an example:
1. Create the ServiceAccount: traefik-ingress-controller
2. Create the ClusterRole: traefik-ingress-controller
  Role: namespace-scoped permissions
  ClusterRole: cluster-scoped permissions
3. Bind the account to the role for traefik-ingress-controller
  RoleBinding
  ClusterRoleBinding
4. Reference the ServiceAccount when creating the Pod
  serviceAccountName: traefik-ingress-controller


Be careful!!!
For a k8s cluster installed with kubeadm, the certificates are valid for only one year by default

k8s dashboard

1.Official project address
https://github.com/kubernetes/dashboard

2.Download the manifest
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc5/aio/deploy/recommended.yaml

3.Modify the manifest
 39 spec:
 40   type: NodePort
 41   ports:
 42     - port: 443
 43       targetPort: 8443
 44       nodePort: 30000

4.Apply the resource configuration
kubectl create -f recommended.yaml

5.Create administrator account and apply
cat > dashboard-admin.yaml<<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF
kubectl create -f dashboard-admin.yaml

6.View resources and get token
kubectl get pod -n kubernetes-dashboard -o wide
kubectl get svc -n kubernetes-dashboard
kubectl get secret  -n kubernetes-dashboard
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')

7.Browser access
https://10.0.0.11:30000
//If Chrome refuses to open the page, switch to Firefox
//Trick: on Chrome's certificate warning page, type the bypass phrase
thisisunsafe

Directions for further study

0.namespace
1.ServiceAccount
2.Service
3.Secret
4.configMap
5.RBAC
6.Deployment

Components to restart when restarting a k8s cluster (binary or kubeadm install)

1.kube-apiserver
2.kube-proxy
3.kube-scheduler
4.kube-controller-manager
5.etcd
6.coredns
7.flannel
8.traefik
9.docker
10.kubelet

Tags: Linux Nginx MySQL Kubernetes Tomcat

Posted on Sun, 08 Mar 2020 04:36:05 -0700 by celavi