Docker: k8s Cluster Construction

I. Background Knowledge

  1. Brief introduction
  • Official Chinese documentation: https://www.kubernetes.org.cn/docs

  • Kubernetes is an open-source system for managing containerized applications across multiple hosts in cloud platforms. Its goal is to make deploying containerized applications simple and efficient. Kubernetes provides mechanisms for deploying, scheduling, updating, and maintaining applications.

  • One of Kubernetes's core features is that it manages containers autonomously, ensuring that containers in the cloud platform run the way the user expects (for example, if a user wants Apache to run at all times, the user does not have to care how this is achieved; Kubernetes automatically monitors the container and restarts or rebuilds it as needed, so Apache always provides service). An administrator can load a micro-service and let the scheduler find a suitable place to run it. Kubernetes also invests in tooling and usability, so users can easily deploy their own applications. A minimal sketch of this self-healing behavior follows.
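
Assuming the cluster built in section II below, the behavior can be demonstrated with a throwaway nginx Deployment (the names here are only an example):

[k8s@docker1 ~]$ kubectl create deployment nginx --image=nginx    ## declare the desired state
[k8s@docker1 ~]$ kubectl scale deployment nginx --replicas=2      ## ask for two replicas
[k8s@docker1 ~]$ kubectl delete $(kubectl get pods -l app=nginx -o name | head -n 1)   ## kill one pod
[k8s@docker1 ~]$ kubectl get pods -l app=nginx    ## a replacement pod is created automatically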

  2. Composition of Kubernetes
  • Kubernetes nodes run the services needed to host application containers and are managed by the Master. Docker runs on each node and handles all image downloads and container execution.
  • Kubernetes consists of the following core components (a command to check them on a running cluster follows the list):
  1. etcd: saves the state of the whole cluster;

  2. apiserver: provides the sole entry point for resource operations, along with authentication, authorization, access control, and API registration and discovery mechanisms;

  3. controller manager: responsible for maintaining the state of the cluster, e.g. fault detection, automatic scaling, and rolling updates;

  4. scheduler: responsible for resource scheduling, placing Pods onto the appropriate machines according to the configured scheduling policy;

  5. kubelet: responsible for maintaining the life cycle of containers, as well as for Volume (CVI) and network (CNI) management;

  6. Container runtime: responsible for image management and for actually running Pods and containers (CRI);

  7. kube-proxy: responsible for providing in-cluster service discovery and load balancing for Services;
    In addition to the core components, there are also some recommended add-ons:

  8. kube-dns: provides DNS services for the entire cluster

  9. Ingress Controller: provides external access to Services

  10. Heapster: provides resource monitoring

  11. Dashboard: provides a GUI

  12. Federation: provides clusters spanning availability zones

  13. Fluentd-Elasticsearch: provides cluster log collection, storage, and query
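
On a running cluster, most of these components appear as Pods in the kube-system namespace; a quick way to check (using the k8s user configured later in this article):

[k8s@docker1 ~]$ kubectl get pods -n kube-system -o wide   ## etcd, apiserver, controller-manager, scheduler, kube-proxy, coredns, flannel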

II. Kubernetes Cluster Construction

  • Experimental environment (Docker is installed and started on both hosts):
    docker1: 172.25.19.1 (k8s-master)
    docker2: 172.25.19.2 (k8s-node1)

  • Note: this experiment requires Internet access!

  • Clean up the previous environment (a swarm cluster was configured in an earlier experiment; readers who have not configured one can skip this step):

[root@docker2 ~]# docker swarm leave
Node left the swarm.
[root@docker3 ~]# docker swarm leave
Node left the swarm.
[root@docker1 ~]# docker swarm leave --force

[root@docker1 ~]# docker container prune
WARNING! This will remove all stopped containers.
Are you sure you want to continue? [y/N] y
[root@docker2 ~]# docker container prune
[root@docker3 ~]# docker container prune
  1. Install the required packages on both nodes
[root@docker1 mnt]# yum install -y kubeadm-1.12.2-0.x86_64.rpm kubelet-1.12.2-0.x86_64.rpm kubectl-1.12.2-0.x86_64.rpm kubernetes-cni-0.6.0-0.x86_64.rpm cri-tools-1.12.0-0.x86_64.rpm 
[root@docker2 mnt]# yum install -y kubeadm-1.12.2-0.x86_64.rpm kubelet-1.12.2-0.x86_64.rpm kubectl-1.12.2-0.x86_64.rpm kubernetes-cni-0.6.0-0.x86_64.rpm cri-tools-1.12.0-0.x86_64.rpm 
  2. Disable the system swap partition
[root@docker1 ~]# swapoff -a
[root@docker1 mnt]# vim /etc/fstab
#/dev/mapper/rhel-swap   swap                    swap    defaults        0 0
[root@docker1 ~]# systemctl enable kubelet.service

## Repeat the same steps on docker2
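
The swap entry in /etc/fstab can also be commented out non-interactively instead of using vim (a sketch; the sed pattern assumes the default rhel-swap entry shown above):

[root@docker1 ~]# swapoff -a                                          ## turn swap off immediately
[root@docker1 ~]# sed -i 's|^/dev/mapper/rhel-swap|#&|' /etc/fstab    ## comment the entry so it stays off after reboot
[root@docker1 ~]# swapon -s                                           ## no output means swap is fully disabled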


3. View the images that kubeadm will use

[root@docker1 ~]# kubeadm config images list
I0323 16:49:02.001547   11145 version.go:93] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0323 16:49:02.001631   11145 version.go:94] falling back to the local client version: v1.12.2
k8s.gcr.io/kube-apiserver:v1.12.2
k8s.gcr.io/kube-controller-manager:v1.12.2
k8s.gcr.io/kube-scheduler:v1.12.2
k8s.gcr.io/kube-proxy:v1.12.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.2
  4. Import the required images
[root@docker1 mnt]# docker load -i kube-apiserver.tar
[root@docker1 mnt]# docker load -i kube-controller-manager.tar
[root@docker1 mnt]# docker load -i kube-proxy.tar
[root@docker1 mnt]# docker load -i pause.tar
[root@docker1 mnt]# docker load -i etcd.tar
[root@docker1 mnt]# docker load -i coredns.tar
[root@docker1 mnt]# docker load -i kube-scheduler.tar
[root@docker1 mnt]# docker load -i flannel.tar
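
To confirm that everything listed by kubeadm config images list (plus flannel) is now available locally, the loaded images can be checked before initializing:

[root@docker1 mnt]# docker images | grep -E 'k8s.gcr.io|flannel'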
  5. Initialization
[root@docker1 mnt]# vim kube-flannel.yml 
 76       "Network": "10.244.0.0/16"
[root@docker1 mnt]# kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=172.25.19.1

## Configure kubectl access by running the following commands (note: these three commands must be executed as the k8s user created in the next step)
[root@docker1 mnt]#  su - k8s
[k8s@docker1 ~]$ mkdir -p $HOME/.kube
[k8s@docker1 ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[k8s@docker1 ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
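
At this point the API server can already be queried as the k8s user; the master usually shows NotReady until the flannel network plugin is deployed in a later step (a quick sanity check):

[k8s@docker1 ~]$ kubectl get nodes               ## docker1 is listed, typically NotReady until the CNI plugin is running
[k8s@docker1 ~]$ kubectl get componentstatuses   ## scheduler, controller-manager and etcd should report Healthy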

  6. Create the k8s user, grant it sudo privileges, and set up its shell environment
[root@docker1 mnt]# useradd k8s
[root@docker1 mnt]# vim /etc/sudoers
 92 k8s     ALL=(ALL)       NOPASSWD:ALL
[root@docker1 mnt]# vim /home/k8s/.bashrc 
source <(kubectl completion bash)
[root@docker1 mnt]#  su - k8s
[root@docker1 mnt]# yum install bash-* -y
[k8s@docker1 ~]$ kubectl   ## Tab completion for kubectl subcommands now works
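
The sudoers and .bashrc edits above can also be scripted (a sketch; it uses a drop-in file under /etc/sudoers.d instead of editing /etc/sudoers directly):

[root@docker1 mnt]# echo 'k8s     ALL=(ALL)       NOPASSWD:ALL' > /etc/sudoers.d/k8s
[root@docker1 mnt]# echo 'source <(kubectl completion bash)' >> /home/k8s/.bashrc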

  7. Deploy flannel on the master
[root@docker1 mnt]# cp  kube-flannel.yml  /home/k8s
[root@docker1 mnt]# su - k8s
[k8s@docker1 ~]$ kubectl apply -f kube-flannel.yml 
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
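
The flannel DaemonSet can also be checked directly, in addition to inspecting the containers with docker ps below:

[k8s@docker1 ~]$ kubectl get daemonset -n kube-system   ## kube-flannel-ds-amd64 and kube-proxy should show DESIRED equal to READY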

[k8s@docker1 ~]$ sudo docker ps   ## view the running containers
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS                NAMES
de6708eb4742        95b66263fd52           "/coredns -conf /etc…"   6 seconds ago       Up 4 seconds                             k8s_coredns_coredns-576cbf47c7-bx8cl_kube-system_537756d8-4d4a-11e9-a4a7-5254004f4b70_0
50b03df7ee97        95b66263fd52           "/coredns -conf /etc…"   6 seconds ago       Up 4 seconds                             k8s_coredns_coredns-576cbf47c7-r9628_kube-system_53b786e2-4d4a-11e9-a4a7-5254004f4b70_0
ed3556e9a5e5        k8s.gcr.io/pause:3.1   "/pause"                 17 seconds ago      Up 9 seconds                             k8s_POD_coredns-576cbf47c7-bx8cl_kube-system_537756d8-4d4a-11e9-a4a7-5254004f4b70_0
b5da0cf05f76        k8s.gcr.io/pause:3.1   "/pause"                 17 seconds ago      Up 9 seconds                             k8s_POD_coredns-576cbf47c7-r9628_kube-system_53b786e2-4d4a-11e9-a4a7-5254004f4b70_0
1a23638d94c4        f0fad859c909           "/opt/bin/flanneld -…"   37 seconds ago      Up 36 seconds                            k8s_kube-flannel_kube-flannel-ds-amd64-z2x9x_kube-system_818ae80a-4d4e-11e9-a4a7-5254004f4b70_0
474053984c3e        k8s.gcr.io/pause:3.1   "/pause"                 45 seconds ago      Up 41 seconds                            k8s_POD_kube-flannel-ds-amd64-z2x9x_kube-system_818ae80a-4d4e-11e9-a4a7-5254004f4b70_0
b0235d34e1f0        96eaf5076bfe           "/usr/local/bin/kube…"   30 minutes ago      Up 30 minutes                            k8s_kube-proxy_kube-proxy-4654x_kube-system_535e2a50-4d4a-11e9-a4a7-5254004f4b70_0
9720b0f0ff00        k8s.gcr.io/pause:3.1   "/pause"                 30 minutes ago      Up 30 minutes                            k8s_POD_kube-proxy-4654x_kube-system_535e2a50-4d4a-11e9-a4a7-5254004f4b70_0
19530f0e9612        a84dd4efbe5f           "kube-scheduler --ad…"   31 minutes ago      Up 31 minutes                            k8s_kube-scheduler_kube-scheduler-docker1_kube-system_ee7b1077c61516320f4273309e9b4690_0
65296718b6f3        k8s.gcr.io/pause:3.1   "/pause"                 31 minutes ago      Up 31 minutes                            k8s_POD_kube-scheduler-docker1_kube-system_ee7b1077c61516320f4273309e9b4690_0
dd1e68372bb4        b9a2d5b91fd6           "kube-controller-man…"   31 minutes ago      Up 31 minutes                            k8s_kube-controller-manager_kube-controller-manager-docker1_kube-system_ce6614527f7b9b296834d491867f5fee_0
aa5a7e93f43b        k8s.gcr.io/pause:3.1   "/pause"                 31 minutes ago      Up 31 minutes                            k8s_POD_kube-controller-manager-docker1_kube-system_ce6614527f7b9b296834d491867f5fee_0
8c1108cd24a7        6e3fa7b29763           "kube-apiserver --au…"   31 minutes ago      Up 31 minutes                            k8s_kube-apiserver_kube-apiserver-docker1_kube-system_6d52485b839af4dc2fad49dc4a448eaa_0
c71383630c98        k8s.gcr.io/pause:3.1   "/pause"                 31 minutes ago      Up 31 minutes                            k8s_POD_kube-apiserver-docker1_kube-system_6d52485b839af4dc2fad49dc4a448eaa_0
d4944af4e5f7        b57e69295df1           "etcd --advertise-cl…"   31 minutes ago      Up 31 minutes                            k8s_etcd_etcd-docker1_kube-system_81963084b5efb20842fffc6e9fd635c6_0
6e4093612e96        k8s.gcr.io/pause:3.1   "/pause"                 31 minutes ago      Up 31 minutes                            k8s_POD_etcd-docker1_kube-system_81963084b5efb20842fffc6e9fd635c6_0
1c19c834d83d        haproxy                "/docker-entrypoint.…"   6 hours ago         Up 6 hours          0.0.0.0:80->80/tcp   compose_haproxy_1
268d63c1cfb9        ubuntu                 "/bin/bash"              6 hours ago         Up 6 hours                               vm1
  8. Deploy the worker node
[root@docker2 mnt]# swapon -s
[root@docker2 mnt]# modprobe ip_vs_wrr
[root@docker2 mnt]# modprobe ip_vs_sh
[root@docker2 mnt]# kubeadm join 172.25.19.1:6443 --token a9ak4p.rtbve8n669je7ojj --discovery-token-ca-cert-hash sha256:f8509327ff23a0f0dc3dd5989ae82718a6287ea56aca7a695043ba0b33142fd3
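
If the join command printed by kubeadm init was not saved, or its token has expired, a new one can be generated on the master (a sketch, assuming the --print-join-command flag is available in this kubeadm release):

[root@docker1 mnt]# kubeadm token list                          ## show existing tokens and their expiry
[root@docker1 mnt]# kubeadm token create --print-join-command   ## prints a complete kubeadm join ... line for the node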

9. View the node information on the master; you can see that node1 (docker2) has joined the cluster.

[k8s@docker1 ~]$ kubectl get nodes
NAME      STATUS   ROLES    AGE   VERSION
docker1   Ready    master   34m   v1.12.2
docker2   Ready    <none>   92s   v1.12.2
  10. Add a firewall masquerade rule on the physical host
[root@foundation19 k8s]# iptables -t nat -I POSTROUTING -s 172.25.19.0/24 -j MASQUERADE
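
This rule lives only in memory; it can be verified, and optionally saved so it survives a reboot (a sketch assuming the iptables-services package on the physical host):

[root@foundation19 k8s]# iptables -t nat -nL POSTROUTING | grep MASQUERADE   ## confirm the rule is present
[root@foundation19 k8s]# iptables-save > /etc/sysconfig/iptables             ## optional: persist it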
  11. View the Pods in all namespaces
[k8s@docker1 ~]$ kubectl get pod --all-namespaces
 Delete any Pod whose status is problematic so that it is recreated, and refresh a few times until everything is Running.
[k8s@docker1 ~]$ kubectl delete pod coredns-576cbf47c7-bx8cl -n kube-system
[k8s@docker1 ~]$ kubectl get pod --all-namespaces
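
Instead of re-running the command by hand, the Pod list can be watched until everything reaches Running (either form works):

[k8s@docker1 ~]$ kubectl get pod --all-namespaces -w          ## stream status changes (Ctrl-C to stop)
[k8s@docker1 ~]$ watch -n 2 kubectl get pod --all-namespaces  ## or refresh every 2 seconds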

  • All Pods are now shown as Running.
