k8s Deployment - Multi-node Deployment and Load Balancing Setup

Introduction to multi-node deployment

  • In a production environment we must also consider high availability when building the Kubernetes platform. The platform is managed from the master: every node server is deployed and managed by the master server. In the previous articles we built a single-node (single-master) deployment; if that master server goes down, the platform we built can no longer be managed. At that point we should consider a multi-node (multi-master) deployment, which gives the platform's services high availability.

Introduction to load balancing

  • With a multi-master deployment, several masters run at the same time, but if all requests are always sent to the same master, that master slows down under load while the remaining masters sit idle and waste resources. At this point we should put a load-balancing service in front of the masters.

  • This build uses the nginx stream module for four-layer (L4) load balancing and keepalived to provide the virtual (drift) IP address.

Experimental Deployment

Experimental environment

  • lb01:192.168.80.19 (Load Balancing Server)
  • lb02:192.168.80.20 (Load Balancing Server)
  • Master01:192.168.80.12
  • Master02:192.168.80.11
  • Node01:192.168.80.13
  • Node02:192.168.80.14
  • VIP (keepalived drift address): 192.168.80.100

Multiple master deployment

  • Master01 server operation (a file-copy check follows this block)
    [root@master01 kubeconfig]# scp -r /opt/kubernetes/ root@192.168.80.11:/opt    //Copy the kubernetes directory directly to master02
    The authenticity of host '192.168.80.11 (192.168.80.11)' can't be established.
    ECDSA key fingerprint is SHA256:Ih0NpZxfLb+MOEFW8B+ZsQ5R8Il2Sx8dlNov632cFlo.
    ECDSA key fingerprint is MD5:a9:ee:e5:cc:40:c7:9e:24:5b:c1:cd:c1:7b:31:42:0f.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added '192.168.80.11' (ECDSA) to the list of known hosts.
    root@192.168.80.11's password:
    token.csv                                                                  100%   84    61.4KB/s   00:00
    kube-apiserver                                                             100%  929     1.6MB/s   00:00
    kube-scheduler                                                             100%   94   183.2KB/s   00:00
    kube-controller-manager                                                    100%  483   969.2KB/s   00:00
    kube-apiserver                                                             100%  184MB 106.1MB/s   00:01
    kubectl                                                                    100%   55MB  85.9MB/s   00:00
    kube-controller-manager                                                    100%  155MB 111.9MB/s   00:01
    kube-scheduler                                                             100%   55MB 115.8MB/s   00:00
    ca-key.pem                                                                 100% 1675     2.7MB/s   00:00
    ca.pem                                                                     100% 1359     2.6MB/s   00:00
    server-key.pem                                                             100% 1679     2.5MB/s   00:00
    server.pem                                                                 100% 1643     2.7MB/s   00:00
    [root@master01 kubeconfig]# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.80.11:/usr/lib/systemd/system/    //Copy the startup scripts of the three master components
    root@192.168.80.11's password:
    kube-apiserver.service                                                     100%  282   274.4KB/s   00:00
    kube-controller-manager.service                                            100%  317   403.5KB/s   00:00
    kube-scheduler.service                                                     100%  281   379.4KB/s   00:00
    [root@master01 kubeconfig]# scp -r /opt/etcd/ root@192.168.80.11:/opt/    //Special note: master02 must have the etcd certificates, otherwise the apiserver service cannot start; copy the etcd certificates already on master01 for master02 to use
    root@192.168.80.11's password:
    etcd                                                                       100%  509   275.7KB/s   00:00
    etcd                                                                       100%   18MB  95.3MB/s   00:00
    etcdctl                                                                    100%   15MB  75.1MB/s   00:00
    ca-key.pem                                                                 100% 1679   941.1KB/s    00:00
    ca.pem                                                                     100% 1265     1.6MB/s   00:00
    server-key.pem                                                             100% 1675     2.0MB/s   00:00
    server.pem                                                                 100% 1338     1.5MB/s   00:00
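  • Before configuring master02 itself, it is worth confirming that every copied file actually arrived. A minimal check (the bin/cfg/ssl directory layout is assumed to match the single-master build):
    [root@master02 ~]# ls /opt/kubernetes/bin /opt/kubernetes/cfg /opt/kubernetes/ssl    //Binaries, configs and certificates copied from master01
    [root@master02 ~]# ls /opt/etcd/ssl    //etcd certificates; the apiserver cannot start without them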
  • Master02 server operation (a component health check follows this block)
    [root@master02 ~]# systemctl stop firewalld.service    //Stop the firewall
    [root@master02 ~]# setenforce 0    //Disable SELinux enforcement
    [root@master02 ~]# vim /opt/kubernetes/cfg/kube-apiserver    //Edit the apiserver configuration file
    ...
    --etcd-servers=https://192.168.80.12:2379,https://192.168.80.13:2379,https://192.168.80.14:2379 \
    --bind-address=192.168.80.11 \       //Change IP Address
    --secure-port=6443 \
    --advertise-address=192.168.80.11 \   //Change IP Address
    --allow-privileged=true \
    --service-cluster-ip-range=10.0.0.0/24 \
    ...
    :wq
    [root@master02 ~]# systemctl start kube-apiserver.service    //Start the apiserver service
    [root@master02 ~]# systemctl enable kube-apiserver.service    //Enable start on boot
    Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
    [root@master02 ~]# systemctl start kube-controller-manager.service    //Start controller-manager
    [root@master02 ~]# systemctl enable kube-controller-manager.service    //Enable start on boot
    Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
    [root@master02 ~]# systemctl start kube-scheduler.service    //Start scheduler
    [root@master02 ~]# systemctl enable kube-scheduler.service    //Enable start on boot
    Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
    [root@master02 ~]# vim /etc/profile    //Edit and add the environment variable
    ...
    export PATH=$PATH:/opt/kubernetes/bin/
    :wq
    [root@master02 ~]# source /etc/profile    //Re-read the profile so the new PATH takes effect
    [root@master02 ~]# kubectl get node //View node information
    NAME            STATUS   ROLES    AGE    VERSION
    192.168.80.13   Ready    <none>   146m   v1.12.3
    192.168.80.14   Ready    <none>   144m   v1.12.3    //Multiple masters configured successfully
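  • Besides listing the nodes, the health of the control-plane components on master02 can be checked directly with a standard kubectl command:
    [root@master02 ~]# kubectl get cs    //componentstatuses: scheduler, controller-manager and the etcd members should all report Healthy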

Load Balancing Deployment

  • lb01, lb02 synchronization operation

    [root@lb01 ~]# systemctl stop firewalld.service
    [root@lb01 ~]# setenforce 0
    [root@lb01 ~]# vim /etc/yum.repos.d/nginx.repo    //Configure the nginx yum repository
    [nginx]
    name=nginx repo
    baseurl=http://nginx.org/packages/centos/7/$basearch/
    gpgcheck=0
    :wq
    [root@lb01 yum.repos.d]# yum list    //Refresh the yum cache
    Loaded plugins: fastestmirror
    base                                                                                  | 3.6 kB  00:00:00
    extras                                                                                | 2.9 kB   00:00:00
    ...
    [root@lb01 yum.repos.d]# yum install nginx -y    //Install the nginx service
    Loaded plugins: fastestmirror
    Loading mirror speeds from cached hostfile
    * base: mirrors.aliyun.com
    * extras: mirrors.163.com
    ...
    [root@lb01 yum.repos.d]# vim /etc/nginx/nginx.conf    //Edit the nginx configuration file
    ...
    events {
        worker_connections  1024;
    }
    
    stream {                     //Add the four-layer (L4) forwarding module
        log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
        access_log /var/log/nginx/k8s-access.log main;
    
        upstream k8s-apiserver {
            server 192.168.80.12:6443;          //Note: the IP addresses of the two masters
            server 192.168.80.11:6443;
        }
        server {
            listen 6443;
            proxy_pass k8s-apiserver;
        }
    }
    
    http {
        include       /etc/nginx/mime.types;
        default_type  application/octet-stream;
    ...
    :wq
    [root@lb01 yum.repos.d]# systemctl start nginx    //Start the nginx service; you can test it by browsing to the server
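    //The two commands below are an optional verification step (not part of the original walkthrough): nginx -t validates the edited configuration, including the new stream block, and ss confirms the four-layer proxy is listening
    [root@lb01 yum.repos.d]# nginx -t
    [root@lb01 yum.repos.d]# ss -lntp | grep 6443    //Should show nginx listening on port 6443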
    [root@lb01 yum.repos.d]# yum install keepalived -y    //Install the keepalived service
    Loaded plugins: fastestmirror
    Loading mirror speeds from cached hostfile
    * base: mirrors.aliyun.com
    * extras: mirrors.163.com
    ...
    [root@lb01 yum.repos.d]# mount.cifs //192.168.80.2/shares/K8S/k8s02 /mnt/    //Mount the host share directory
    Password for root@//192.168.80.2/shares/K8S/k8s02:
    [root@lb01 yum.repos.d]# cp /mnt/keepalived.conf /etc/keepalived/keepalived.conf    //Copy the prepared keepalived configuration file over the original
    cp: overwrite '/etc/keepalived/keepalived.conf'? yes
    [root@lb01 yum.repos.d]# vim /etc/keepalived/keepalived.conf    //Edit the configuration file
    ...
    vrrp_script check_nginx {
        script "/etc/nginx/check_nginx.sh"    //Note: the path to the nginx health-check script
    }
    
    vrrp_instance VI_1 {
        state MASTER
        interface ens33            //Note: network interface name
        virtual_router_id 51       //VRRP router ID; must be unique per instance
        priority 100               //Priority; the backup server is set to 90
        advert_int 1               //VRRP heartbeat advertisement interval, default 1 second
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        virtual_ipaddress {
            192.168.80.100/24      //Virtual (drift) IP address
        }
        track_script {
            check_nginx
        }
    }
    //Delete everything below this block
    :wq
  • lb02 server keepalived configuration file modification (a config comparison follows this block)

    [root@lb02 ~]# vim /etc/keepalived/keepalived.conf
    ...
    vrrp_script check_nginx {
        script "/etc/nginx/check_nginx.sh"    //Note: the path to the nginx health-check script
    }
    
    vrrp_instance VI_1 {
        state BACKUP               //Change the role to backup
        interface ens33            //Network interface name
        virtual_router_id 51       //VRRP router ID; must be unique per instance
        priority 90                //Priority; the backup server is set to 90
        advert_int 1               //VRRP heartbeat advertisement interval, default 1 second
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        virtual_ipaddress {
            192.168.80.100/24      //Virtual IP address
        }
        track_script {
            check_nginx
        }
    }
    //Delete everything below this block
    :wq
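  • Since the two keepalived configurations should differ only in the role and priority, a quick comparison can catch typos (a hypothetical one-liner; it assumes root SSH access from lb02 to lb01):
    [root@lb02 ~]# ssh root@192.168.80.19 cat /etc/keepalived/keepalived.conf | diff - /etc/keepalived/keepalived.conf    //Only the state (MASTER/BACKUP) and priority (100/90) lines should differ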
  • lb01, lb02 synchronization operation (a note on the failover chain follows this block)

    [root@lb01 yum.repos.d]# vim /etc/nginx/check_nginx.sh    //Write the script that checks the nginx status
    #!/bin/bash
    # Count running nginx processes, excluding the grep itself and this script's own PID
    count=$(ps -ef | grep nginx | egrep -cv "grep|$$")
    
    # If nginx is no longer running, stop keepalived so the virtual IP drifts to the backup
    if [ "$count" -eq 0 ];then
        systemctl stop keepalived
    fi
    :wq
    chmod +x /etc/nginx/check_nginx.sh     //Add script execution privileges
    [root@lb01 yum.repos.d]# systemctl start keepalived //Start the service
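  • How the failover chain fits together: keepalived periodically runs check_nginx.sh through the track_script block; if nginx dies on the MASTER, the script stops keepalived there, the VRRP advertisements stop, and the BACKUP claims the virtual IP. The script can be exercised by hand before relying on it (standard commands only):
    [root@lb01 yum.repos.d]# bash /etc/nginx/check_nginx.sh; echo $?    //With nginx running the script does nothing and exits 0
    [root@lb01 yum.repos.d]# systemctl is-active nginx keepalived       //Both services should report active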
  • lb01 Server Operation
    [root@lb01 ~]# ip a //View address information
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
    2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:e9:04:ba brd ff:ff:ff:ff:ff:ff
    inet 192.168.80.19/24 brd 192.168.80.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.80.100/24 scope global secondary ens33    //Virtual Address Configuration Successful
       valid_lft forever preferred_lft forever
    inet6 fe80::c3ab:d7ec:1adf:c5df/64 scope link
       valid_lft forever preferred_lft forever
  • lb02 server operation
    [root@lb02 ~]# ip a //View address information
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
    2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:7d:c7:ab brd ff:ff:ff:ff:ff:ff
    inet 192.168.80.20/24 brd 192.168.80.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::cd8b:b80c:8deb:251f/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::c3ab:d7ec:1adf:c5df/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever       //No virtual IP address lb02 belongs to standby service
  • Stop the nginx service on the lb01 server, then check the IP addresses on both servers to see whether the virtual IP has drifted to lb02
    [root@lb01 ~]# systemctl stop nginx.service
    [root@lb01 nginx]# ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
    2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:e9:04:ba brd ff:ff:ff:ff:ff:ff
    inet 192.168.80.19/24 brd 192.168.80.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::c3ab:d7ec:1adf:c5df/64 scope link
       valid_lft forever preferred_lft forever
    [root@lb02 ~]# ip a //View on lb02 server
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
    2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:7d:c7:ab brd ff:ff:ff:ff:ff:ff
    inet 192.168.80.20/24 brd 192.168.80.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.80.100/24 scope global secondary ens33      //Drift address to lb02
       valid_lft forever preferred_lft forever
    inet6 fe80::cd8b:b80c:8deb:251f/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::c3ab:d7ec:1adf:c5df/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
  • Restart nginx, keepalived service on lb01 server
    [root@lb01 nginx]# systemctl start nginx
    [root@lb01 nginx]# systemctl start keepalived.service
    [root@lb01 nginx]# ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
    2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:e9:04:ba brd ff:ff:ff:ff:ff:ff
    inet 192.168.80.19/24 brd 192.168.80.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.80.100/24 scope global secondary ens33     //The drift address was preempted back because lb01 has the higher priority (100)
       valid_lft forever preferred_lft forever
    inet6 fe80::c3ab:d7ec:1adf:c5df/64 scope link
       valid_lft forever preferred_lft forever
  • Modify the configuration files on all nodes so they point at the virtual IP (a verification check follows this block)
    [root@node01 ~]# vim /opt/kubernetes/cfg/bootstrap.kubeconfig
    ...
    server: https://192.168.80.100:6443
    ...
    :wq
    [root@node01 ~]# vim /opt/kubernetes/cfg/kubelet.kubeconfig
    ...
    server: https://192.168.80.100:6443
    ...
    :wq
    [root@node01 ~]# vim /opt/kubernetes/cfg/kube-proxy.kubeconfig
    ...
    server: https://192.168.80.100:6443
    ...
    :wq
    [root@node01 ~]# systemctl restart kubelet.service //restart service
    [root@node01 ~]# systemctl restart kube-proxy.service
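  • A quick check that all three kubeconfig files now point at the virtual IP and that the node services came back up:
    [root@node01 ~]# grep server: /opt/kubernetes/cfg/*.kubeconfig    //All three files should show https://192.168.80.100:6443
    [root@node01 ~]# systemctl is-active kubelet kube-proxy           //Both should report active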
  • View the log information on the lb01 server (a final end-to-end check follows below)
    [root@lb01 nginx]# tail /var/log/nginx/k8s-access.log
    192.168.80.13 192.168.80.12:6443 - [11/Feb/2020:15:23:52 +0800] 200 1118
    192.168.80.13 192.168.80.11:6443 - [11/Feb/2020:15:23:52 +0800] 200 1119
    192.168.80.14 192.168.80.12:6443 - [11/Feb/2020:15:26:01 +0800] 200 1119
    192.168.80.14 192.168.80.12:6443 - [11/Feb/2020:15:26:01 +0800] 200 1120
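  • As a final end-to-end check, the node list can be queried again from either master; the nodes only stay Ready if their kubelets can still reach an apiserver through the virtual IP:
    [root@master01 kubeconfig]# kubectl get node    //Both 192.168.80.13 and 192.168.80.14 should still report Ready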

    Multi-node setup and load balancing configuration complete
