Install a Kubernetes cluster on CentOS and configure the Kubernetes master node

This installation configures and runs the Kubernetes master node with etcd co-located on it (that is, etcd is not separated onto its own hosts); it is intended for clusters with an odd number of master nodes.

1. Generate the Kubernetes yum repository configuration file

# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
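
To confirm that the new repository is visible, you can run a quick (optional) check:

# yum repolist | grep -i kubernetes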

2. Install kubeadm, kubelet and kubectl

# yum install kubeadm kubelet kubectl

Enable kubelet to start at boot (note: kubelet, not kubectl; kubectl is only a client binary, not a service)

# systemctl enable kubelet
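
Later steps in this article use v1.16.3 images, so if the repository offers a newer release you may prefer to pin the package versions; a sketch (the exact version string is an assumption, verify with "yum list --showduplicates kubeadm"):

# yum install -y kubeadm-1.16.3 kubelet-1.16.3 kubectl-1.16.3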

3. Initialize the master node

The initialization command 'kubeadm init' accepts either command-line parameters or a YAML configuration file.

1) Prepare the required images for initialization (optional)

During initialization, the required container images are pulled first. They are hosted on k8s.gcr.io, which is not reachable from every network, so you may want to prepare the images before running the initialization command.

List the required images

# kubeadm config images list
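
For v1.16.3 the output should look roughly like this (versions depend on your kubeadm release):

k8s.gcr.io/kube-apiserver:v1.16.3
k8s.gcr.io/kube-controller-manager:v1.16.3
k8s.gcr.io/kube-scheduler:v1.16.3
k8s.gcr.io/kube-proxy:v1.16.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.15-0
k8s.gcr.io/coredns:1.6.2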

Pull the required images directly

# kubeadm config images pull

If the repository setting has not been changed, images are pulled from k8s.gcr.io by default. Use that directly in an environment that can reach it; otherwise pull from a mirror site, for example with a script:

# vim k8s-pull-images.sh

#!/bin/bash
# Pull the k8s component images from a reachable mirror,
# retag them as k8s.gcr.io so kubeadm finds them,
# then remove the mirror tags.
REGISTRY=gcr.azk8s.cn/google-containers

images=(
  kube-apiserver:v1.16.3
  kube-controller-manager:v1.16.3
  kube-scheduler:v1.16.3
  kube-proxy:v1.16.3
  pause:3.1
  etcd:3.3.15-0
  coredns:1.6.2
)

for imageName in ${images[@]} ; do
  docker pull ${REGISTRY}/$imageName
  docker tag ${REGISTRY}/$imageName k8s.gcr.io/$imageName
  docker rmi ${REGISTRY}/$imageName
done

Note: REGISTRY can also point to the Docker Hub mirror by setting REGISTRY=mirrorgooglecontainers (note: amd64 images only).

The image list in the script comes from the command "kubeadm config images list".

# chmod +x k8s-pull-images.sh
# ./k8s-pull-images.sh

After saving and executing the script, you can check the result with "docker image list".
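
For example, to confirm that every image is now tagged under k8s.gcr.io:

# docker image list | grep k8s.gcr.io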

Pulling images for non-x86_64 architectures

Starting with Docker Registry v2.3 and Docker 1.10, Docker supports multi-architecture images through a new media type, the manifest list: one image manifest list references the per-platform manifests (CPU architecture and OS) that actually exist. When pulling an image, Docker selects the manifest that matches the host architecture. For this reason, the most reliable way to pull images for another platform is to run a virtual machine of that architecture under an emulator, for example KVM + QEMU + qemu-system-aarch64 for an arm64 VM.
If you do not use a virtual machine, for example when pulling arm64 images on an amd64 host, you need to look up the platform-specific tags in each repository. Different registries (docker.io, quay.io, etc.) follow no uniform rule; here is a summary of the tags you need:

  • k8s.gcr.io/kube-apiserver-arm64:v1.16.3
  • k8s.gcr.io/kube-controller-manager-arm64:v1.16.3
  • k8s.gcr.io/kube-scheduler-arm64:v1.16.3
  • k8s.gcr.io/kube-proxy-arm64:v1.16.3
  • k8s.gcr.io/pause:3.1
  • k8s.gcr.io/etcd-arm64:3.3.15-0
  • quay.io/coreos/flannel:v0.11.0-arm64

The Docker Hub registry distinguishes platforms of the same application simply by the repository-name prefix, as follows:

  • ARMv8 64-bit (arm64v8): https://hub.docker.com/u/arm64v8/
  • Linux x86-64 (amd64): https://hub.docker.com/u/amd64/
  • Windows x86-64 (windows-amd64): https://hub.docker.com/u/winamd64/

However, Docker Hub now recommends building multi-architecture images instead. One exception is coredns: docker.io/coredns/coredns uses tags such as coredns-arm64 (corresponding to the latest coredns release). Tags on quay.io are written differently again, for example:

  • quay.io/coreos/flannel:v0.11.0-arm64
  • quay.io/coreos/etcd:3.3.15-0-arm64
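
If the Docker CLI's experimental features are enabled (an assumption about your Docker setup), "docker manifest inspect" can show which platforms a multi-architecture image actually provides; a sketch:

# DOCKER_CLI_EXPERIMENTAL=enabled docker manifest inspect k8s.gcr.io/kube-apiserver:v1.16.3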

Exporting and importing images for an offline environment

To deploy k8s in an offline environment, download and export the images in an online environment, then import them into the offline environment.

Image export

Commands and formats

docker save -o <path for generated tar file> <image name> [<image2 name> ...]

Example (single package)

docker save -o kube-apiserver-1.16.3.tar k8s.gcr.io/kube-apiserver:v1.16.3

Example (batch packaging)

docker save -o k8s-master-1.16.3.tar \
 k8s.gcr.io/kube-apiserver:v1.16.3 \
 k8s.gcr.io/kube-controller-manager:v1.16.3 \
 k8s.gcr.io/kube-scheduler:v1.16.3 \
 k8s.gcr.io/kube-proxy:v1.16.3 \
 k8s.gcr.io/pause:3.1 \
 k8s.gcr.io/etcd:3.3.15-0 \
 k8s.gcr.io/coredns:1.6.2

Packaging images individually makes it easier to cope with deployments whose composition varies (mainly when etcd is separated from the master).
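
If you prefer one tar file per image, as in the import example below, a small loop in the style of the pull script above can package them individually; a sketch:

#!/bin/bash
# Save each image into its own tar file, e.g. kube-apiserver-v1.16.3.tar
images=(
  kube-apiserver:v1.16.3
  kube-controller-manager:v1.16.3
  kube-scheduler:v1.16.3
  kube-proxy:v1.16.3
  pause:3.1
  etcd:3.3.15-0
  coredns:1.6.2
)

for imageName in ${images[@]} ; do
  # replace ":" with "-" to build a valid file name
  docker save -o ${imageName//:/-}.tar k8s.gcr.io/$imageName
done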

Image import

Commands and formats

docker load -i <path for generated tar file>

Example: load the images required for initialization

docker load -i k8s-master-1.16.3.tar
//or
docker load -i kube-apiserver-1.16.3.tar
docker load -i kube-controller-1.16.3.tar
docker load -i kube-scheduler-1.16.3.tar
docker load -i kube-proxy-1.16.3.tar
docker load -i pause-3.1.tar
docker load -i etcd-3.3.15-0.tar
docker load -i coredns-1.6.2.tar
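
If many tar files sit in one directory, they can also be loaded in a loop:

for f in *.tar ; do docker load -i "$f" ; done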

2) Initialization with command-line parameters

# kubeadm init --kubernetes-version="1.16.3" --pod-network-cidr="10.244.0.0/16" --service-cidr="10.96.0.0/12" --ignore-preflight-errors=Swap --ignore-preflight-errors=NumCPU --image-repository="gcr.azk8s.cn/google-containers" --dry-run

Where:

  • --kubernetes-version="1.16.3" specifies the exact Kubernetes version. The default "stable-1" may resolve to a release that does not match the installed packages, in which case it must be set explicitly; here it is 1.16.3 (check the installed version with "rpm -qa | grep kubeadm").
  • --pod-network-cidr="10.244.0.0/16" sets the pod network, which should generally match the network plug-in to be deployed (e.g. flannel or calico). flannel's default is 10.244.0.0/16 and calico's default is 192.168.0.0/16; flannel's default is used here.
  • --ignore-preflight-errors= appears twice, once for Swap and once for NumCPU: they ignore the "swap is enabled" error and the "fewer than 2 CPUs" error respectively. My virtual machine has only 1 GB of memory, so I did not turn off swap, and only one vCPU is allocated. If swap is not disabled, you must also edit kubelet's configuration file /etc/sysconfig/kubelet so it tolerates enabled swap:
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
  • --service-cidr specifies the network from which service addresses are assigned, managed by Kubernetes. The default is 10.96.0.0/12.
  • --image-repository specifies the image repository to use instead of the default "k8s.gcr.io"; here the domestic mirror gcr.azk8s.cn/google-containers is used.
  • --dry-run performs a trial run to check for errors without actually initializing.

When the command runs (without --dry-run), the required image files are pulled automatically, and after successful execution the initialization result is displayed:

...
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.122.10:6443 --token kt9uzn.793zkkepgvup3jg8 \
    --discovery-token-ca-cert-hash sha256:1b00c8c653c5573ff89c134bd1e62a54f3640813b0e26b79f387fddb402b0b48
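
Note that the bootstrap token in the join command expires (after 24 hours by default). A fresh join command can be printed later on the master with:

# kubeadm token create --print-join-command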

3) Initialization with a configuration file

Generate the default configuration file

kubeadm config print init-defaults  > kubeadm-init-config.yaml

Modify the configuration file (check the following sections; note that imageRepository belongs to ClusterConfiguration, and KubeProxyConfiguration is a separate YAML document after "---")

...
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
...
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.16.3
networking:
  dnsDomain: cluster.test
  serviceSubnet: 10.96.0.0/16
  podSubnet: 10.244.0.0/16
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs

Initialize with command

kubeadm init --config kubeadm-init-config.yaml
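
As with the parameter form, --dry-run can be combined with the configuration file to check it before the real initialization:

kubeadm init --config kubeadm-init-config.yaml --dry-run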

The configuration file for an offline environment is the same as for an online one, but the image files need to be imported first.

4) Post-initialization steps

Next, following the prompts in the initialization output above, prepare the kubectl environment for the current master-node user and install the pod network.

Create the kubectl configuration

$ mkdir -p ~/.kube
$ sudo cp -i /etc/kubernetes/admin.conf ~/.kube/config
$ sudo chown $(id -u):$(id -g) ~/.kube/config
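
At this point kubectl can reach the cluster. The master node usually reports NotReady until the pod network is installed in the next step:

$ kubectl get nodes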

Install network plug-in

Syntax: "kubectl apply -f [podnetwork].yaml"

Here we use flannel (developed by CoreOS). Specific installation instructions are on its GitHub page: https://github.com/coreos/flannel.

# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

For offline installation, download the packaged flannel image and the kube-flannel.yml file in advance, then install with kubectl. The details are as follows:

Download flannel

docker pull quay.io/coreos/flannel

Package flannel and save locally

docker save quay.io/coreos/flannel -o <your_save_path>/flannel-0.11.0.tar

Load the flannel image

docker load -i <your_save_path>/flannel-0.11.0.tar

Download the kube-flannel.yml file

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Installing flannel

kubectl apply -f kube-flannel.yml

You can then check the result with the command "kubectl get pods -n kube-system".
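
The coredns pods remain Pending until the network plug-in is up, so it may take a moment; progress can be watched with:

# kubectl get pods -n kube-system -w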

Supplementary notes:

  1. kubeadm-config.yaml composition:
    • InitConfiguration: defines initialization settings such as the bootstrap token and the API server advertise address
    • ClusterConfiguration: defines settings for master components such as the apiserver, etcd, networking, scheduler and controller-manager
    • KubeletConfiguration: defines settings for the kubelet component
    • KubeProxyConfiguration: defines settings for the kube-proxy component

As you can see, the default kubeadm-config.yaml contains only the InitConfiguration and ClusterConfiguration parts. Sample files for the other two parts can be generated as follows:

# Generate KubeletConfiguration sample file 
kubeadm config print init-defaults --component-configs KubeletConfiguration
# Generate KubeProxyConfiguration sample file 
kubeadm config print init-defaults --component-configs KubeProxyConfiguration
  2. Docker version not validated by Kubernetes during kubeadm initialization
[WARNING SystemVerification]: this docker version is not on the list of validated version: 19.01.1. Latest validated version: 18.06

The versions in the warning depend on your environment. Consult the Kubernetes CHANGELOG in the git repository to determine the supported Docker versions, then use the following command

# yum list docker-ce.x86_64 --showduplicates | sort -r

Get a list of versions and select a specific version to install

sudo yum -y install docker-ce-[VERSION]
  3. kubelet not enabled to start at boot when kubeadm initializes
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'

Solution: run the enable command 'systemctl enable kubelet.service'.

  4. Swap not disabled before kubeadm initialization:
[ERROR Swap]: running with swap on is not enabled, please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with '--ignore-preflight-errors=...'

Solution: add the parameter '--ignore-preflight-errors=Swap' at the end of the kubeadm command.

  5. View the configuration of an initialized cluster
kubeadm config view
