Summary of problems encountered when starting a container application in k8s

1. Created an nginx Deployment and checked the IP assigned to its pod. The pod IP can be pinged from the node where the pod is running, but not from the master node
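
For reference, a minimal way to reproduce this check (the pod IP below is a placeholder; depending on the kubectl version, kubectl run creates a Deployment or a bare Pod):

kubectl run nginx --image=nginx     # older kubectl creates a Deployment, newer ones a single Pod
kubectl get pods -o wide            # the wide output shows each pod's IP and the node it runs on
ping <pod-ip>                       # try once from the pod's own node, then from the master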

Analysis: my first thought was that flanneld was not running properly on the two nodes
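
A quick way to test that assumption (this assumes flanneld and docker are managed as systemd services, and that flanneld writes its environment file to the default path):

systemctl status flanneld docker    # both should be active (running) on every node
cat /run/flannel/subnet.env         # the /24 subnet flanneld leased for this node
ps -ef | grep flanneld              # fallback check if flanneld was started by hand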

1. Check the network interfaces created by flanneld

[root@k8s2-1 ~]# ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1472
        inet 10.25.1.1  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::42:c5ff:fe30:de12  prefixlen 64  scopeid 0x20<link>
        ether 02:42:c5:30:de:12  txqueuelen 0  (Ethernet)
        RX packets 58  bytes 3880 (3.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 18  bytes 1412 (1.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.191.21  netmask 255.255.255.0  broadcast 192.168.191.255
        inet6 fe80::370c:bd9e:9b5a:ed83  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:de:b2:e0  txqueuelen 1000  (Ethernet)
        RX packets 34986  bytes 15926405 (15.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 33482  bytes 6913324 (6.5 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1472
        inet 10.25.1.0  netmask 255.255.0.0  destination 10.25.1.0
        inet6 fe80::1b37:644b:a7ca:c658  prefixlen 64  scopeid 0x20<link>
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)
        RX packets 785  bytes 65940 (64.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 19  bytes 1488 (1.4 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 247  bytes 29744 (29.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 247  bytes 29744 (29.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
veth5112d6a: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1472
        inet6 fe80::54f3:44ff:fed1:43f3  prefixlen 64  scopeid 0x20<link>
        ether 56:f3:44:d1:43:f3  txqueuelen 0  (Ethernet)
        RX packets 18  bytes 1412 (1.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 41  bytes 3290 (3.2 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

The flannel0 interface is created automatically once the flanneld component is installed. It is a tun virtual interface: it receives traffic destined for pods that are not on the same host and hands that data to the flanneld process for forwarding.
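
This can be confirmed on the node by looking at the link details and at the route that steers cross-host pod traffic into flannel0 (the 10.25.0.0/16 network is the one used in this cluster; adjust to yours):

ip -d link show flannel0     # link-level details of the flannel0 tun device
ip route | grep flannel0     # e.g. 10.25.0.0/16 dev flannel0 (other hosts' pod subnets)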

The veth5112d6a interface: cni0 (docker0 in this setup) is the bridge shared by all pods on the same host. When kubelet creates a container, it creates a virtual interface vethxxx for that container and attaches it to the bridge.
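
Bridge membership can be checked directly (here the bridge is docker0, going by the ifconfig output above):

brctl show docker0              # lists the vethxxx interfaces attached to the bridge
ip link show master docker0     # same information without bridge-utils installed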

2. Check the network configuration stored in etcd; the information is normal (a further sanity check is sketched after the listing below)

[root@k8s1-1 ~]# etcdctl  ls
/k8s
/registry
[root@k8s1-1 ~]# etcdctl  ls  /k8s/network
/k8s/network/subnets
/k8s/network/config
[root@k8s1-1 ~]# etcdctl  ls  /k8s/network/subnets
/k8s/network/subnets/10.25.1.0-24
/k8s/network/subnets/10.25.15.0-24
/k8s/network/subnets/10.25.92.0-24
[root@k8s1-1 ~]# etcdctl  ls  /k8s/network/subnets/10.25.1.0-24
/k8s/network/subnets/10.25.1.0-24
[root@k8s1-1 ~]# etcdctl  get   /k8s/network/subnets/10.25.1.0-24
{"PublicIP":"192.168.191.21"}
[root@k8s1-1 ~]# etcdctl  get   /k8s/network/subnets/10.25.15.0-24
{"PublicIP":"192.168.191.20"}
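
The top-level flannel config under the same prefix is also worth a look; it should contain the /16 network from which the /24 leases above were carved (the exact JSON depends on how flannel was configured):

etcdctl get /k8s/network/config     # expect something like {"Network": "10.25.0.0/16", ...}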

3. Check the routing information on the node, which includes the xxx.xxx.1.0 / xxx.xxx.1.1 bridge entries; those addresses can be pinged from the master. But the pod's own IP still cannot be pinged, which was very surprising.
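
For reference, a quick way to dump the table; in this kind of setup it usually has one entry for the local bridge and one for the rest of the overlay (the exact subnets are specific to this cluster):

ip route     # or: route -n
# typical entries on a node here:
#   10.25.1.0/24 dev docker0    (local pods, via the bridge)
#   10.25.0.0/16 dev flannel0   (pods on other nodes, via the overlay)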

4. The last thing that came to mind is that k8s also relies on iptables. After running iptables -P INPUT ACCEPT; iptables -P FORWARD ACCEPT; iptables -F, the ping succeeded.
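
Flushing all rules works, but it is a blunt fix; before doing that, it helps to see which chain is actually dropping the traffic (the chain policy and the packet counters next to DROP/REJECT rules usually give it away):

iptables -nvL FORWARD     # a DROP policy or REJECT rule here blocks cross-node pod traffic
iptables -nvL INPUT       # check the packet counters next to any DROP rules

Note that iptables -F is not persistent: the old rules come back after a reboot or a restart of the firewall service unless their source is fixed as well.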

2. {kubelet k8s-node-2} spec.containers{ct} Warning BackOff Back-off restarting failed docker container

If the image has nothing to run in the foreground, the container exits immediately after starting, and kubelet keeps restarting it, which produces this BackOff warning.
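
To confirm that this is the cause, pull the events and the container's last output with kubectl (the pod name below is a placeholder; ct is the container name from the event above):

kubectl describe pod <pod-name>               # the Events section repeats the BackOff warning and the exit reason
kubectl logs <pod-name> -c ct --previous      # stdout/stderr of the previous, failed run

The usual fix is to make sure the image's command keeps a process in the foreground (for example, nginx with daemon off, or an explicit long-running command), so the container does not exit right after starting.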
