
No Commands at All - Dashboard Deployment for Kubernetes


The previous articles walked you through deploying a highly available Kubernetes cluster from binary packages. This article shows the appeal of managing, monitoring, and using a k8s cluster through a UI by deploying the Kubernetes web interface (Dashboard).

First, here are the IP addresses of the nodes, so that the later validation tests are easy to follow.





The two LB servers that provide load balancing have been shut down for now, because that layer can be ignored in this article.

Dashboard deployment process for Kubernetes

We deploy the web UI on the master01 node.

First, create a working directory for the Dashboard inside the k8s working directory.

[root@master01 k8s]# mkdir dashboard
[root@master01 k8s]# cd dashboard/
#Download the core yaml files that build the interface
#Five yaml files have been downloaded; the role and core parameter configuration of each file are described later, as they are used
#There are six files below; one of them, k8s-admin.yaml, was written by hand to generate the token used when logging in from the browser
[root@master01 dashboard]# ls
dashboard-configmap.yaml   dashboard-rbac.yaml    dashboard-service.yaml
dashboard-controller.yaml  dashboard-secret.yaml  k8s-admin.yaml

Let's start by outlining the role of each file, in the order they will be executed later.

1. dashboard-rbac.yaml: used for access-control settings; it configures the permissions of the roles and the role bindings (which bind roles to service accounts), including the rules configured for each role

2. dashboard-secret.yaml: provides the tokens used to access the API server (personally understood as a security authentication mechanism)

3. dashboard-configmap.yaml: configuration file, responsible for the Dashboard's settings

4. dashboard-controller.yaml: responsible for creating the controller and the service account

5. dashboard-service.yaml: responsible for exposing the service for the container

Create the resources with the kubectl create command; you can follow the notes below by comparing them against each file's contents.

1. Defines the permissions of the kubernetes-dashboard-minimal role: for example, it has separate permissions to get, update, delete, and so on
[root@master01 dashboard]# kubectl create -f dashboard-rbac.yaml
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
//However many kinds the file declares, that many "created" results appear, in the form type.apiGroup/name
2. Certificate and key creation
[root@master01 dashboard]# kubectl create -f dashboard-secret.yaml 
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-key-holder created
3. Configuration file: creates the settings for the cluster's dashboard
[root@master01 dashboard]# kubectl create -f dashboard-configmap.yaml 
configmap/kubernetes-dashboard-settings created
4. Creates the controller and the service account the container needs
[root@master01 dashboard]# kubectl create -f dashboard-controller.yaml 
serviceaccount/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
5. Exposes the service
[root@master01 dashboard]# kubectl create -f dashboard-service.yaml 
service/kubernetes-dashboard created
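The five creations above can be sketched as a single loop over the files in dependency order. The block below is only a dry-run illustration (it echoes the commands instead of running them, so it needs no cluster):

```shell
# Dry-run sketch: print the create commands in dependency order
# (rbac -> secret -> configmap -> controller -> service).
for f in dashboard-rbac.yaml dashboard-secret.yaml dashboard-configmap.yaml \
         dashboard-controller.yaml dashboard-service.yaml; do
    echo "kubectl create -f $f"
done
```

Dropping the `echo` runs the real commands; the order matters because the controller references the secret and service account created before it.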

2. Check the creation status of the dashboard pod

[root@master01 dashboard]# kubectl get pods -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
kubernetes-dashboard-65f974f565-5vgq9   1/1     Running   0          61s

The status is Running, indicating the creation succeeded [-n kube-system means viewing pods in that specific namespace]

3. View information such as ports assigned to services

[root@master01 dashboard]# kubectl get svc -n kube-system
NAME                   TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   <none>        443:30001/TCP   2m31s

The CLUSTER-IP is an internal access address, which you don't need to dig into for this article.

PS: svc here is short for service; you can see which resource names have abbreviations with the following command

[root@master01 dashboard]# kubectl api-resources
#More content you can try to verify yourself
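`kubectl api-resources` prints a SHORTNAMES column next to each resource name. As a self-contained illustration of reading it, the sample lines below are an assumption in the shape that command prints (real output varies by cluster version); awk then extracts just the name/short-name pairs:

```shell
# Assumed sample lines in the shape `kubectl api-resources` prints;
# column 1 is the full resource name, column 2 the short name.
cat > api-resources-sample.txt <<'EOF'
NAME        SHORTNAMES   NAMESPACED   KIND
configmaps  cm           true         ConfigMap
pods        po           true         Pod
services    svc          true         Service
EOF
# Skip the header row and print each name -> shortname pair.
awk 'NR > 1 { print $1, "->", $2 }' api-resources-sample.txt
```

On a live master you would pipe the real command through the same awk filter.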

4. Test access to the web UI address (combined with the mapped port number)

Start by checking which node server the pod was assigned to, using the following command (also used in earlier articles, so it isn't explained in depth here)

[root@master01 dashboard]# kubectl get pods,svc -n kube-system -o wide
NAME                                        READY   STATUS    RESTARTS   AGE   IP            NODE            NOMINATED NODE
pod/kubernetes-dashboard-65f974f565-5vgq9   1/1     Running   0          12m   <none>

NAME                           TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE   SELECTOR
service/kubernetes-dashboard   NodePort   <none>        443:30001/TCP   12m   k8s-app=kubernetes-dashboard

From the output, the pod is assigned to the node02 server and the access entry is port 30001. Open a browser to test access; the result and the analysis of its cause are shown in the figure below (you need to click the Advanced control in the Hide details area to see the details).

5. Solve the problem of encrypted communication

Therefore, we need to issue a matching certificate for this deployment. The author uses a shell script here to quickly generate the certificate files.

The script reads as follows; the cfssl certificate tooling was described earlier, when we deployed the etcd cluster on a single node and installed the apiserver component.

cat > dashboard-csr.json <<EOF
{
    "CN": "Dashboard",
    "hosts": [],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}
EOF

#Define a variable assigned from a positional parameter, specifying the location of the CA certificates the script depends on
K8S_CA=$1
#Create the certificate and sign it against the CA in the specified location
cfssl gencert -ca=$K8S_CA/ca.pem -ca-key=$K8S_CA/ca-key.pem -config=$K8S_CA/ca-config.json -profile=kubernetes dashboard-csr.json | cfssljson -bare dashboard
#Delete the existing certs secret in the namespace
kubectl delete secret kubernetes-dashboard-certs -n kube-system
#Re-create the secret from the files in the current directory
kubectl create secret generic kubernetes-dashboard-certs --from-file=./ -n kube-system

In addition, the dashboard-controller.yaml file needs to be modified at this point

#dashboard-controller.yaml: add two lines for the certificates, then apply the file
#     args:
#        - --auto-generate-certificates
#   The two additional lines below (at line 47 of the file) specify the TLS key and certificate file, i.e. the two certificate files generated by executing the script
#        - --tls-key-file=dashboard-key.pem
#        - --tls-cert-file=dashboard.pem

Execute the script, and don't forget to pass the positional argument (the certificate directory)

#First look at the files in the certificate directory
[root@master01 dashboard]# ls /root/k8s/k8s-cert/
admin.csr       admin.pem       ca-csr.json          kube-proxy-key.pem  server-csr.json
admin-csr.json  ca-config.json  ca-key.pem   kube-proxy.csr       kube-proxy.pem      server-key.pem
admin-key.pem   ca.csr          ca.pem       kube-proxy-csr.json  server.csr          server.pem
#Execute the script you just wrote
[root@master01 dashboard]# bash /root/k8s/k8s-cert/
2020/05/07 23:51:08 [INFO] generate received request
2020/05/07 23:51:08 [INFO] received CSR
2020/05/07 23:51:08 [INFO] generating key: rsa-2048
2020/05/07 23:51:08 [INFO] encoded CSR
2020/05/07 23:51:08 [INFO] signed certificate with serial number 404952983625314812290291880178217049372359470061
2020/05/07 23:51:08 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (;
specifically, section 10.2.3 ("Information Requirements").
secret "kubernetes-dashboard-certs" deleted
secret/kubernetes-dashboard-certs created
#Two certificates will be generated in this directory
[root@master01 dashboard]# find . -name "*.pem"
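To double-check what cfssl produced, you can inspect a certificate with openssl. The sketch below first creates a throwaway self-signed certificate (demo.pem, a stand-in, since the real dashboard.pem only exists on the master) and then runs the same inspection you would point at dashboard.pem:

```shell
# Create a throwaway self-signed cert as a stand-in for dashboard.pem,
# using the same subject fields as dashboard-csr.json.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/C=CN/L=BeiJing/ST=BeiJing/CN=Dashboard" \
    -keyout demo-key.pem -out demo.pem 2>/dev/null
# Print the subject and validity window; on the master, replace demo.pem
# with dashboard.pem to check the real certificate.
openssl x509 -in demo.pem -noout -subject -dates
```

The subject line should show CN = Dashboard, matching the "CN" field in the CSR above.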

You need to redeploy at this point using the following command (the pod may be rescheduled to a different node!)

[root@master01 dashboard]# kubectl apply -f dashboard-controller.yaml 
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
serviceaccount/kubernetes-dashboard configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
deployment.apps/kubernetes-dashboard configured

The warning says that resources updated with apply should have been created either with kubectl create --save-config or with apply itself (apply creates the resource if it does not exist); it can be ignored here.

To avoid errors, take a second look at the assigned node server's address and port number (they may have changed)

[root@master01 dashboard]# kubectl get pods,svc -n kube-system -o wide
NAME                                        READY   STATUS    RESTARTS   AGE     IP            NODE            NOMINATED NODE
pod/kubernetes-dashboard-7dffbccd68-z6xcj   1/1     Running   0          4m57s   <none>

NAME                           TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE   SELECTOR
service/kubernetes-dashboard   NodePort   <none>        443:30001/TCP   62m   k8s-app=kubernetes-dashboard

Run the access test again, and the results are as follows

The following dialog box appears after clicking

6. Resolve the token problem and finally access the dashboard interface successfully

Now let's take a look at the yaml file we wrote

vim k8s-admin.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
subjects:
  - kind: ServiceAccount
    name: dashboard-admin
    namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin

Token generation workflow

#Create from this file
[root@master01 dashboard]# kubectl create -f k8s-admin.yaml 
serviceaccount/dashboard-admin created
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
#Find the newly created secret, named dashboard-admin-token-rf7x2
[root@master01 dashboard]# kubectl get secret -n kube-system
NAME                               TYPE                                  DATA   AGE
dashboard-admin-token-rf7x2        kubernetes.io/service-account-token   3      25s
default-token-4mkb6                kubernetes.io/service-account-token   3      3d4h
kubernetes-dashboard-certs         Opaque                                11     29m
kubernetes-dashboard-key-holder    Opaque                                2      84m
kubernetes-dashboard-token-m8tlw   kubernetes.io/service-account-token   3      84m

#View token serial number (details)
[root@master01 dashboard]# kubectl describe secret dashboard-admin-token-rf7x2 -n kube-system
Name:         dashboard-admin-token-rf7x2
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1359 bytes
namespace:  11 bytes
#The value after the token colon is the token string we need; it is quite long
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tcmY3eDIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiYTMxYzQ0NjQtOTA3ZS0xMWVhLTllYzgtMDAwYzI5MDY5NzA0Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.IuGFrBYgeiY2yhOwmKRe3Khqs43Z197vlokr6dt-ZW1z8g8lwD7nYahb4qZQJrnkN7ibqvSoX4goCBaXI94Jk4RqmPbpnfHq-gt40tnzYBuXRKWup4GAt-b1JpnDv9cQaC20Hb30R3QGqxtbejSEYXZD3IHxVGBWepa59Lals9Xo9J4dRasHSpOHpE279JITayev4AsafBuURtOmAd0jf8DD9tmWzQzQ4i48d7YwR_KeOENi7KNi3zNS0fWFYdtUlHVS_6SAq35ioS3Rrwu1hf4ToOueJXRWRsq-JVGqj8AC4moDsz7vQFNh4tevbZqocRPq1ImFSy4bmRbGO_AMtw
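A service-account token is stored base64-encoded inside the secret; `kubectl describe secret` decodes it for you, while `kubectl get secret ... -o yaml` shows the raw encoding. A minimal sketch of that decoding step (the encoded string here is a short stand-in, not the real token above):

```shell
# Stand-in base64 payload; the real one is the long `token:` value
# shown by kubectl describe.
encoded="ZGFzaGJvYXJkLWFkbWlu"
printf '%s' "$encoded" | base64 -d    # -> dashboard-admin
echo
```

The same `base64 -d` pipe works on any field under `data:` in a secret's yaml output.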

Copy the token string into the browser page and click Sign in to reach the following interface

We can also check which resources are running in the cluster

[root@master01 dashboard]# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
nginx-dbddb74b8-cnhsl   1/1     Running   0          2d4h

One look at the name and status tells you it is running fine.

So let's use this nginx service to get a simple feel for how convenient the dashboard is

First, click Pods in the sidebar, then click the container's name to open its page; click the Exec or Logs control at the top right and another page will pop up

Below are the pages for entering this container's web terminal and for viewing its log records

Execute the command: access the container from node02

[root@node02 ~]# curl
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href=""></a>.<br/>
Commercial support is available at
<a href=""></a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@node02 ~]# 

View the log update results from the dashboard page

That's the end of this dashboard deployment and experience!


This article mainly covered the deployment and a simple hands-on experience of the Dashboard, the visual interface of a Kubernetes cluster. I'm sure you'll find that the visual interface is very friendly and greatly simplifies command operations. It also comes with services such as monitoring built in.

In this deployment, the goal is to learn the whole process of deploying the dashboard, analyze the problems that come up, and then solve them.

Finally, enjoy the simplicity of the dashboard page. Thank you for reading! We look forward to your continued attention; your attention is the greatest motivation driving the author forward!


Posted on Fri, 08 May 2020 10:18:48 -0700 by devain