Kubernetes Cluster Container Log Collection

Reference Documents

https://yq.aliyun.com/articles/679721
https://www.cnblogs.com/keithtt/p/6410249.html
https://github.com/kiwigrid/helm-charts/tree/master/charts/fluentd-elasticsearch
https://github.com/kubernetes/kubernetes/tree/5d9d5bca796774a2c12d4e4443e684b619cda7ee/cluster/addons/fluentd-elasticsearch

Kubernetes Log Collection Summary

There are several kinds of logs in a Kubernetes cluster; three concern Kubernetes itself (example commands for each follow the list):
1. Events emitted while resources run. For example, after creating a pod in a k8s cluster, you can view the pod's details, including its events, with the kubectl describe pod command.
2. Logs generated by applications running in containers, such as tomcat, nginx, and php. For example, kubectl logs redis-master-bobr0. This is what most articles on the Internet, and the official documentation, focus on.
3. Service logs of the k8s components themselves, e.g. systemctl status kubelet.
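
As a quick illustration of where each kind lives (the pod name is the redis example above; the kubelet command assumes it runs as a systemd service):

kubectl describe pod redis-master-bobr0    # resource events for one pod
kubectl get events                         # recent events across the namespace
kubectl logs redis-master-bobr0            # application stdout/stderr
journalctl -u kubelet                      # kubelet service log, run on the node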

Container logs are usually collected in one of the following ways:
1. Collection outside the container. Mount a host directory as the container's log directory and collect the logs on the host.
2. Collection inside the container. Run a background log collection service in the container.
3. A dedicated log container. Run a separate container that shares a log volume with the application container and collects the logs from that volume.
4. Network collection. The application in the container sends logs directly to a log center; for example, a java program can use log4j2 to format logs and ship them to a remote endpoint.
5. Changing docker's log-driver. The log-driver can be set to syslog, fluentd, splunk, or another log collection service, which then forwards the logs to the remote end (see the sketch after this list).
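
As a sketch of method 5, a minimal /etc/docker/daemon.json that switches docker's default log-driver to fluentd (this assumes a fluentd forward input listening on each host at the default port 24224; docker must be restarted afterwards):

{
  "log-driver": "fluentd",
  "log-opts": {
    "fluentd-address": "localhost:24224"
  }
}

Note that with a non-default driver like this, docker logs (and therefore kubectl logs) may no longer work on older docker versions, since log output bypasses the local json files.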

A Brief Introduction to fluentd-elasticsearch

Fluentd is deployed as a DaemonSet, which spawns a pod on each node. Each pod reads the logs generated by the kubelet, the container runtime, and the containers, and sends them to Elasticsearch.
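
Concretely, the chart's default containersInputConf has fluentd tail the per-container log files the runtime writes under /var/log/containers. A simplified sketch of such a source block (the chart's real config adds multi-format parsing and more):

<source>
  @id fluentd-containers.log
  @type tail                            # follow files incrementally, like tail -f
  path /var/log/containers/*.log        # symlinks to the docker json-file logs
  pos_file /var/log/containers.log.pos  # remember read position across restarts
  tag raw.kubernetes.*
  read_from_head true
  <parse>
    @type json                          # json-file driver writes one JSON object per line
  </parse>
</source>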

Installation and Deployment

1. Download

[root@elasticsearch01 yaml]# git clone https://github.com/kiwigrid/helm-charts
Cloning into 'helm-charts'...
remote: Enumerating objects: 33, done.
remote: Counting objects: 100% (33/33), done.
remote: Compressing objects: 100% (23/23), done.
remote: Total 1062 (delta 13), reused 25 (delta 10), pack-reused 1029
Receiving objects: 100% (1062/1062), 248.83 KiB | 139.00 KiB/s, done.
Resolving deltas: 100% (667/667), done.

[root@elasticsearch01 yaml]# cd helm-charts/fluentd-elasticsearch
[root@elasticsearch01 fluentd-elasticsearch]# ls
Chart.yaml  OWNERS  README.md  templates  values.yaml

2. Modify the values.yaml configuration
Mainly modify the fluentd image repository, the Elasticsearch address, the index prefix, and similar settings:

[root@elasticsearch01 fluentd-elasticsearch]# cat values.yaml |grep -Ev "^#|^$"
image:
  repository: registry.cn-beijing.aliyuncs.com/minminmsn/fluentd-elasticsearch
  tag: v2.5.2
  pullPolicy: IfNotPresent
  ## Optionally specify an array of imagePullSecrets.
  ## Secrets must be manually created in the namespace.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ##
  # pullSecrets:
  #   - myRegistrKeySecretName
awsSigningSidecar:
  enabled: false
  image:
    repository: abutaha/aws-es-proxy
    tag: 0.9
priorityClassName: ""
hostLogDir:
  varLog: /var/log
  dockerContainers: /var/lib/docker/containers
  libSystemdDir: /usr/lib64
resources: {}
  # limits:
  #   cpu: 100m
  #   memory: 500Mi
  # requests:
  #   cpu: 100m
  #   memory: 200Mi
elasticsearch:
  auth:
    enabled: false
    user: "yourUser"
    password: "yourPass"
  buffer_chunk_limit: 2M
  buffer_queue_limit: 8
  host: '10.2.8.44'
  logstash_prefix: 'logstash'
  port: 9200
  scheme: 'http'
  ssl_version: TLSv1_2
fluentdArgs: "--no-supervisor -q"
env:
  # OUTPUT_USER: my_user
  # LIVENESS_THRESHOLD_SECONDS: 300
  # STUCK_THRESHOLD_SECONDS: 900
secret:
rbac:
  create: true
serviceAccount:
  # Specifies whether a ServiceAccount should be created
  create: true
  # The name of the ServiceAccount to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""
podSecurityPolicy:
  enabled: false
  annotations: {}
    ## Specify pod annotations
    ## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#apparmor
    ## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#seccomp
    ## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#sysctl
    ##
    # seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
    # seccomp.security.alpha.kubernetes.io/defaultProfileName: 'docker/default'
    # apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'
livenessProbe:
  enabled: true
annotations: {}
podAnnotations: {}
  # prometheus.io/scrape: "true"
  # prometheus.io/port: "24231"
updateStrategy:
  type: RollingUpdate
tolerations: {}
  # - key: node-role.kubernetes.io/master
  #   operator: Exists
  #   effect: NoSchedule
affinity: {}
  # nodeAffinity:
  #   requiredDuringSchedulingIgnoredDuringExecution:
  #     nodeSelectorTerms:
  #     - matchExpressions:
  #       - key: node-role.kubernetes.io/master
  #         operator: DoesNotExist
nodeSelector: {}
service: {}
  # type: ClusterIP
  # ports:
  #   - name: "monitor-agent"
  #     port: 24231
serviceMonitor:
  ## If true, a ServiceMonitor CRD is created for a prometheus operator
  ## https://github.com/coreos/prometheus-operator
  ##
  enabled: false
  interval: 10s
  path: /metrics
  labels: {}
prometheusRule:
  ## If true, a PrometheusRule CRD is created for a prometheus operator
  ## https://github.com/coreos/prometheus-operator
  ##
  enabled: false
  prometheusNamespace: monitoring
  labels: {}
  #  role: alert-rules
configMaps:
  useDefaults:
    systemConf: true
    containersInputConf: true
    systemInputConf: true
    forwardInputConf: true
    monitoringConf: true
    outputConf: true
extraConfigMaps:
  # system.conf: |-
  #   <system>
  #     root_dir /tmp/fluentd-buffers/
  #   </system>
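
Alternatively, the same settings can be overridden on the command line instead of editing values.yaml; a sketch using the values above:

helm install . \
  --set image.repository=registry.cn-beijing.aliyuncs.com/minminmsn/fluentd-elasticsearch \
  --set image.tag=v2.5.2 \
  --set elasticsearch.host=10.2.8.44 \
  --set elasticsearch.port=9200 \
  --set elasticsearch.logstash_prefix=logstash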

3. Install fluentd with helm

[root@elasticsearch01 fluentd-elasticsearch]# helm  install .
NAME:   sanguine-dragonfly
LAST DEPLOYED: Thu Jun  6 16:07:55 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ServiceAccount
NAME                                      SECRETS  AGE
sanguine-dragonfly-fluentd-elasticsearch  0        0s

==> v1/ClusterRole
NAME                                      AGE
sanguine-dragonfly-fluentd-elasticsearch  0s

==> v1/ClusterRoleBinding
NAME                                      AGE
sanguine-dragonfly-fluentd-elasticsearch  0s

==> v1/DaemonSet
NAME                                      DESIRED  CURRENT  READY  UP-TO-DATE  AVAILABLE  NODE SELECTOR  AGE
sanguine-dragonfly-fluentd-elasticsearch  0        0        0      0           0          <none>         0s

==> v1/ConfigMap
NAME                                      DATA  AGE
sanguine-dragonfly-fluentd-elasticsearch  6     0s

NOTES:
1. To verify that Fluentd has started, run:

  kubectl --namespace=default get pods -l "app.kubernetes.io/name=fluentd-elasticsearch,app.kubernetes.io/instance=sanguine-dragonfly"

THIS APPLICATION CAPTURES ALL CONSOLE OUTPUT AND FORWARDS IT TO elasticsearch . Anything that might be identifying,
including things like IP addresses, container images, and object names will NOT be anonymized.

4. Verify the installation

[root@elasticsearch01 fluentd-elasticsearch]# kubectl get pods |grep flu
sanguine-dragonfly-fluentd-elasticsearch-hrxbp   1/1     Running   0          26m
sanguine-dragonfly-fluentd-elasticsearch-jcznt   1/1     Running   0          26m
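
If a pod is stuck, or to confirm that fluentd is shipping logs to Elasticsearch without errors, check its log (pod name taken from the output above):

kubectl logs sanguine-dragonfly-fluentd-elasticsearch-hrxbp --tail=20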

Elastic Stack Operations

1. Elasticsearch
An index in the logstash-2019.06.06 style is created in Elasticsearch. Indices are created per day by default, and the logstash prefix is the one set via logstash_prefix in the values.yaml configuration file.
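
The index can be confirmed directly with Elasticsearch's cat API, using the host and port configured earlier:

curl -s 'http://10.2.8.44:9200/_cat/indices?v' | grep logstash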

2. Kibana
In Kibana, create an index pattern matching logstash-2019* (Management > Index Patterns), then browse the collected logs under Discover.
