Building a log collection system with Spring Boot + Kafka + ELK + Docker + Docker Compose on CentOS

I referred to log collection setups that many people have shared online and then built my own. I ran into plenty of problems along the way, but after various attempts it finally worked, so here is the full process. The setup is based on Docker and Docker Compose.

1. Create two virtual machines in VMware, both running CentOS. Below they are referred to by the last octet of their IP addresses: 207 (192.168.1.207) will run Kafka, and 208 (192.168.1.208) will run ELK.

Install Docker and Docker Compose on both machines.

For installation details, see the separate article Install docker and docker compose.
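
Once both are installed, a quick sanity check on each machine is worth doing before going further (a minimal sketch; the systemctl commands apply to CentOS 7):

##Verify the installations
docker --version
docker-compose --version
##Make sure the Docker daemon is running, and start it at boot
systemctl start docker
systemctl enable docker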

2. Set the IP addresses of the two machines

##Check the machine's current IP
ifconfig

cd /etc/sysconfig/network-scripts/
vi ifcfg-ens33
##After editing, restart the network service
service network restart
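
For reference, a static configuration in ifcfg-ens33 might look like the sketch below for the 207 machine; the gateway and DNS values here are assumptions for a typical home network, so adjust them to yours:

BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.1.207
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=192.168.1.1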

3. Install Kafka on the 207 machine. Kafka depends on ZooKeeper, so install them in order:

docker pull wurstmeister/zookeeper
docker run -d --name zookeeper -p 2181:2181 -t wurstmeister/zookeeper
docker pull wurstmeister/kafka

docker run -d --name kafka -p 9092:9092 \
  -e KAFKA_BROKER_ID=0 \
  -e KAFKA_ZOOKEEPER_CONNECT=192.168.1.207:2181 \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.1.207:9092 \
  -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 \
  -t wurstmeister/kafka

After startup, enter the Kafka container and run a console producer and consumer to verify the broker works:

##Enter the container
docker exec -it kafka /bin/bash
##Go to the Kafka directory
cd /opt/kafka_2.12-2.2.0/
##Start a console producer (type a message and press Enter)
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic mytopic
##Start a console consumer in a second shell; it should print what the producer sent
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic mytopic --from-beginning
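
The console producer auto-creates mytopic under the broker's default settings; if you prefer to create it explicitly first, here is a sketch using the CLI bundled with Kafka 2.2.0, run from the same directory inside the container:

##Create the topic explicitly (optional)
bin/kafka-topics.sh --create --zookeeper 192.168.1.207:2181 --replication-factor 1 --partitions 1 --topic mytopic
##List topics to confirm it exists
bin/kafka-topics.sh --list --zookeeper 192.168.1.207:2181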


4. Install ELK on the 208 machine, using the docker-elk project from GitHub.

Git must be installed before you can clone the project:

##Check whether git is already installed
git --version
##Install git
yum install -y git
##Confirm the installation succeeded
git --version
##(To uninstall git later, if ever needed)
yum remove git

Clone and start ELK following the instructions in the project's README, and verify the stack comes up before moving on.
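
A sketch of those steps, assuming the widely used deviantony/docker-elk repository (substitute the URL of whichever docker-elk fork you use):

##Clone the project and start the stack
git clone https://github.com/deviantony/docker-elk.git
cd docker-elk
docker-compose up -d
##Verify Elasticsearch answers on 9200 (add -u elastic:changeme if your version of the repo enables X-Pack security)
curl http://localhost:9200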

5. Nothing above connects ELK to Kafka yet, so modify the configuration file logstash/config/test.conf under the docker-elk installation directory:

input {
      file {
            path => "/logs/input/*"
      }
      stdin {}
      kafka {
            bootstrap_servers => ["192.168.1.207:9092"]   # Kafka broker address
            topics => ["mytopic"]                         # multiple topics are comma-separated
      }
}

output {
      file {
            path => "/logs/output/%{+yyyy-MM-dd}/mylog.log"   # also write the logs to a local file, one directory per day
      }
      stdout {
            codec => rubydebug
      }
      elasticsearch {
            hosts => "elasticsearch:9200"      # Elasticsearch address
            index => "file-log-%{+YYYY.MM}"    # name of the index in Elasticsearch
      }
}

Restart docker-elk after the modification.
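
For example, if the stack was started with docker-compose from the docker-elk directory, restarting only Logstash is enough to pick up the new pipeline:

cd docker-elk
docker-compose restart logstash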

6. In the Spring Boot project, use Logback to send the log output to Kafka

Add the dependency to pom.xml:

<!-- Kafka dependency: the old Scala producer API, which the appender below uses -->
<dependency>
     <groupId>org.apache.kafka</groupId>
     <artifactId>kafka_2.11</artifactId>
     <version>0.8.2.2</version>
</dependency>

Create a Formatter interface:

package com.open.beijing.kafka.log;

import ch.qos.logback.classic.spi.ILoggingEvent;

public interface Formatter {

    String format(ILoggingEvent event);
}

Create a MessageFormatter implementation that renders an event as a single line (timestamp, thread, level, message):

package com.open.beijing.kafka.log;

import ch.qos.logback.classic.spi.ILoggingEvent;
import com.open.beijing.utils.JodaTimeUtil;

public class MessageFormatter implements Formatter {
    @Override
    public String format(ILoggingEvent event) {
        // JodaTimeUtil is a project utility class that formats the epoch-millisecond timestamp
        return JodaTimeUtil.formatDateToString(event.getTimeStamp()) + "  "
                + event.getThreadName() + "  "
                + event.getLevel() + "  "
                + event.getFormattedMessage();
    }
}

Create the KafkaAppender, a Logback appender that hands each formatted event to a Kafka producer:

package com.open.beijing.kafka.log;

import java.util.Properties;


import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.AppenderBase;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

/**
 * Logback appender that forwards formatted log events to Kafka.
 */
public class KafkaAppender extends AppenderBase<ILoggingEvent> {

    private String topic;
    private String zookeeperHost;  // settable from logback.xml, but unused: the producer connects via brokerList
    private Producer<String, String> producer;
    private Formatter formatter;
    private String brokerList;

    public String getTopic() {
        return topic;
    }

    public void setTopic(String topic) {
        this.topic = topic;
    }

    public String getZookeeperHost() {
        return zookeeperHost;
    }

    public void setZookeeperHost(String zookeeperHost) {
        this.zookeeperHost = zookeeperHost;
    }

    public Formatter getFormatter() {
        return formatter;
    }

    public void setFormatter(Formatter formatter) {
        this.formatter = formatter;
    }

    public String getBrokerList() {
        return brokerList;
    }

    public void setBrokerList(String brokerList) {
        this.brokerList = brokerList;
    }

    @Override
    public void start() {
        if (this.formatter == null) {
            this.formatter = new MessageFormatter();  // fall back to the default single-line format
        }
        super.start();
        // Configure the old Scala producer; it needs only the broker list
        Properties props = new Properties();
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("metadata.broker.list", this.brokerList);
        ProducerConfig config = new ProducerConfig(props);
        this.producer = new Producer<String, String>(config);
    }

    @Override
    public void stop() {
        super.stop();
        if (this.producer != null) {  // guard against stop() before a successful start()
            this.producer.close();
        }
    }

    @Override
    protected void append(ILoggingEvent event) {
        String payload = this.formatter.format(event);
        KeyedMessage<String, String> data = new KeyedMessage<String, String>(this.topic, payload);
        this.producer.send(data);
    }

}
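
One caveat with this appender: AppenderBase serializes calls to append(), and the old producer sends synchronously by default, so every shipped log statement blocks until Kafka acknowledges it. Wrapping the appender in Logback's AsyncAppender is a common way to keep logging off the request path.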

7. Create logback.xml in the resources directory:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <appender name="KAFKA" class="com.open.beijing.kafka.log.KafkaAppender">
        <topic>mytopic</topic><!-- Kafka topic -->
        <brokerList>192.168.1.207:9092</brokerList><!-- Kafka broker address -->
    </appender>
    <!-- Silence the Kafka client's own logging so the appender does not log about itself -->
    <logger name="org.apache.kafka.clients.consumer.ConsumerConfig" level="off" />
    <logger name="org.apache.kafka" level="off" />
    <root level="warn">
        <appender-ref ref="KAFKA"/>
    </root>
</configuration>
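
Once this is in place, any SLF4J logger in the project ships WARN-and-above messages to Kafka. A minimal sketch (the class and messages below are hypothetical examples, not part of the original project):

package com.open.beijing.demo;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LogDemo {

    private static final Logger log = LoggerFactory.getLogger(LogDemo.class);

    public static void main(String[] args) {
        // With <root level="warn">, only WARN and ERROR reach the KafkaAppender
        log.info("not shipped to Kafka");
        log.warn("this message is sent to topic mytopic");
    }
}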

8. Start the project, then open Kibana in a local browser at http://192.168.1.208:5601 to view the data. In Kibana, create an index pattern matching file-log-* (the index name configured in the Logstash output) and the log documents should appear in Discover.

9. Success!!
