My Big Data Tour - Kafka Environment Setup

 

Environment preparation:

Three CentOS virtual machines with JDK and Zookeeper installed
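
The hostnames hadoop129, hadoop130 and hadoop131 are assumed to resolve on every machine, for example through /etc/hosts (the IP addresses below are placeholders, adjust to the actual network):

192.168.1.129 hadoop129
192.168.1.130 hadoop130
192.168.1.131 hadoop131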

 

Environment setup

 

1) download the compressed package

https://www.apache.org/dyn/closer.cgi?path=/kafka/2.2.0/kafka_2.11-2.2.0.tgz
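
The closer.cgi link opens a mirror-selection page; as a sketch, the archive can also be pulled directly from the Apache release archive (the mirror URL is an assumption, adjust as needed):

[feng@hadoop129 software]$ wget https://archive.apache.org/dist/kafka/2.2.0/kafka_2.11-2.2.0.tgz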

2) decompress the package:

[feng@hadoop129 software]$ ls
kafka_2.11-2.2.0.tgz
[feng@hadoop129 software]$ tar -zxf kafka_2.11-2.2.0.tgz
[feng@hadoop129 software]$ ls
kafka_2.11-2.2.0  kafka_2.11-2.2.0.tgz
[feng@hadoop129 software]$

3) move to the specified directory:

[feng@hadoop129 software]$ mv kafka_2.11-2.2.0 /opt/module/

4) create the logs directory under /opt/module/kafka_2.11-2.2.0:

[feng@hadoop129 kafka_2.11-2.2.0]$ mkdir logs
[feng@hadoop129 kafka_2.11-2.2.0]$ ls
bin  config  libs  LICENSE  logs  NOTICE  site-docs

5) modify the configuration file:

[feng@hadoop129 config]$ vim /opt/module/kafka_2.11-2.2.0/config/server.properties

The modifications and additions to the file are as follows:

#modify
broker.id=129

#newly added: enable topic deletion
delete.topic.enable=true

#modify: path where Kafka stores its log (data) files
log.dirs=/opt/module/kafka_2.11-2.2.0/logs

#modify: Zookeeper cluster connection address
zookeeper.connect=hadoop129:2181,hadoop130:2181,hadoop131:2181
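
A quick sanity check of the four entries (a minimal sketch, run from the config directory):

[feng@hadoop129 config]$ grep -E '^(broker\.id|delete\.topic\.enable|log\.dirs|zookeeper\.connect)=' server.properties

The command should echo back exactly the four values listed above.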

6) configure environment variables:

[feng@hadoop129 kafka_2.11-2.2.0]$ sudo vim /etc/profile

Add the following configuration, then run source /etc/profile to make it take effect:

#KAFKA_HOME
export KAFKA_HOME=/opt/module/kafka_2.11-2.2.0
export PATH=$PATH:${KAFKA_HOME}/bin
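
A quick check that the variable is visible after sourcing the profile:

[feng@hadoop129 kafka_2.11-2.2.0]$ echo $KAFKA_HOME
/opt/module/kafka_2.11-2.2.0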

7) synchronize the entire kafka_2.11-2.2.0 folder to the other two machines, hadoop130 and hadoop131, and modify the broker.id in each configuration file (a sync sketch follows the list below)

hadoop130 machine:

      broker.id=130

hadoop131 machine:

      broker.id=131
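
A minimal sketch of the synchronization using scp and sed (hostnames and paths follow the cluster above; an rsync- or xsync-based distribution script would work just as well):

[feng@hadoop129 module]$ scp -r /opt/module/kafka_2.11-2.2.0 feng@hadoop130:/opt/module/
[feng@hadoop129 module]$ scp -r /opt/module/kafka_2.11-2.2.0 feng@hadoop131:/opt/module/
[feng@hadoop129 module]$ ssh feng@hadoop130 "sed -i 's/^broker.id=.*/broker.id=130/' /opt/module/kafka_2.11-2.2.0/config/server.properties"
[feng@hadoop129 module]$ ssh feng@hadoop131 "sed -i 's/^broker.id=.*/broker.id=131/' /opt/module/kafka_2.11-2.2.0/config/server.properties"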

8) start Zookeeper first, then Kafka
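
Zookeeper has to be running on all three nodes before the brokers come up; a minimal sketch, assuming Zookeeper is installed under /opt/module/zookeeper-3.4.10 (the path is an assumption, adjust to the local install):

[feng@hadoop129 zookeeper-3.4.10]$ bin/zkServer.sh start
[feng@hadoop130 zookeeper-3.4.10]$ bin/zkServer.sh start
[feng@hadoop131 zookeeper-3.4.10]$ bin/zkServer.sh start

With Zookeeper up, start Kafka on each machine: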

[feng@hadoop129 kafka_2.11-2.2.0]$ pwd
/opt/module/kafka_2.11-2.2.0
[feng@hadoop129 kafka_2.11-2.2.0]$ bin/kafka-server-start.sh config/server.properties &


[feng@hadoop130 kafka_2.11-2.2.0]$ pwd
/opt/module/kafka_2.11-2.2.0
[feng@hadoop130 kafka_2.11-2.2.0]$ bin/kafka-server-start.sh config/server.properties &


[feng@hadoop131 kafka_2.11-2.2.0]$ pwd
/opt/module/kafka_2.11-2.2.0
[feng@hadoop131 kafka_2.11-2.2.0]$ bin/kafka-server-start.sh config/server.properties &
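
Starting with & keeps the broker attached to the current terminal; the start script also accepts a -daemon flag to run it in the background, and jps can be used to confirm the Kafka process on each node:

[feng@hadoop129 kafka_2.11-2.2.0]$ bin/kafka-server-start.sh -daemon config/server.properties
[feng@hadoop129 kafka_2.11-2.2.0]$ jps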

9) create and view topics

[feng@hadoop129 kafka_2.11-2.2.0]$ bin/kafka-topics.sh --create --bootstrap-server hadoop129:9092 --replication-factor 1 --partitions 1 --topic kafka-first
[feng@hadoop129 kafka_2.11-2.2.0]$ bin/kafka-topics.sh --list --bootstrap-server hadoop129:9092
__consumer_offsets
first
first-kafaka-topic
kafka-first
test
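
Since the cluster has three brokers, a topic can also be created with a replication factor of up to 3; a sketch (the topic name kafka-replicated is just an example):

[feng@hadoop129 kafka_2.11-2.2.0]$ bin/kafka-topics.sh --create --bootstrap-server hadoop129:9092 --replication-factor 3 --partitions 3 --topic kafka-replicated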

10) send messages

[feng@hadoop129 kafka_2.11-2.2.0]$ bin/kafka-console-producer.sh --broker-list hadoop129:9092 --topic kafka-first
>Hello,Kafka ;-)
>Learning makes you happy
>

11) view messages

[feng@hadoop129 kafka_2.11-2.2.0]$ bin/kafka-console-consumer.sh --bootstrap-server hadoop129:9092 --topic kafka-first --from-beginning
Hello,Kafka ;-)
Learning makes you happy

12) view the details of a Topic

[feng@hadoop129 kafka_2.11-2.2.0]$ bin/kafka-topics.sh --describe --bootstrap-server hadoop129:9092 --topic kafka-first
Topic:kafka-first       PartitionCount:1        ReplicationFactor:1     Configs:segment.bytes=1073741824
        Topic: kafka-first      Partition: 0    Leader: 129     Replicas: 129   Isr: 129
[feng@hadoop129 kafka_2.11-2.2.0]$
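
In this output, Leader, Replicas and Isr all refer to broker ids, so the single replica of partition 0 lives on the broker with broker.id=129 (hadoop129). A topic created with replication factor 3 would list all three broker ids under Replicas and, once the followers are in sync, under Isr as well.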

This article is based on the Quick Start Kafka video course from Monk School.