[7.1.1] ELK cluster setup: the ES cluster

Foreword

I finished this post last night and only meant to check whether everything ran normally, but I didn't wrap it up in the evening. I took my laptop to work, and it turned out the draft was not saved; basically everything I wrote yesterday was lost. Fortunately, the image URLs on the blog platform were still there.

To make things easier for everyone, I'll write more casually so the posts are easier to read. The plan is to shorten each article and split the material into multiple parts. That way updates come more often, the writing is less tiring, and there is no risk of losing so much of a manuscript at once.

This article covers the cluster part of Elasticsearch; the exact setup is described below.

Deployment architecture

Overall diagram

The part covered in this article

Nodes 1-3 are the cluster's data nodes and also compete for the master role; tribe-node is a coordinating-only node, responsible for the connections from Logstash and Kibana.

The advantage is that no master node needs to be designated manually, and no additional node has to be started just for coordination, which reduces wasted resources.

Environmental preparation

  • GNU/Debian Stretch 9.9 linux-4.19
  • elasticsearch-7.1.1-linux-x86_64.tar.gz

To simulate multiple hosts, CentOS 7 containers under Docker are used. The Docker-specific steps are omitted from this article; everything else is the same as on ordinary hosts.

Start building

1. As root, edit /etc/security/limits.conf: sudo vim /etc/security/limits.conf

Add the following:

* soft memlock unlimited
* hard memlock unlimited

The * can be replaced with the Linux username that will run ES.

Save and exit. A reboot is required for this to take effect.
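The edit above can also be scripted so it is safe to re-run. A minimal sketch, shown here against a demo copy at /tmp/limits.conf.demo (a path chosen for illustration); on a real host point LIMITS at /etc/security/limits.conf and run as root:

```shell
# Sketch: append the memlock lines only if they are missing (idempotent).
# LIMITS points at a demo copy here; on a real host use
# /etc/security/limits.conf (as root).
LIMITS=/tmp/limits.conf.demo
touch "$LIMITS"
for line in "* soft memlock unlimited" "* hard memlock unlimited"; do
    # grep -qxF: whole-line, literal match; append only when absent
    grep -qxF "$line" "$LIMITS" || echo "$line" >> "$LIMITS"
done
cat "$LIMITS"
```

Running it a second time adds nothing, because each line is appended only when the literal whole-line match fails.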

2. [optional] Reduce use of the swap partition to a minimum: echo "vm.swappiness = 1" >> /etc/sysctl.conf. This can significantly improve performance.

3. Restart the system

Some settings cannot take effect without a reboot, and ES will still report errors on startup.

4. Add the user and group on each host

sudo groupadd elasticsearch #Add the elasticsearch group
sudo usermod -aG elasticsearch <username> #Add an existing user to the elasticsearch group (log in again for it to take effect)

5. Extract elasticsearch-7.1.1-linux-x86_64.tar.gz and copy it to /home/elasticsearch on each host
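As commands, step 5 looks like the sketch below. It is demonstrated under /tmp with a stand-in archive so the steps can be run anywhere; on the real hosts you already have the ES tarball and the target is /home/elasticsearch:

```shell
# Stand-in archive (on a real host you already have the ES tarball)
mkdir -p /tmp/es-demo/elasticsearch-7.1.1/bin
tar -C /tmp/es-demo -czf /tmp/es-demo/elasticsearch-7.1.1-linux-x86_64.tar.gz elasticsearch-7.1.1

# The actual steps: create the target directory and extract the tarball into it
DEST=/tmp/es-demo/home/elasticsearch   # /home/elasticsearch on a real host
mkdir -p "$DEST"
tar -xzf /tmp/es-demo/elasticsearch-7.1.1-linux-x86_64.tar.gz -C "$DEST"
ls "$DEST"   # -> elasticsearch-7.1.1
```

On the real hosts, also make sure the directory is readable and writable by the account from step 4, since ES refuses to run as root.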

6. On each host, add the following to /home/elasticsearch/elasticsearch-7.1.1/config/elasticsearch.yml:

es-node-1

# ======================== Elasticsearch Configuration =========================
cluster.name: es-cluster
node.name: node-1 
node.attr.rack: r1 
bootstrap.memory_lock: true 
http.port: 9200
network.host: 172.17.0.2
transport.tcp.port: 9300
discovery.seed_hosts: ["172.17.0.3:9300","172.17.0.4:9300","172.17.0.5:9300"] 
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"] 
gateway.recover_after_nodes: 2

es-node-2

# ======================== Elasticsearch Configuration =========================
cluster.name: es-cluster
node.name: node-2 
node.attr.rack: r1 
bootstrap.memory_lock: true 
http.port: 9200
network.host: 172.17.0.3
transport.tcp.port: 9300
discovery.seed_hosts: ["172.17.0.2:9300","172.17.0.4:9300","172.17.0.5:9300"] 
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"] 
gateway.recover_after_nodes: 2

es-node-3

# ======================== Elasticsearch Configuration =========================
cluster.name: es-cluster
node.name: node-3 
node.attr.rack: r1 
bootstrap.memory_lock: true 
http.port: 9200
network.host: 172.17.0.4
transport.tcp.port: 9300
discovery.seed_hosts: ["172.17.0.3:9300","172.17.0.2:9300","172.17.0.5:9300"] 
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"] 
gateway.recover_after_nodes: 2

es-tribe-node

# ======================== Elasticsearch Configuration =========================
cluster.name: es-cluster
node.name: tribe-node 
node.master: false
node.data: false
node.attr.rack: r1 
bootstrap.memory_lock: true 
http.port: 9200
network.host: 172.17.0.5
transport.tcp.port: 9300
discovery.seed_hosts: ["172.17.0.3:9300","172.17.0.4:9300","172.17.0.2:9300"] 
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"] 
gateway.recover_after_nodes: 2

The meaning of each parameter is explained at the end of this article.
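Since the three data-node files differ only in node.name, network.host, and the seed list (each node lists the other hosts), they can be generated from one template. A sketch using the IPs and names from this article, writing to /tmp/escfg (a path chosen for illustration):

```shell
# Sketch: generate the data-node yml files in one loop.
mkdir -p /tmp/escfg
all="172.17.0.2 172.17.0.3 172.17.0.4 172.17.0.5"
i=1
for ip in 172.17.0.2 172.17.0.3 172.17.0.4; do
    # discovery.seed_hosts: every host except the node's own IP
    seeds=""
    for h in $all; do
        [ "$h" = "$ip" ] || seeds="$seeds\"$h:9300\","
    done
    seeds=${seeds%,}   # drop the trailing comma
    cat > /tmp/escfg/node-$i.yml <<EOF
cluster.name: es-cluster
node.name: node-$i
network.host: $ip
http.port: 9200
transport.tcp.port: 9300
bootstrap.memory_lock: true
discovery.seed_hosts: [$seeds]
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]
gateway.recover_after_nodes: 2
EOF
    i=$((i+1))
done
grep network.host /tmp/escfg/node-2.yml   # -> network.host: 172.17.0.3
```

This also makes the seed-list rule explicit: node-2's file lists 172.17.0.2, 172.17.0.4, and 172.17.0.5, but not its own address.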

7. Start each node with the command ES_JAVA_OPTS="-Xms512m -Xmx512m" ./bin/elasticsearch

Notes:

  • Only a non-root user can run this, i.e. the account created at the beginning of the article
  • The path in this command is relative to the ES extraction directory
  • The JVM heap size can be adjusted as needed
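Once a node is up, whether bootstrap.memory_lock took effect can be confirmed with GET /_nodes?filter_path=**.mlockall (live: curl -s '172.17.0.2:9200/_nodes?filter_path=**.mlockall'). A sketch parsing a captured sample of that response, since no live cluster is assumed here (the node id "abc123" is a placeholder):

```shell
# Captured sample response from _nodes?filter_path=**.mlockall
cat > /tmp/mlock.json <<'EOF'
{"nodes":{"abc123":{"process":{"mlockall":true}}}}
EOF
# true means the heap was successfully locked in memory (step 1 worked)
grep -o '"mlockall":[a-z]*' /tmp/mlock.json   # -> "mlockall":true
```

If it reports false, revisit the limits.conf step and log in again as the ES user before restarting.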

Viewing the result

First, view it in a browser

In this figure the tribe-node's role still shows as mdi; I forgot to add node.data: false before taking the screenshot. With that setting it shows as i.
You can see that node-3 is the master node. I recently found a handy Elasticsearch viewing tool, cerebro.

Viewing with cerebro (cerebro on GitHub)

If a picture is unclear, you can right-click it and open it in a new tab to view the full size.

Click nodes to view the status of each node

You can also modify the cluster settings through the more menu, which is very powerful.
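The master election seen above can also be checked from the command line with curl -s '172.17.0.2:9200/_cat/nodes?v&h=ip,node.role,master,name'; the master column marks the elected master with *. A sketch against a captured sample shaped like this article's cluster:

```shell
# Captured sample of _cat/nodes?v&h=ip,node.role,master,name
cat > /tmp/nodes.txt <<'EOF'
ip         node.role master name
172.17.0.2 mdi       -      node-1
172.17.0.3 mdi       -      node-2
172.17.0.4 mdi       *      node-3
172.17.0.5 i         -      tribe-node
EOF
# Print the name of the elected master (the row whose master column is *)
awk '$3 == "*" { print $4 }' /tmp/nodes.txt   # -> node-3
```

The node.role column also confirms the layout: m = master-eligible, d = data, i = ingest, so the coordinating-only tribe-node shows just i.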

Explanation of the elasticsearch.yml parameters

cluster.name: es-cluster #Name of the ES cluster
node.name: xxxx #Name of the current node
node.data: false #Not a data node
node.master: false #Not master-eligible
node.attr.rack: r1 #Custom attribute; this example comes from the official documentation
bootstrap.memory_lock: true #Lock the process memory when ES starts
network.host: 172.17.0.5 #IP address of the current node
http.port: 9200 #HTTP port of the current node, default 9200
discovery.seed_hosts: ["172.17.0.3:9300","172.17.0.4:9300","172.17.0.2:9300"] #On startup, the node contacts this list to discover the other nodes; its own IP does not need to be listed. Both ip and ip:port forms are supported; without a port, ip:9300 is assumed
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"] #Names of the nodes eligible to become the initial master; tribe-node is not listed here
gateway.recover_after_nodes: 2 #Start data recovery only after N nodes in the cluster are up; default 1. Optional
path.data: /path/to/path #Data directory
path.logs: /path/to/path #Log directory
transport.tcp.port: 9300 #Port used for inter-node (transport) communication and node discovery

In earlier versions, discovery.seed_hosts was called discovery.zen.ping.unicast.hosts.
I also found a fairly complete configuration reference; some of its options are no longer used, but it is still of great reference value: Detailed description of elasticsearch configuration file

Remaining issues

  1. This test did not take the split-brain problem into account; adjust the configuration yourself if needed. For example, add another node with node.master: true and update cluster.initial_master_nodes accordingly (note that the tribe-node stores no data, so it should not act as master).
  2. For simplicity, no separate data directory is mounted here; do not do this in a production environment.

This is an original article; please do not reprint.

Tags: ElasticSearch network Linux sudo

Posted on Mon, 13 Apr 2020 05:05:31 -0700 by agent007