Set up and configure a three-node Elasticsearch cluster on Ubuntu Server

ADDI Kamal
Dec 6, 2021

Elasticsearch installation:

Download and install the Elasticsearch public signing key:

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

Install dependencies:

sudo apt-get install apt-transport-https

Save the repository definition:

echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-7.x.list

Install the Elasticsearch package:

sudo apt-get update && sudo apt-get install elasticsearch

Edit the Elasticsearch configuration on each node:

sudo nano /etc/elasticsearch/elasticsearch.yml
  • elasticsearch1:

/etc/elasticsearch/elasticsearch.yml configuration:
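
The original post shows this configuration as a screenshot. A minimal sketch for node-1, reusing the cluster name and transport addresses that appear in the cluster state output later in this post; the remaining lines are standard Elasticsearch 7.x settings:

# Common cluster name, so all three nodes join the same cluster
cluster.name: es-cluster
# This node's name and bind address
node.name: node-1
network.host: 10.5.3.52
http.port: 9200
# Addresses of all three nodes, used for discovery
discovery.seed_hosts: ["10.5.3.52", "10.5.1.184", "10.5.4.247"]
# Only consulted the first time the cluster bootstraps
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]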

  • elasticsearch2:

/etc/elasticsearch/elasticsearch.yml configuration:
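
The same sketch for node-2, changing only the node name and bind address:

cluster.name: es-cluster
node.name: node-2
network.host: 10.5.1.184
http.port: 9200
discovery.seed_hosts: ["10.5.3.52", "10.5.1.184", "10.5.4.247"]
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]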

  • elasticsearch3:

/etc/elasticsearch/elasticsearch.yml configuration:
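
And for node-3:

cluster.name: es-cluster
node.name: node-3
network.host: 10.5.4.247
http.port: 9200
discovery.seed_hosts: ["10.5.3.52", "10.5.1.184", "10.5.4.247"]
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]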

Start Elasticsearch on Master Node-1:

sudo service elasticsearch start

Now we can check that Elasticsearch is running on the master node:

sudo service elasticsearch status
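
This should report the service as active; roughly like the following (paths and timestamps will differ):

● elasticsearch.service - Elasticsearch
     Loaded: loaded (/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabled)
     Active: active (running)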

Start Elasticsearch on Slave Node-2:

sudo service elasticsearch start

Now we can check that Elasticsearch is running on Node-2:

sudo service elasticsearch status

Start Elasticsearch on Slave Node-3:

sudo service elasticsearch start

Now we can check that Elasticsearch is running on Node-3:

sudo service elasticsearch status

Check the default response and the cluster health from any of our machines:

curl -XGET 'http://localhost:9200/?pretty'

We should get the following output:
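
The screenshot from the original post is not reproduced here; a reconstructed, abridged example, with name, cluster_name, and cluster_uuid taken from the cluster state output further below (the version fields are illustrative):

{
  "name" : "node-1",
  "cluster_name" : "es-cluster",
  "cluster_uuid" : "8FGqDQHFQS2xXLqn--Cd7Q",
  "version" : {
    "number" : "7.15.2",
    "build_flavor" : "default",
    "build_type" : "deb",
    "lucene_version" : "8.9.0"
  },
  "tagline" : "You Know, for Search"
}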

curl -XGET 'http://localhost:9200/_cluster/health?pretty'

We should get the following output:
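
Again reconstructed rather than copied from the original screenshot; the shard counts depend on the indices present:

{
  "cluster_name" : "es-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 3,
  "active_shards" : 6,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}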

number_of_nodes should be greater than 1, and since we have 3 nodes in total, each machine should report number_of_nodes : 3.

To see the state of the master node in the cluster:

curl -XGET 'http://localhost:9200/_cluster/state/master_node?pretty'

We should get the following output:
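
Reconstructed as well; the master_node value here is node-1's ID from the node listing below, since Node-1 was started first:

{
  "cluster_name" : "es-cluster",
  "cluster_uuid" : "8FGqDQHFQS2xXLqn--Cd7Q",
  "master_node" : "lmSTBa7CQmeeSmDjtmFn9w"
}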

Then we check that the cluster_uuid matches the cluster_uuid on the master node that we started first.

To see a list of node UUIDs that are active in the cluster:

addihossam@elasticsearch1:~$ curl -XGET 'http://localhost:9200/_cluster/state/nodes?pretty'
{
  "cluster_name" : "es-cluster",
  "cluster_uuid" : "8FGqDQHFQS2xXLqn--Cd7Q",
  "nodes" : {
    "lmSTBa7CQmeeSmDjtmFn9w" : {
      "name" : "node-1",
      "ephemeral_id" : "3F_PSnSrT8e0DAvTvymtWA",
      "transport_address" : "10.5.3.52:9300",
      "attributes" : {
        "ml.machine_memory" : "4136300544",
        "xpack.installed" : "true",
        "transform.node" : "true",
        "ml.max_open_jobs" : "512",
        "ml.max_jvm_size" : "2067791872"
      },
      "roles" : [
        "data",
        "data_cold",
        "data_content",
        "data_frozen",
        "data_hot",
        "data_warm",
        "ingest",
        "master",
        "ml",
        "remote_cluster_client",
        "transform"
      ]
    },
    "zVRHY4pdTz6kBPu4HzzPxg" : {
      "name" : "node-2",
      "ephemeral_id" : "hJkwl3_aTO2U9NLlU_TZng",
      "transport_address" : "10.5.1.184:9300",
      "attributes" : {
        "ml.machine_memory" : "4136284160",
        "ml.max_open_jobs" : "512",
        "xpack.installed" : "true",
        "ml.max_jvm_size" : "2067791872",
        "transform.node" : "true"
      },
      "roles" : [
        "data",
        "data_cold",
        "data_content",
        "data_frozen",
        "data_hot",
        "data_warm",
        "ingest",
        "master",
        "ml",
        "remote_cluster_client",
        "transform"
      ]
    },
    "W31B2PJrQsWHkV2rHXqFwQ" : {
      "name" : "node-3",
      "ephemeral_id" : "Aavq3hGMQ0uEATwp-Lzb_g",
      "transport_address" : "10.5.4.247:9300",
      "attributes" : {
        "ml.machine_memory" : "4136284160",
        "ml.max_open_jobs" : "512",
        "xpack.installed" : "true",
        "ml.max_jvm_size" : "2067791872",
        "transform.node" : "true"
      },
      "roles" : [
        "data",
        "data_cold",
        "data_content",
        "data_frozen",
        "data_hot",
        "data_warm",
        "ingest",
        "master",
        "ml",
        "remote_cluster_client",
        "transform"
      ]
    }
  }
}

Install and Configure Logstash:

We can install Logstash easily with the following command:

sudo apt-get install logstash -y

Once Logstash is installed, we need to configure the input, filter, and output plugins. We can do this by creating a new configuration file inside the /etc/logstash/conf.d/ directory:
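
The original post does not name the file; on the Debian/Ubuntu package, the default pipeline picks up any .conf file in that directory, so the filename below is just an example:

sudo nano /etc/logstash/conf.d/beats.conf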

# Specify the listening port for incoming logs from Beats
input {
  beats {
    port => 5044
  }
}

# Parse syslog messages and send them to Elasticsearch for storage
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGLINE}" }
    }
    date {
      match => [ "timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

# Specify the Elasticsearch instance to write to
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}

Save and close the file, then start Logstash and check its status:

sudo service logstash start
sudo service logstash status

Install and Configure Kibana:

We can install Kibana with the following command:

sudo apt-get install kibana -y

By default, Kibana listens only on localhost, so we will need to configure it for external access.

We can do this by editing the file /etc/kibana/kibana.yml and changing the following lines:
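
The original post shows the changed lines as a screenshot; a minimal sketch that binds Kibana to all interfaces and points it at the local Elasticsearch node (the bind address is an assumption, any externally reachable address works):

server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://localhost:9200"]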

Save and close the file, then start Kibana and check its status:

sudo service kibana start
sudo service kibana status

Now that we are sure our cluster is working, we can access Kibana from a browser using the IP address of the master node and port 5601: http://10.5.3.52:5601/app/home#/
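
Optionally, we can confirm from the shell that Kibana is up before opening the browser; Kibana's status endpoint returns JSON once the server is ready:

curl http://10.5.3.52:5601/api/status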
