
Kibana :: Installation and Setup

by Bella

Kibana is an open-source data visualization plugin for Elasticsearch. It provides visualization capabilities on top of the content indexed on an Elasticsearch cluster. Users can create bar, line and scatter plots, or pie charts and maps on top of large volumes of data.

You can set up Kibana and start exploring your Elasticsearch indices in minutes. All you need is:

* Elasticsearch 2.3 or later
* A modern web browser (see Elastic's Supported Browsers page).

Installing Elasticsearch 2.3
=====================

Add the following contents to the file /etc/yum.repos.d/elasticsearch.repo:

[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
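The two steps above can also be scripted: a minimal sketch that imports the Elastic signing key (so that gpgcheck=1 succeeds) and writes the repository definition with a here-document:

```shell
# Import the Elastic GPG key referenced by the gpgkey= line above.
rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch

# Write the repository definition in one step.
cat > /etc/yum.repos.d/elasticsearch.repo <<'EOF'
[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
EOF
```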

Install Elasticsearch Using YUM
========================

yum install elasticsearch -y

Prerequisites
==========

We have three servers with the following IPs and hostnames. Every server has full network access to the others, by both IP and hostname.

192.168.0.XX node1
192.168.0.YY node2
192.168.0.ZZ node3

Elasticsearch can be installed on almost any Linux distribution that can run Java. We are using CentOS 7 x64 and OpenJDK 1.8 to run Elasticsearch.
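Hostname resolution between the nodes can come from DNS or from static entries. A sketch of /etc/hosts entries matching the table above (the placeholder IPs are kept as-is; substitute your real addresses):

```shell
# Append to /etc/hosts on every node so each host resolves the others by name.
cat >> /etc/hosts <<'EOF'
192.168.0.XX node1
192.168.0.YY node2
192.168.0.ZZ node3
EOF
```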

Install Java
=========

You can use either Oracle JDK or OpenJDK; both are fine. The CentOS 7 repository provides OpenJDK 1.8, which we can install with the following command:

yum install -y java

Verify Installed Java version
======================

java -version
openjdk version "1.8.0_65"
OpenJDK Runtime Environment (build 1.8.0_65-b17)
OpenJDK 64-Bit Server VM (build 25.65-b01, mixed mode)

Configuring ElasticSearch cluster
=================================

The Elasticsearch configuration files are in the /etc/elasticsearch directory. There are two files:

elasticsearch.yml: Configures the Elasticsearch server settings. This is where all options, except those for logging, are stored, which is why we are mostly interested in this file.

logging.yml: Provides configuration for logging. You can keep it default and find the resulting logs in /var/log/elasticsearch.

Edit /etc/elasticsearch/elasticsearch.yml and add the following into each node.

Node1
-----

nano -w /etc/elasticsearch/elasticsearch.yml

###
EDIT CONFIG
###

node.name: node1
cluster.name: cluster1
node.master: true
network.host: 0.0.0.0
node.data: true
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["node1", "node2", "node3"]
index.number_of_shards: 5
index.number_of_replicas: 1

###
END CONFIG
###

Node2
-----

nano -w /etc/elasticsearch/elasticsearch.yml

###
EDIT CONFIG
###

node.name: node2
cluster.name: cluster1
node.master: false
network.host: 0.0.0.0
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["node1", "node2", "node3"]
node.data: true
index.number_of_shards: 5
index.number_of_replicas: 1

###
END CONFIG
###

Node3
-----

nano -w /etc/elasticsearch/elasticsearch.yml

###
EDIT CONFIG
###

node.name: node3
cluster.name: cluster1
node.master: false
network.host: 0.0.0.0
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["node1", "node2", "node3"]
node.data: true
index.number_of_shards: 5
index.number_of_replicas: 1

###
END CONFIG
###

node.name: The name of the Elasticsearch node as shown in the cluster. If we don't set a name, the server hostname is used.
cluster.name: The Elasticsearch cluster name. Nodes with the same cluster.name join the same cluster.
network.host: Ensures the Elasticsearch nodes can communicate with each other. By default Elasticsearch binds to the loopback interface and listens on ports 9200 and 9300. Set network.host: 0.0.0.0 to make it listen on all interfaces.
discovery.zen.ping.unicast.hosts: If your nodes cannot join the cluster automatically, unicast is used as the discovery method: with multicast disabled above, each node pings the hosts in this list to find the cluster.
node.master: Determines whether the node is eligible to act as the cluster master. If you have only one Elasticsearch node, leave this option at its default value of true. Here it is set to true on the master server (node1) and false on both slave servers.
node.data: Determines whether a node stores data. In most cases this option should be left at its default value (true), but there are two cases in which you might not want to store data on a node. One is a dedicated "master" node, as mentioned above. The other is a node used only for fetching data from other nodes and aggregating the results; such a node acts as a "search load balancer".
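Once the cluster is up, the role settings above can be checked with the _cat APIs that ship with Elasticsearch 2.x (these queries require a running cluster, so they are shown here without sample output):

```shell
# List all nodes; in the master column, '*' marks the elected master,
# 'm' marks a master-eligible node, and '-' a non-eligible node.
curl -s 'localhost:9200/_cat/nodes?v'

# Show only the currently elected master node.
curl -s 'localhost:9200/_cat/master?v'
```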

Start our ElasticSearch Nodes by Following Command
=======================================

/etc/init.d/elasticsearch start
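CentOS 7 is a systemd distribution, and the Elasticsearch 2.x RPM also installs a systemd unit, so the equivalent systemd commands (including enabling the service at boot) are:

```shell
# Reload unit files, then enable and start Elasticsearch under systemd.
systemctl daemon-reload
systemctl enable elasticsearch.service
systemctl start elasticsearch.service
```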

Verify ElasticSearch Node Status
=======================

curl localhost:9200

We should get output like below:

{
  "name" : "node1",
  "cluster_name" : "cluster1",
  "version" : {
    "number" : "2.1.1",
    "build_hash" : "40e2c53a6b6c2972b3d13846e450e66f4375bd71",
    "build_timestamp" : "2015-12-15T13:05:55Z",
    "build_snapshot" : false,
    "lucene_version" : "5.3.1"
  },
  "tagline" : "You Know, for Search"
}

Verify ElasticSearch Cluster Status
=========================

curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'

We should get output like below:

{
  "cluster_name" : "cluster1",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

The cluster status is "green", which means everything is good. There are three statuses we may see:

green: Great. The cluster is fully operational; Elasticsearch was able to allocate all shards and replicas to nodes within the cluster.
yellow: Elasticsearch has allocated all of the primary shards, but some or all of the replicas have not been allocated.
red: This is really bad. Some or all primary shards are not ready.
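The same check can be scripted. A minimal sketch that extracts the status field from the health response with sed, shown here against a canned response; on a live node you would instead set the variable with health=$(curl -s localhost:9200/_cluster/health):

```shell
# Canned health response for illustration (use curl on a live node).
health='{"cluster_name" : "cluster1", "status" : "green", "timed_out" : false}'

# Pull out the value of the "status" field.
status=$(echo "$health" | sed -n 's/.*"status"[^"]*"\([a-z]*\)".*/\1/p')

echo "$status"    # green

# Warn unless the cluster is fully operational.
[ "$status" = "green" ] || echo "cluster is $status, investigate before proceeding"
```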

Install Kibana
===========

Download and install the public signing key:
==================================

rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch

Create a file named kibana.repo in the /etc/yum.repos.d/ directory with the following contents:

[kibana-4.5]
name=Kibana repository for 4.5.x packages
baseurl=http://packages.elastic.co/kibana/4.5/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1

Install Kibana by Running the Following Command:
======================================

yum install kibana

Configure Kibana to automatically start during bootup. If your distribution is using the System V version of init (check with ps -p 1), run the following command:

chkconfig --add kibana
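On systemd distributions such as CentOS 7 (again, check with ps -p 1), the Kibana RPM installs a systemd unit instead, and the equivalent commands are:

```shell
# Reload unit files, then enable Kibana at boot under systemd.
systemctl daemon-reload
systemctl enable kibana.service
```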

Start the service with:

/etc/init.d/kibana start

Verify Kibana Status
====================

You can access the Kibana UI in a browser on port 5601:

For example, localhost:5601, http://YOURDOMAIN.com:5601, or http://serverip:5601
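Before opening a browser, you can confirm from the server itself that Kibana is listening (assuming the default port 5601 has not been changed in kibana.yml):

```shell
# An HTTP response on port 5601 confirms the Kibana service is up.
curl -I localhost:5601
```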

(Screenshot: Kibana User Interface)

(Screenshot: Kibana Dashboard)

If you require help, contact SupportPRO Server Admin


©2022  SupportPRO.com. All Rights Reserved