
Install Kibana In Simple Steps

Kibana is a great tool that brings the power of dashboards to Elasticsearch data. You can visualize and analyze your data with it. Installing Kibana is no big deal until you get stuck, and I did. Here I present the process to install Kibana in simple steps.

The following process has been tested on Ubuntu 14.04.

1 – Download the Kibana version compatible with your Elasticsearch version from here.

Check the compatibility in the image below.
(image: Kibana–Elasticsearch version compatibility matrix)
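For example, you can fetch a 4.x Linux build with wget; the exact URL and version number below are assumptions, so check the downloads page for the build that matches your Elasticsearch:

wget https://download.elastic.co/kibana/kibana/kibana-4.1.1-linux-x64.tar.gz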

2 – Open a terminal. Move into the directory where you want to install Kibana (/var/www here).
cd /var/www

3 – Extract the downloaded file; use the command below if it is a tar archive.
tar -xvzf ~/Downloads/kibana-4.x.x.tar.gz -C ./ (the downloaded file kibana-4.x.x.tar.gz is kept in the Downloads folder)

4 – Move into the extracted Kibana folder.
cd kibana-4.x.x

5 – Open the Kibana configuration file, kibana.yml.
sudo nano config/kibana.yml

6 – Now modify it to point to your Elasticsearch host; by default Kibana looks for Elasticsearch on the same host at port 9200. Make sure Elasticsearch is already running.
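The relevant lines in kibana.yml look something like this; this is a sketch for Kibana 4.x, and the key names can differ across versions:

# kibana.yml (Kibana 4.x style settings)
port: 5601
host: "0.0.0.0"
elasticsearch_url: "http://localhost:9200"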

7 – Now start Kibana.
sudo bash bin/kibana (to run it in the background, append an `&`: sudo bash bin/kibana &)

8 – If everything went fine, you can see the Kibana dashboard in your browser at http://localhost:5601
If something went wrong, the error message can be found in the terminal itself.
If it says the port is already occupied, you need to kill the Kibana process that is already running.
Find the process id (pid) with ps aux | grep "kibana", then kill that process with kill -9 respective_pid. Now start Kibana again using step 7.
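Alternatively, a single command does both steps (pkill ships with Ubuntu by default; the -f flag matches against the full command line):

sudo pkill -9 -f kibana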

That is all for the steps to install Kibana.

For any other query, reach me -> [email protected]

If you found the above blog interesting, check out the installation of ELK (Elasticsearch, Logstash, Kibana) as well.


ELK: Configure Elasticsearch, Logstash, Kibana to visualize and analyze the logs.

This is about how to configure Elasticsearch, Logstash, and Kibana together to visualize and analyze logs. There are situations when you want to see what's happening on your server, so you switch on your access log. Of course you can tail -f or grep that log file, but I tell you, it is really cumbersome to analyze the logs that way.

What if I told you there is a super cool tool, Kibana, which lets you analyze your logs and gives you all kinds of information in just a few clicks? Kibana is a GUI tool designed for analyzing large log files; used properly with Logstash and Elasticsearch, it can be a boon for developers.

Now, Logstash is a tool used to move logs from one file or storage (such as S3) to another. Elasticsearch is a search server used to store the logs in a structured format.

Now, here is the picture in short: Logstash will bring logs from S3, format them, and send them to Elasticsearch. Kibana will fetch the logs from Elasticsearch and show them to you. With that, let's get to the thing which actually matters: the code. I assume you have everything installed (ELK) and S3 as the source of our logs (why?).

So first we configure Logstash. There are mainly three blocks in a Logstash config: input, filter, and output. In the input block we specify the source of the log file, in the filter block we format the logs the way we want them stored in Elasticsearch, and in the output block we specify the destination for the output.

Code:

Open up a terminal:

sudo nano /etc/logstash/conf.d/logstash.conf

and edit it to contain the following:

input {
  s3 {
    bucket => "bucket_name_containing_logs"
    credentials => ["ACCESS_KEY", "SECRET_KEY"]
    region_endpoint => "whichever_probably_us-east-1"
    codec => json {
      charset => "ISO-8859-1"
    }
  }
}

filter {
  grok {
    match => { "message" => "grok_pattern" }
  }
}

output {
  #stdout {
  #  codec => json
  #}
  elasticsearch_http {
    host => "localhost"
    port => "9200"
    codec => json {
      charset => "ISO-8859-1"
    }
  }
}

Explanation:

In the input block, we specify that our logs come from S3 and provide the necessary credentials to access the S3 bucket. We also specify the charset for the input.

In the filter block, we use the grok filter to create custom fields in Kibana by writing a proper pattern for the input logs. You can use the Grok Debugger to debug your pattern against a given input.
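For instance, if the incoming lines are in the standard combined access-log format, the stock COMBINEDAPACHELOG pattern that ships with Logstash already does the job. This is just a sketch; your log format may differ:

filter {
  grok {
    # %{COMBINEDAPACHELOG} splits a combined-format access log line into
    # fields such as clientip, verb, request, response and bytes
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}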

In the output block, we specify Elasticsearch as the destination for the output, along with its address. We also specify the charset for the output.

You can uncomment the stdout block inside the output block to also print data to the console.
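After saving the file, you can sanity-check the configuration and restart Logstash. The path and the --configtest flag below assume a standard Logstash 1.x package install; adjust them for your setup.

sudo /opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/logstash.conf
sudo service logstash restart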

Elasticsearch

We don't need to change anything in the Elasticsearch configuration for now, though if you are curious you can find it at /etc/elasticsearch/elasticsearch.yml. One thing to keep in mind is that this ELK setup needs a machine with a fairly high configuration; otherwise you might encounter various errors when Elasticsearch gets full. One workaround is to clear Elasticsearch out whenever it is full.

The following command will remove all indices and bring Elasticsearch back to its initial state.

curl -XDELETE 'http://localhost:9200/_all/'
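If you would rather drop only one day of logs instead of everything, delete a single index. Logstash names its indices logstash-YYYY.MM.dd by default; the date below is just a made-up example.

curl -XDELETE 'http://localhost:9200/logstash-2015.06.01'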

You can read here about optimizing Elasticsearch for memory.

Kibana

You don't have to do much in its configuration file; just make sure it's listening on the correct port number. You can find its configuration file at /opt/kibana/config/kibana.yml
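For reference, the two settings worth checking look like this; these are the Kibana 4.x defaults, and the exact key names are an assumption for your version:

port: 5601
host: "0.0.0.0"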

Now go ahead and enter into your browser the IP of the machine where Kibana is set up along with the port number, or whatever URL you specified in kibana.yml.

Now you can see something like this:
(image: Kibana dashboard)

You can now explore a lot of things from here: visualize logs, compare fields. Go ahead and check out the different settings in Kibana.

That’s it for now.

You are welcome, and do let me know when you get the Elasticsearch, Logstash, Kibana combo pack configured.

Cheerio 🙂