Category Archives: Home Lab

Back to the Lab: ElasticSearch Elastic HQ Plugin

Obviously, Paramedic, from my previous post, is one of my favorites for getting an overview of the cluster. However, it is limited in that it only shows live data, and its statistics stay at a very basic level. So I started looking into tools that could provide more depth on monitoring and stats.

After some extensive searching for something with a little more to offer from a health and feature perspective, I landed on ElasticHQ.

Elastic HQ – http://www.elastichq.org – boasts many features. I am a fan of companies that can have a little fun with their audience, and Elastic HQ does that on their features page. They give you the non-geek version – http://www.elastichq.org/features.html – or, for the nerdier folks who want to geek out, http://www.elastichq.org/feature_list.html.

Let’s keep this non-geek friendly.

First, the install. Elastic HQ has a few options:
1. In the Cloud – It’s free, there are no software installs, and it’s easy to use. This option may work for some home labs; however, if you have a home lab, then you can afford to install this on your ElasticSearch nodes instead – it’s just another lightweight plugin. My opinion is not to use this method.

2. As a Plugin – Easy to install, with no firewall issues since it sits behind your firewall. Just like Paramedic, this is the easiest option, using the built-in plugin installer. It’s as simple as running the following:

bin/plugin -install royrusso/elasticsearch-HQ

Once the install completes, navigate to http://domain:port/_plugin/hq/, hit Connect in the top left corner, and it will connect to the cluster.

The features I leverage in the home lab are mostly limited to the diagnostics Elastic HQ provides.

The Cluster Overview page is the landing page once you connect. The major components you will leverage on this page are highlighted in red. If a node in your cluster is down, you will see the Initializing Shards and Unassigned Shards numbers growing. These stats tell you that ElasticSearch is having trouble distributing shards to the remaining nodes, meaning your cluster is in a degraded state. I use this all the time: for instance, when the default Java heap space proves too small and crashes the service, or when a node is rebooting and, at a glance, I can quickly tell when the nodes are back and ElasticSearch has distributed the shards properly.

[Image: cluster-overview]
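The shard counters on that page come from ElasticSearch’s cluster health API. As a rough sketch (the field extraction and the canned response below are my own illustration, not from the post), you can pull the same numbers from the command line:

```shell
#!/bin/sh
# On a live cluster you would fetch the real numbers with:
#   curl -s 'http://localhost:9200/_cluster/health?pretty'
# A canned (abridged) response is used here so the snippet runs standalone.
health='{"cluster_name":"homelab","status":"yellow","initializing_shards":2,"unassigned_shards":5}'

# Crude sed extraction of the two fields HQ highlights (jq is nicer if installed).
status=$(printf '%s' "$health" | sed 's/.*"status":"\([^"]*\)".*/\1/')
unassigned=$(printf '%s' "$health" | sed 's/.*"unassigned_shards":\([0-9]*\).*/\1/')

echo "status=$status unassigned_shards=$unassigned"
# A yellow/red status with a growing unassigned_shards count means shards
# cannot be placed on a node -- the situation described above.
```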

The Node Diagnostics Information page is a wealth of information. This page is my go-to for a health check. If a node is having issues, you can stop in on this page and diagnose exactly why. If you hover over a statistic, it shows you how it arrives at that number, which is perfect for learning some of the best practices. Maybe your response time is slow due to disk performance? Maybe the latency is too high because of the block size that was written to the underlying media?

[Image: node-diags]

[Image: node-diags-bestpractices]
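Those per-node numbers come from the node stats API. A minimal sketch (the endpoint is real; the canned JSON fragment is illustrative only) of checking the JVM heap figure tied to the default-heap crashes mentioned earlier:

```shell
#!/bin/sh
# Live cluster:  curl -s 'http://localhost:9200/_nodes/stats/jvm?pretty'
# Canned fragment (the real response nests this under each node) so the
# snippet runs standalone.
stats='{"jvm":{"mem":{"heap_used_percent":87}}}'

heap=$(printf '%s' "$stats" | sed 's/.*"heap_used_percent":\([0-9]*\).*/\1/')
echo "heap_used_percent=$heap"
# A node sitting near 100 here is a candidate for the heap-related crashes
# mentioned earlier in this post.
```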

I also leverage the Indices section. This page gives me access to index-specific information as well as some basic index management and cleanup. I have used the Administration page to remove or delete an index that is having issues.

[Image: index-diag]

[Image: index-admin]
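That delete action on the Administration page maps to a single REST call against the index. A sketch with a made-up index name (on a live cluster, the commented curl line performs the actual delete):

```shell
#!/bin/sh
# Hypothetical index name -- substitute whichever index HQ flags as broken.
index="logstash-2016.05.01"
url="http://localhost:9200/$index"

# On a live cluster the actual delete is:
#   curl -XDELETE "$url"
echo "DELETE $url"
```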

This plugin is another must-have, in my opinion, in a home lab or the enterprise.
Thanks

JL

Back to the Lab: ElasticSearch Paramedic Plugin

It has been a while since I updated my site, and I realized that I stopped halfway through my home lab ElasticSearch build. With that, let’s jump right back into it.

Must-Have Plugins!

ElasticSearch is great because of the many plugins that extend its features and functionality. Many of them are rather small in terms of management overhead. I will list a few of the ones I use on a regular basis.

Paramedic – https://github.com/karmi/elasticsearch-paramedic – as far as I am concerned, this has to be one of my top picks. There are many reasons for this, the first being the user interface. It is written in JavaScript and is very lightweight. Just install it using the built-in plugin installer and you’re off to the races.

bin/plugin install karmi/elasticsearch-paramedic

Once installed, navigate to the following: http://localhost:9200/_plugin/paramedic/index.html

Thoughts:
The Stats section is live, and you can set the refresh cycle. The downside is that when you highlight a stat to get the metrics, the results can be difficult to read. Another downside is that it’s not historical – it only shows live stats.

[Image: stats-section]

As you can see in the image above, highlighting a particular metric doesn’t render the information in a helpful manner. I think it would be better if they moved the stats to the far right, where there is some additional white space.

The Nodes section is where you can see live stats on how the cluster is doing and which nodes are under heavy load. In my home lab, I have 4 nodes total: 3 data nodes and one master node.

[Image: nodes-section]
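A master/data split like that is controlled by two settings in elasticsearch.yml. A sketch of the relevant lines (ES 1.x/2.x setting names; the exact role layout here is my reading of the lab, not copied from it):

```yaml
# On the dedicated master node:
node.master: true    # eligible to be elected master
node.data: false     # holds no shards

# On each of the three data nodes:
node.master: false
node.data: true
```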

My favorite feature of Paramedic is the Indices section – it displays the status of replicas, the health of the indices, and their state. I cannot tell you the number of times I have used this to figure out which index is corrupt or having issues.

[Image: indices]
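If the UI is ever unreachable, the same replica/health view is exposed by the _cat API. A sketch that filters a canned, abridged sample (index names and counts are made up) down to the unhealthy indices:

```shell
#!/bin/sh
# Live cluster:  curl -s 'http://localhost:9200/_cat/indices?v'
# Canned, abridged sample so the snippet runs standalone.
sample='health status index               pri rep docs.count
green  open   logstash-2016.05.01   5   1       1000
red    open   logstash-2016.05.02   5   1        900'

# Skip the header row and print any index whose health is not green.
printf '%s\n' "$sample" | awk 'NR > 1 && $1 != "green" { print $3 }'
```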

Overall, this is hands down one of my all-time favorites because of the lightweight nature of the interface and its ease of use. The interface is easy on the eyes, minus the graphical issues noted above, but otherwise it is pretty useful for maintaining your cluster. That lightweight footprint makes it a must-have for home labs and enterprises alike.

Thanks.

JL

Home Lab Dashboard Part 3: Setting up Logstash

Now, when it comes to best practices, I debated whether or not to set up a dedicated Logstash server. I feel as though the CPU load will creep up, and doing triple duty of ElasticSearch, Kibana, and Logstash could cause some serious load. If I need to change that later, I will. Logstash is pretty simple, as it uses config files to enable the ingest, so it is easy enough to move them later.
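As a sketch of what one of those config files might look like (the filename, port, and input type are assumptions for illustration, not from this build):

```conf
# /etc/logstash/conf.d/10-syslog.conf  (hypothetical name)
input {
  tcp {
    port => 5514            # assumed listener port for forwarded logs
    type => "syslog"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]   # same box for now; change this if Logstash moves
  }
}
```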

I am also installing this as a service, as it will be much easier to manage. To keep everything consistent, I plan on using the repo method of install as well.

The repo install guide can be found here.

The first step is downloading the public signing key.

rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch

Create a repo file in the yum.repos.d folder.

vi /etc/yum.repos.d/logstash.repo

Add the following:

[logstash-2.3]
name=Logstash repository for 2.3.x packages
baseurl=https://packages.elastic.co/logstash/2.3/centos
gpgcheck=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1

Once that file is saved, we can go ahead and run the yum installer.

yum install logstash

The final two components are starting the service and configuring it to start on reboot.

service logstash start

chkconfig logstash on

That’s pretty much it from this end. In the next post, we will walk through actually getting the data and ingesting it into an index. Once the data is in an index, we can start carving it up.

Home Lab Dashboard Part 2: Setting up Kibana

Kibana is the data visualization layer for ElasticSearch, allowing you to search particular data sets and build dashboards.

I have been pretty impressed with the Kibana UI; it makes it easy to carve up just about any data. Most of the visualizations are pretty self-explanatory once you get into them a little bit. We will go over those in more detail later as we start building the actual dashboard. For now, let’s get Kibana installed:

Just like the initial ElasticSearch install, I will leverage the yum repos and disable them after setup is complete.

The repo install guide for CentOS can be found here.
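Once it is installed, the main thing to configure is where Kibana finds the cluster. A sketch of the relevant kibana.yml lines (Kibana 4.x key names with default values; verify against your version):

```yaml
server.port: 5601
elasticsearch.url: "http://localhost:9200"
```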


Home Lab Dashboard Part 1: Setting up Elastic Search

The first part was setting up an ElasticSearch server. I thought about creating a “cluster”; however, most home labs won’t need that level of redundancy. The first thing I need to do is set up ElasticSearch and decide on the components I will use.

The components we plan on using:

  • Elastic Search – an open source search and analytics engine, designed to scale horizontally while staying reliable and easy to manage.
  • Kibana – the data visualization component that will help build the dashboard; it is easy to add on top of the search engine.
  • Logstash – collects logs from multiple inputs, adds them to the engine, and organizes them as you see fit.

There are some other components, like Beats, that I will add later, but as of right now this is pretty much the list.
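For reference, the ElasticSearch yum repo file follows the same shape as the Logstash one shown in Part 3. A sketch for the 2.x line (the version branch is an assumption; check the current install guide before using it):

```conf
[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=https://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
```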
