Install ELK Stack on CentOS 7 to centralize log analytics

What is the ELK Stack?

ELK is an acronym formed from the first letters of three open-source products from Elastic: Elasticsearch, Logstash, and Kibana. The three products are used collectively (though each can be used separately), mainly for centralizing and visualizing logs from multiple servers (as many as you want).
  • Elasticsearch is essentially a distributed NoSQL data store that builds on the Lucene search capabilities.
  • Logstash is a log collection pipeline tool that accepts inputs from various sources (e.g. a log forwarder), applies different filters and formatting, and writes the data to Elasticsearch.
  • Kibana is a graphical user interface (GUI) for visualizing Elasticsearch data.
The ELK Stack is the most widely used log analytics solution, beating Splunk’s enterprise software, which had long been the market leader. The ELK Stack is downloaded 500,000 times every month, making it the world’s most popular log management platform. In contrast, Splunk — the historical leader in the space — self-reports 10,000 total customers.
This tutorial is a guide to setting up the ELK stack with Filebeat as the log forwarder to gather syslogs from a remote machine (or as many servers as you want).

The four main components we’ll be setting up:
  • Elasticsearch 5.x:  Stores all of the logs.
  • Logstash 5.x: Processes the incoming logs from a log forwarder, i.e. Filebeat.
  • Kibana: GUI for searching and visualizing logs.
  • Filebeat (the older Logstash Forwarder is also an option): Installed on the client servers that will send their logs to Logstash.
We will install the first three components on a single server, which we will refer to as our ELK Server. Filebeat will be installed on another machine, i.e. the ELK client, whose logs we want to visualize.

Step 0 – Pre-installation tasks

Disable SELinux
$ sudo setenforce 0
The above command puts SELinux into permissive mode for the current session, i.e. until the next reboot – to permanently disable it, set SELINUX=disabled in the /etc/selinux/config file.
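For the permanent change, the config file can be edited in place – a one-liner sketch using sed, assuming the default SELINUX=enforcing line is present (back up the file first if unsure):
$ sudo cp /etc/selinux/config /etc/selinux/config.bak
$ sudo sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
$ getenforce   # reports Permissive for the current session after setenforce 0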
Stop firewalld
Clients will need to connect to the ELK server to send logs (port 5044):
$ sudo systemctl stop firewalld
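Alternatively, if you’d rather keep firewalld running, you can open just the ports this setup uses – a sketch:
$ sudo firewall-cmd --permanent --add-port=5044/tcp   # Beats input
$ sudo firewall-cmd --permanent --add-port=80/tcp     # Nginx-fronted Kibana (or 5601/tcp if you skip Nginx)
$ sudo firewall-cmd --reload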

Step 1 – Install Java

Java usually comes pre-installed on CentOS 7 (Everything); on CentOS 7 Minimal you may need to install it yourself. On CentOS 7 Everything, you can verify it by simply checking the version:
$ java -version
The output:
[nahmed@elk ~]$ java -version
openjdk version "1.8.0_111"
OpenJDK Runtime Environment (build 1.8.0_111-b15)
OpenJDK 64-Bit Server VM (build 25.111-b15, mixed mode)
If you don’t have Java installed, here’s a guide – Install Java 8 on CentOS/RHEL 7.x.
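For a quick route on CentOS 7 Minimal, OpenJDK 8 is available straight from the base repositories (a minimal sketch; the linked guide also covers Oracle Java):
$ sudo yum -y install java-1.8.0-openjdk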
 

Step 2 – Install & Configure Elasticsearch
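
Install Elasticsearch
The installation mirrors the RPM steps shown for Kibana below – a sketch, assuming the Elasticsearch 5.0.2 RPM from the same Elastic artifacts repository:
$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.0.2.rpm
$ sudo rpm --install elasticsearch-5.0.2.rpm
$ sudo systemctl daemon-reload
$ sudo systemctl start elasticsearch
$ sudo systemctl enable elasticsearch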

Elasticsearch verification
[nahmed@elk opt]$ curl -X GET http://localhost:9200
{
  "name" : "H5fcpdg",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "Nub7-l2sRE-TNQPeH8JrZg",
  "version" : {
    "number" : "5.0.2",
    "build_hash" : "f6b4951",
    "build_date" : "2016-11-24T10:07:18.101Z",
    "build_snapshot" : false,
    "lucene_version" : "6.2.1"
  },
  "tagline" : "You Know, for Search"
}

Step 3 – Install & Configure Kibana

Download and install the RPM manually
The RPM for Kibana v5.0.2 can be downloaded from the Elastic website and installed as follows.
For 64-bit
Download the Kibana rpm
$ wget https://artifacts.elastic.co/downloads/kibana/kibana-5.0.2-x86_64.rpm
Install Kibana
$ sudo rpm --install kibana-5.0.2-x86_64.rpm
Start and enable Kibana
$ sudo systemctl daemon-reload
$ sudo systemctl start kibana
$ sudo systemctl enable kibana
This will start Kibana as a service, listening on port 5601 (by default).
The output:
[nahmed@elk opt]$ sudo wget https://artifacts.elastic.co/downloads/kibana/kibana-5.0.2-x86_64.rpm
--2016-12-14 03:20:28--  https://artifacts.elastic.co/downloads/kibana/kibana-5.0.2-x86_64.rpm
Resolving artifacts.elastic.co (artifacts.elastic.co)... 184.73.171.14, 204.236.217.108, 23.21.105.193, ...
Connecting to artifacts.elastic.co (artifacts.elastic.co)|184.73.171.14|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 40846391 (39M) [application/octet-stream]
Saving to: ‘kibana-5.0.2-x86_64.rpm’
100%[=============================================================================================================>] 40,846,391   286KB/s   in 2m 39s
2016-12-14 03:23:10 (250 KB/s) - ‘kibana-5.0.2-x86_64.rpm’ saved [40846391/40846391]
[nahmed@elk opt]$ sudo rpm --install kibana-5.0.2-x86_64.rpm
[nahmed@elk kibana]$ sudo systemctl start kibana
[nahmed@elk kibana]$ sudo systemctl enable kibana
Created symlink from /etc/systemd/system/multi-user.target.wants/kibana.service to /etc/systemd/system/kibana.service.
For 32-bit
$ wget https://artifacts.elastic.co/downloads/kibana/kibana-5.0.2-i686.rpm
$ sudo rpm --install kibana-5.0.2-i686.rpm
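To confirm Kibana is actually listening on its port, you can probe it (a quick check; ss ships with CentOS 7):
$ sudo ss -tlnp | grep 5601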

Step 3.1 – Install Nginx (optional)

You can access your Kibana dashboard directly at http://localhost:5601 – but if you need to access the dashboard from a remote machine, you should set up a reverse proxy, i.e. Nginx. With Nginx you can access the Kibana dashboard externally at http://elk_server_ip (Nginx will be listening on the default port 80). For a test setup you can simply skip this step and use Kibana directly.
Add the EPEL repository
$ sudo yum -y install epel-release
Install Nginx
$ sudo yum -y install nginx
Install httpd-tools
We’ll use it to generate a username/password pair for Kibana.
$ sudo yum -y install httpd-tools
Create admin username and password
Use htpasswd to create an admin user, called “kibanaadmin” (or whatever you want to set), that’ll be required to access the Kibana web interface. The file path must match the auth_basic_user_file directive in the Nginx config below (with the hostname elk, this command creates /etc/nginx/conf.d/elk.htpasswd):
$ sudo htpasswd -c /etc/nginx/conf.d/$(hostname -f).htpasswd kibanaadmin
Enter a password at the prompt. Remember this login, as you will need it to access the Kibana web interface.
The output:
[nahmed@elk opt]$ sudo htpasswd -c /etc/nginx/conf.d/$(hostname -f).htpasswd kibanaadmin
New password:
Re-type new password:
Adding password for user kibanaadmin
Configure Nginx to serve Kibana
Open the Nginx configuration file in an editor of your choice (gedit, vi, vim):
$ sudo gedit /etc/nginx/nginx.conf
Remove the server block at the end of the file (it starts with server {) – it is the last configuration block in the file. Once removed, the last two lines of the file should be:
include /etc/nginx/conf.d/*.conf;
}
Create a new kibana.conf in conf.d
$ sudo vi /etc/nginx/conf.d/kibana.conf
Paste the following lines into the file. Be sure to update server_name to your server’s name and auth_basic_user_file to the path of your authentication file:
server {
    listen 80;
    server_name elk;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/conf.d/elk.htpasswd;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
Save and exit. This configures Nginx to direct your server’s HTTP traffic to the Kibana application, which is listening on localhost:5601. Nginx will also use the elk.htpasswd file that we created earlier and require basic authentication.
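Before starting the service, it’s worth validating the configuration syntax – Nginx ships a built-in check:
$ sudo nginx -t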
Start and enable Nginx
$ sudo systemctl start nginx
$ sudo systemctl enable nginx
The output:
[nahmed@elk ~]$ sudo systemctl start nginx
[nahmed@elk ~]$ sudo systemctl enable nginx
Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.
Verification
Open http://localhost (or http://elk_server_ip from a remote machine) in any browser; it’ll prompt you for the Kibana username and password you set earlier.
Kibana login
After entering kibanaadmin as the username along with the password, the following window will appear:
Kibana Dashboard
You’ll see the warning “Error – Index Patterns: Please specify a default index pattern” at the top of the page, which is fine for now.

Step 4 – Install Logstash

Add the Logstash repo
Create logstash.repo at /etc/yum.repos.d/ and paste the following lines into it.
[logstash-5.x]
name=Elastic repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
To create and edit the file, open it in an editor:
$ sudo gedit /etc/yum.repos.d/logstash.repo
Install Logstash
$ sudo yum install logstash

Step 4.1 – Configure Logstash

Generate SSL certificates
Option 1 – Based on Private IP
Create an SSL certificate based on the IP address of the ELK server. First, add the ELK server’s private IP to /etc/pki/tls/openssl.cnf:
$ sudo gedit /etc/pki/tls/openssl.cnf
Add the following line just below the [ v3_ca ] section:
subjectAltName = IP: 192.168.40.188
Generate a self-signed certificate valid for 3650 days (10 years):
$ cd /etc/pki/tls
$ sudo openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
The output:
[nahmed@elk tls]$ sudo openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
Generating a 2048 bit RSA private key
.........+++
..................................................................................................+++
writing new private key to 'private/logstash-forwarder.key'
-----
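Optionally, inspect the generated certificate to confirm the subjectAltName and validity dates made it in (a quick check with openssl):
$ openssl x509 -in certs/logstash-forwarder.crt -noout -text | grep -A1 'Subject Alternative Name'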
Option 2 – Based on domain (FQDN)
Replace ELK_server_fqdn in the command below with your ELK server’s fully-qualified domain name.
$ cd /etc/pki/tls
$ sudo openssl req -subj '/CN=ELK_server_fqdn/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
Logstash input, filter, output files
Logstash configuration files are written in a JSON-like format and reside in /etc/logstash/conf.d. The configuration consists of three sections: inputs, filters, and outputs – usually specified as separate files.
input.conf 
$ sudo vi /etc/logstash/conf.d/01-beats-input.conf
Insert the following lines into it. This is what tells Logstash how to process beats coming from clients: it will listen on TCP port 5044, i.e. the log forwarder will connect to this port to send logs. Make sure the certificate and key paths match the paths used in the previous step:
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
filter.conf
$ sudo vi /etc/logstash/conf.d/01-beats-filter.conf
This filter looks for logs that are labeled as “syslog” type (by Filebeat) and uses grok to parse the incoming syslog lines to make them structured and queryable.
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output.conf
$ sudo vi /etc/logstash/conf.d/01-beats-output.conf
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
In output.conf we configured Logstash to store the beats data in Elasticsearch, which is running at localhost:9200, in an index named after the beat that sent it (filebeat, in our case).
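Before starting the service, you can optionally ask Logstash to validate the combined configuration – a sketch, assuming the default RPM install paths for Logstash 5.x; look for “Configuration OK” in the output:
$ sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/ --config.test_and_exit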
Start and enable Logstash
$ sudo systemctl daemon-reload
$ sudo systemctl start logstash
$ sudo systemctl enable logstash

Step 5 – Install Filebeat (on the Client Servers)

We will show you how to do this for Client #1 (repeat for Client #2 afterwards, changing paths if applicable to your distribution).
Step 5.1 – Copy the SSL certificate from the ELK server to the client(s)
$ sudo scp /etc/pki/tls/certs/logstash-forwarder.crt root@192.168.40.175:/etc/pki/tls/certs/
The output:
[nahmed@elk tls]$ sudo scp /etc/pki/tls/certs/logstash-forwarder.crt root@192.168.40.175:/etc/pki/tls/certs/
The authenticity of host '192.168.40.175 (192.168.40.175)' can't be established.
ECDSA key fingerprint is 42:81:19:1a:4f:84:cb:37:81:e8:8c:dd:8f:ac:7f:ff.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.40.175' (ECDSA) to the list of known hosts.
root@192.168.40.175's password:
logstash-forwarder.crt                                                                                               100% 1241     1.2KB/s   00:00
Note: Perform the following steps 5.2, 5.3, 5.4, 5.5, and 5.6 on the client machine(s), i.e. the ones sending logs to the ELK server.
Step 5.2 – Import the Elasticsearch public GPG key
$ sudo rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch
Step 5.3 – Create and edit a new yum repository file for Filebeat
$ sudo vi /etc/yum.repos.d/filebeat.repo
Add the following repository configuration:
[filebeat]
name=Filebeat for ELK clients
baseurl=https://packages.elastic.co/beats/yum/el/$basearch
enabled=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
gpgcheck=1
Step 5.4 – Install the Filebeat package
$ sudo yum -y install filebeat
Step 5.5 – Configure Filebeat
Edit Filebeat configuration file
$ sudo vi /etc/filebeat/filebeat.yml
filebeat:
  prospectors:
    -
      paths:
        - /var/log/secure
        - /var/log/messages
      #  - /var/log/*.log
      input_type: log
      document_type: syslog
  registry_file: /var/lib/filebeat/registry

output:
  logstash:
    hosts: ["elk_server_private_ip:5044"]
    bulk_max_size: 1024
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

shipper:

logging:
  files:
    rotateeverybytes: 10485760 # = 10MB
Replace ‘elk_server_private_ip’ with the private IP of your ELK server, e.g. hosts: [“192.168.40.188:5044”].
Note: Filebeat’s configuration file is in YAML format, which means that indentation is very important! Be sure to use the same number of spaces that are indicated in these instructions.
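You can sanity-check the edited file before starting the service – a sketch using Filebeat’s config-test flag (valid for the 1.x Filebeat installed above; the flag has changed in later versions):
$ sudo filebeat -configtest -c /etc/filebeat/filebeat.yml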
Step 5.6 – Start and enable Filebeat
$ sudo systemctl start filebeat
$ sudo systemctl enable filebeat

Step 6.1 – Test Filebeat

If the installation has gone fine, Filebeat should be pushing logs from the specified files to the ELK server. Filebeat (running on the client machine) sends data to Logstash, which loads it into Elasticsearch in the specified format (01-beats-filter.conf).
On your ELK Server, verify that Elasticsearch is indeed receiving the data by querying for the Filebeat index with this command:
$ curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'
You may get a large output; the first few lines should look like:
{
  "took" : 26,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 7569,
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "filebeat-2016.12.15",
        "_type" : "syslog",
        "_id" : "AVkCsyC0Q3qdgdV460Aq",
        "_score" : 1.0,
        "_source" : {"message" : "RpcIn: sending 67 bytes", "@version" : "1", "@timestamp" : "2016-12-15T12:37:47.194Z", "beat" : {"hostname" : "localhost.localdomain", "name" : "localhost.localdomain"}, "count" : 1, "fields" : null, "input_type" : "log", "offset" : 28099, "source" : "/var/log/vmware-tools-upgrader.log", "type" : "syslog", "host" : "localhost.localdomain"}
      },
....
....
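If the hits count is zero, or the query errors out, a quick way to see which indices actually exist is Elasticsearch’s cat API:
$ curl -XGET 'http://localhost:9200/_cat/indices?v'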

Step 6.2 – Using Kibana

Open the Kibana dashboard, enter filebeat-* in the Index name or pattern field, and then click Create:
Adding index in Kibana
Set filebeat-* as the default index – otherwise you’ll keep getting the warning mentioned earlier:
Setting default index
Finally, open ‘Discover’ in the left menu. This view lets you inspect the logs by adding fields: simply hover over a field and click ‘add’ (the button appears on hover):
Kibana log view
By default, Kibana displays the records processed during the last 15 minutes (see the upper right corner), but you can change that behavior by selecting another time frame:
Kibana log time duration


Common issues

SSL client failed to connect

Filebeat (the log forwarder) may fail to send logs to the ELK server (Logstash); one probable reason is improper SSL configuration. In that case you may find the following error in the Filebeat logs, or via the journalctl -xe command:

 /usr/bin/filebeat[14653]:transport.go:125: SSL client failed to connect with: read tcp 192.168.40.175:59360->192.168.40.188:5044: i/o timeout

Solution:

  • Verify that SELinux is disabled.
  • Update your certificates, i.e. generate new ones.
  • Sync the time on the ELK server and the clients – Syncing date and time using ntpd.
  • As a last resort, disable SSL on Logstash – comment out/remove the ‘ssl => true’ line in Logstash’s input config (01-beats-input.conf in our case), along with the tls section in the clients’ filebeat.yml. A quick handshake test is sketched below.
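To test the SSL path from a client before changing anything, you can attempt a TLS handshake against the Logstash port directly, using the copied CA certificate (replace the IP with your ELK server’s):
$ openssl s_client -connect 192.168.40.188:5044 -CAfile /etc/pki/tls/certs/logstash-forwarder.crt
A “Verify return code: 0 (ok)” at the end of the output means the certificate chain is fine, pointing the blame elsewhere (e.g. firewall or clock skew).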
no living connections in the connection pool

If you see the following error in your Logstash logs:

[2016-12-15T12:59:16,489][ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>64}

Solution:

Comment out/remove the ‘sniffing => true’ line in Logstash’s output config (01-beats-output.conf in our case), then restart the Logstash service:

$ sudo systemctl restart logstash
