ELK is an acronym for three open source projects: Elasticsearch, Logstash, and Kibana. Elasticsearch is a search and analytics engine. Logstash is a server‑side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to a "stash" like Elasticsearch. Kibana lets users visualize Elasticsearch data with charts and graphs.

Elasticsearch requires Java to be installed on the server in order to function correctly. You can use the steps outlined in this guide to install Java on Ubuntu 16.04/18.04.
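For reference, on a fresh Ubuntu 16.04/18.04 server the OpenJDK 8 packages from the default repositories are sufficient for Elasticsearch 6.x. A minimal route is to install OpenJDK 8 and confirm the version:

sudo apt install -y openjdk-8-jdk
java -version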

Deploying your cloud server
If you have not already registered with Cloudwafer, you should begin by getting signed up. Take a moment to create an account after which you can easily deploy your own cloud servers.

Once you have signed up, log into your Cloudwafer Client Area with the password provided in your email and deploy your Cloudwafer cloud server.

Updating System Packages
It is always recommended that you update the system to the latest packages before beginning any major installations. This is done with the command below:

sudo apt-get update && sudo apt-get upgrade

Step 1: Configure ELK repository
Before we begin our installation, we need to set up the ELK stack repository from which the individual packages will be sourced. We will use the official repository from Elastic, as shown below:

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list

Step 2: Install Elasticsearch server
Issue the command below to install the Elasticsearch server:

sudo apt install -y elasticsearch

After the installation, issue the commands below to start the Elasticsearch server and enable it to start on boot:

sudo systemctl start elasticsearch
sudo systemctl enable elasticsearch

Next, give Elasticsearch a couple of minutes to complete initialization, then issue the command below to query the Elasticsearch REST interface and confirm it is running:

curl -X GET http://localhost:9200
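If Elasticsearch is up and running, the call returns a JSON document along the lines of the sample below (the node name, cluster UUID, and version number will differ on your server):

{
  "name" : "node-1",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "...",
  "version" : {
    "number" : "6.8.0",
    ...
  },
  "tagline" : "You Know, for Search"
}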

Step 3: Install Logstash
Logstash is an open source, server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to our "stash" (Elasticsearch in this case). Issue the command below to install Logstash:

sudo apt install logstash

Step 3.1: Create SSL certificate for Logstash (Optional)
For secure transmission of logs, we can create an SSL certificate for Logstash, issued either for the server's hostname or for its IP address.

  • Option 1: (Hostname or domain name): Navigate to the OpenSSL directory using the command below:

    cd /etc/ssl/
    

    Next, create the SSL certificate using OpenSSL with the command below, replacing cloudwaferlabs.com.ng with the hostname of your Logstash server:

    sudo openssl req -x509 -nodes -newkey rsa:2048 -days 365 -keyout logstash-forwarder.key -out logstash-forwarder.crt -subj /CN=cloudwaferlabs.com.ng
    

Note that the generated logstash-forwarder.crt should be copied to all client servers which will be sending logs to our logstash server.

  • Option 2: (IP address): First, we need to add the IP address of our Logstash server to subjectAltName in the OpenSSL configuration file as shown below:

    sudo nano /etc/ssl/openssl.cnf

Using Ctrl + W in nano, search for subjectAltName and insert the IP Address of your Logstash server.

    subjectAltName = IP:***.**.***.***

Next, navigate to the OpenSSL directory and create the SSL certificate as shown below:

cd /etc/ssl/
sudo openssl req -x509 -days 365 -batch -nodes -newkey rsa:2048 -keyout logstash-forwarder.key -out logstash-forwarder.crt

Note that the generated logstash-forwarder.crt should be copied to all client servers which will be sending logs to our logstash server.

Logstash expects the SSL key in PKCS#8 format, so we need to convert the key we just generated and adjust its permissions. Issue the commands below to do this:

sudo openssl pkcs8 -in logstash-forwarder.key  -topk8 -nocrypt -out logstash-forwarder.key.pem
sudo chmod 644 /etc/ssl/logstash-forwarder.key.pem
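As noted above, the logstash-forwarder.crt file must be available on every client server that will ship logs to Logstash. One way to copy it over is with scp (a sketch; replace user and client-server with the login and hostname of your own client machine):

scp /etc/ssl/logstash-forwarder.crt user@client-server:/tmp/

On the client, move the file to a convenient location such as /etc/ssl/ so that Filebeat can reference it later.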

Step 3.2: Configure Logstash
Create a configuration file in the /etc/logstash/conf.d/ directory for Logstash to listen on port 5044 for incoming logs and add the SSL certificate details as shown below:

sudo nano /etc/logstash/conf.d/logstash.conf

Insert the following in the file:

input {
  beats {
    port => 5044

    # Set to false if you are not using SSL
    ssl => true

    # Remove the two lines below if SSL is not used
    ssl_certificate => "/etc/ssl/logstash-forwarder.crt"
    ssl_key => "/etc/ssl/logstash-forwarder.key.pem"
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGLINE}" }
    }

    date {
      match => [ "timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
  stdout {
    codec => rubydebug
  }
}

Save the file before you exit.
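Before restarting the service, you can optionally check the pipeline configuration for syntax errors using Logstash's built-in config test (the paths below assume the default Ubuntu package layout):

sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit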

Next, restart Logstash and enable it to start on boot using the commands below:

sudo systemctl restart logstash
sudo systemctl enable logstash

We can view our Logstash logs using the command below:

sudo cat /var/log/logstash/logstash-plain.log
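You can also confirm that Logstash is listening for Beats connections on port 5044:

sudo ss -tlnp | grep 5044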

Step 4: Install Kibana
Kibana is an open source data visualization plugin for Elasticsearch. It provides visualization capabilities on top of the content indexed on an Elasticsearch cluster. Users can create bar, line and scatter plots, or pie charts and maps on top of large volumes of data. Issue the command below to install Kibana:

sudo apt install -y kibana

Kibana listens only on localhost by default, which means the web interface cannot be accessed from external machines. We can change this by editing the Kibana configuration file:

sudo nano /etc/kibana/kibana.yml

Uncomment the server.host line and set it to your server's IP address as shown below:

server.host: "***.**.***.***"

If Elasticsearch and Kibana are on different machines, you will also need to update the line below with the address of your Elasticsearch server:

elasticsearch.url: "http://localhost:9200"

Next, restart Kibana and enable it to start on boot using the commands below:

sudo systemctl restart kibana
sudo systemctl enable kibana
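Kibana listens on port 5601 by default, so once the service is running the web interface should be reachable at http://your-server-ip:5601. You can check the service state with:

sudo systemctl status kibana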

Step 5: Install Filebeat
Beats is the platform for single-purpose data shippers. They send data from hundreds or thousands of machines and systems to Logstash or Elasticsearch.

Filebeat comes with internal modules (auditd, Apache, NGINX, System, MySQL, and more) that simplify the collection, parsing, and visualization of common log formats down to a single command.

First, install HTTPS support for apt.

sudo apt install -y apt-transport-https

Next, set up the Elastic repository for Filebeat (this is the same repository added in Step 1; it is needed again if Filebeat is being installed on a separate client server):

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list

We can now install Filebeat using the following command:

sudo apt update
sudo apt install -y filebeat
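Filebeat still needs to be told where to ship its data. As a rough sketch (the Logstash server address is a placeholder for your own), edit /etc/filebeat/filebeat.yml, comment out the default output.elasticsearch section, and point the Logstash output at your Logstash server and the certificate copied earlier:

output.logstash:
  hosts: ["your-logstash-server:5044"]
  ssl.certificate_authorities: ["/etc/ssl/logstash-forwarder.crt"]

Then enable one of Filebeat's built-in modules (for example the system module, which collects syslog and auth logs) and start the service:

sudo filebeat modules enable system
sudo systemctl start filebeat
sudo systemctl enable filebeat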

You can read more on installing and upgrading the ELK stack in the official documentation.