ELK server setup





An ELK server has been set up to capture Phoenix logs and let us view them with Kibana. Logstash-forwarder runs on the Phoenix servers whose logs we want to capture; it sends the logs to Logstash on the ELK server, which stores them in ElasticSearch so they can be viewed in Kibana.


Kibana dashboards let us view time-based log entries for events happening on Phoenix servers. These can include anything we want, including errors, cache clears and site activity.

This example shows the frequency of order saves, node saves and orders placed on the front end over time:

It is possible to zoom in on particular time windows to examine more closely the events during that time. To do this, just drag a selection over any graph.

A general overview dashboard has been set up here: http://logs.atdtravel.com/#/dashboard/elasticsearch/phoenix

However, I have noticed that large dashboards (with many queries and panels) can be slower than small ones, so when investigating a specific issue it may be useful to create new, dedicated dashboards containing only the required queries and panels.

Technical setup


The server setup is as follows:

Phoenix servers and Logstash-forwarder

Apps on the Phoenix servers, including Apache, PHP, git commits, Drupal (watchdog) and the Phoenix code itself, all write text logs to files in /var/log/. These are then forwarded to Logstash on the ELK server by Logstash-forwarder.

Logstash-forwarder on the Phoenix servers is configured with the file /opt/logstash-forwarder/conf/logstash-forwarder.conf and the syntax is very simple. After making changes, restart Logstash-forwarder with "sudo /etc/init.d/logstash-forwarder restart". You check that it's working using "/etc/init.d/logstash-forwarder status" and by looking at the log file /var/log/logstash-forwarder, which logs each event sent to logstash on the ELK server.
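For reference, a minimal logstash-forwarder.conf of the standard shape looks something like this. The server hostname and the watched log file here are illustrative placeholders, not copied from our live config; the SSL path matches the one described below:

```json
{
  "network": {
    "servers": [ "logs.example.com:5000" ],
    "ssl ca": "/opt/logstash-forwarder/ssl/logstash-forwarder.crt",
    "timeout": 15
  },
  "files": [
    {
      "paths": [ "/var/log/httpd/error_log" ],
      "fields": { "type": "apache-error" }
    }
  ]
}
```

The "fields" block tags each event with a type, which Logstash can then use to pick the right grok pattern.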

The VM-based web servers (currently web1-web6) all use the Logstash-forwarder.conf file from config/logstash-forwarder/ in our git repository. The config on the physical servers w4 and db1 is a bit more complicated: since the OS on w4 is too old to install Logstash-forwarder on, the directory /var/log/ is shared out to db1 over NFS and Logstash-forwarder on db1 is set up to forward the logs for both w4 and db1.

Connections from Logstash-forwarder to Logstash require the use of SSL certificates linked to the IP address of the server. When the IP address changes, we need to update the certificates and deploy them on each server using Logstash-forwarder. See Jira-3685 for details on how this is done. When the certificates are updated, they should be updated in git too, in config/logstash-forwarder/.

ELK server: Logstash

Logstash parses incoming log records from Logstash-forwarder and stores the records in daily indexes in ElasticSearch. This lets us extract meaningful information from each log entry line.

This parsing is done using grok, and the patterns we are using are defined in /etc/logstash/patterns.d/atd_custom on the ELK server. When writing new patterns, it's useful to use an online grok tester and to be aware of the predefined patterns.
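As an illustration of the mechanics (the pattern name and log format here are hypothetical, not our real ones), a custom pattern file entry might look like:

```
ATD_CACHE_CLEAR cache clear by %{WORD:user} on %{HOSTNAME:host}
```

and the matching grok filter in the Logstash config would reference it by name:

```
filter {
  grok {
    patterns_dir => "/etc/logstash/patterns.d"
    match => [ "message", "%{ATD_CACHE_CLEAR}" ]
  }
}
```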

Log entries not matching a grok pattern are still sent to ElasticSearch and are visible in Kibana, but the entire log line is indexed, rather than it being broken down into more useful pieces of data.

The main config file for Logstash is /etc/logstash/conf.d/atd.conf on the ELK server; after making changes you need to restart Logstash with "sudo /etc/init.d/logstash restart".

ELK server: ElasticSearch

ElasticSearch holds all the data and makes it quickly searchable. 

Sometimes ElasticSearch crashes, in which case Kibana will display an error message. ElasticSearch can be restarted by running "sudo /etc/init.d/elasticsearch restart" on the ELK server.

ElasticSearch has a REST API and can be queried from the command line using curl. This command lists the indexes (collections of data) on the server: "curl http://localhost:9200/_stats/indexes?pretty=1".

ELK server: Kibana

Kibana is the tool used to view and visualise (make graphs from) the data. It can be accessed at http://logs.atdtravel.com using the login details above.

Data is searched using queries and the results can then be filtered. When setting up dashboards it's possible to "pin" queries, at which point you give them a name and refer to them in panels (data visualisations). 

Big dashboards can get slow, so when trying to solve particular issues, it might be worth setting up dedicated dashboards with only the required queries and panels.


See the Servers page for the latest server details.

If the servers are not displaying on the Kibana page, it may be that Logstash-forwarder cannot read the log files in order to forward them. Check that the permissions on the httpd folder are as follows:

chmod 755 /var/log/httpd


Sometimes Kibana fails to load. The logging server is currently a little under-resourced and has to deal with a lot of log messages. This can often be fixed by restarting ElasticSearch:

sudo /etc/init.d/elasticsearch restart

If the problem persists and you cannot get the service back up, it might be necessary to reboot the server.


I have added a script to check whether a service is running. If the service is not running, the script attempts to restart it. It has been added to crontab to check the logstash and elasticsearch services, and it requires at least one argument. However, I noticed that elasticsearch can still have a process while being marked as 'dead'. This is a bizarre situation, so I've tried to code in a solution that checks the status message of the job and, if it is 'dead', removes the process before the service is restarted. I can't simulate this scenario, so we'll have to wait and see whether the script keeps the services up and running.

# Logstash service check.
0 * * * * /bin/bash /root/scripts/service_running.sh logstash /var/run/logstash.pid

# Elasticsearch service check.
0 * * * * /bin/bash /root/scripts/service_running.sh elasticsearch /var/run/elasticsearch/elasticsearch.pid
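As a rough sketch of what the script does (the real /root/scripts/service_running.sh may differ; the decide_action helper name is mine), it reads the output of "service <name> status", and either leaves a running service alone, restarts a stopped one, or removes the stale pid file first when the service is marked 'dead':

```shell
#!/bin/bash
# Hedged sketch of the service-check script described above, not the real one.

# Decide what to do based on the output of "service <name> status".
decide_action() {
  local status_line="$1"
  case "$status_line" in
    *dead*)    echo "cleanup-restart" ;;  # process marked dead: clear stale pid file, then start
    *running*) echo "none"            ;;  # healthy, leave it alone
    *)         echo "restart"         ;;  # stopped or unknown: try to start it
  esac
}

# Main logic only runs when a service name argument is supplied (hence the
# "requires at least one argument" note above).
if [ $# -ge 1 ]; then
  name="$1"
  pidfile="${2:-/var/run/$name.pid}"
  status_out="$(service "$name" status 2>&1)"
  case "$(decide_action "$status_out")" in
    cleanup-restart) rm -f "$pidfile"; service "$name" start ;;
    restart)         service "$name" start ;;
    none)            : ;;  # nothing to do
  esac
fi
```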

Related links


For installation you can use the Vagrant script; the VirtualBox machine name is logger.

After installing the Logger Box you have to rebuild your d6phx box, because we are going to install logstash-forwarder on it.

We are using Logstash 1.4.2 and ElasticSearch 1.1.1.

To view Kibana:

To view ElasticSearch:


Debug Logstash forwarder
To test whether Logstash-forwarder is working, use:

sudo service logstash-forwarder stop (to stop the logstash forwarder)

sudo /opt/logstash-forwarder/bin/logstash-forwarder.sh -config /etc/logstash-forwarder 

The above command starts Logstash-forwarder in the foreground and displays any errors in the logstash-forwarder config file.

This command will give you an error if it cannot connect to Logstash on the logger box.

On success, this command outputs each log event it sends to the terminal.

Logstash and Logstash-forwarder must use the same port, and both must be up so they can communicate with each other. If they still don't work, restart both boxes.


Debug Logstash

Logstash is installed on the Logger box.

To test whether Logstash is working, use:

sudo /opt/logstash/bin/logstash -e 'input { stdin { } } output { stdout {} }'
Enter something like "hello world" and it will be echoed back with a timestamp:
2013-11-21T01:22:14.405+0000 hello world

This verifies that Logstash is working on its own.

What if the command above doesn't work?

If you receive an error message like the one below:

- LoadError: Could not load FFI Provider: (NotImplementedError) FFI not available: null

The solution: find the Logstash install location (updating the version number and path below as appropriate), then set a usable Java temp directory:

- cd logstash-1.4.2/bin
- vim logstash.lib.sh
- add JAVA_OPTS="$JAVA_OPTS -Djava.io.tmpdir=/path/to/somewhere"
- mkdir /path/to/somewhere
- start logstash


Now, to test whether Logstash is pushing the logs to ElasticSearch, use:

sudo /opt/logstash/bin/logstash -e 'input { stdin { } } output { elasticsearch { host => "" } }'

Enter something like Hello world

The above command adds the log entry to ElasticSearch. You can check whether it was added by using:

curl ''

This command outputs the number of hits; the hits count increments each time the previous command adds a log entry.

You can also check from the browser that the logs are reaching ElasticSearch.


Debug Elasticsearch

To debug ElasticSearch from the terminal, create a test index with some analysis settings:

curl -X PUT -d '{
  "settings": {
    "number_of_shards": 2,
    "number_of_replicas": 0,
    "analysis": {
      "filter": {
        "autocomplete_filter": {
          "type": "edge_ngram",
          "min_gram": 1,
          "max_gram": 20
        },
        "english_stop": {
          "type": "stop",
          "stopwords": "_english_"
        },
        "english_stemmer": {
          "type": "stemmer",
          "language": "english"
        },
        "english_possessive_stemmer": {
          "type": "stemmer",
          "language": "possessive_english"
        },
        "light_english_stemmer": {
          "type": "stemmer",
          "language": "light_english"
        }
      },
      "analyzer": {
        "autocomplete": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [
            "autocomplete_filter"
          ]
        }
      }
    }
  }
}'

The command above creates an index in ElasticSearch with those settings. You can check the result from the terminal using:

curl ''

And from the browser.


Debugging Kibana

If Kibana is not coming up in the browser, you might need to restart Apache.

If Kibana displays an empty dashboard, you might have to change the port in config.js if you have changed it in elasticsearch.yml:

sudo vi /var/www/html/config.js

If the ports are fine and you still get an empty dashboard, you may need to generate some new log entries, as only newly received logs will be displayed.

Test whether Logstash and ElasticSearch are working.

If both Logstash and ElasticSearch are working fine, check the elasticsearch.yml config. Make sure network.host is not localhost.
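For example, in /etc/elasticsearch/elasticsearch.yml (the value here is illustrative; any address the browser can reach works, since Kibana queries ElasticSearch from the browser side):

```yaml
# Bind ElasticSearch to all interfaces, not just the loopback address
network.host: 0.0.0.0
```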




Logstash-forwarder says it can't connect to the host

Possible solutions: restart both boxes, or make sure Logstash and Logstash-forwarder are using the same port.

Logstash stops adding logs after adding some logs

Possible solution: modify the memory limits and other settings in elasticsearch.yml
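As a sketch of the kind of change meant here (these are real ElasticSearch 1.x settings, but the values are illustrative, not our tuned ones):

```yaml
bootstrap.mlockall: true           # lock the JVM heap in RAM to avoid swapping
indices.fielddata.cache.size: 40%  # cap fielddata so queries can't exhaust the heap
```

The overall heap size itself is usually set via the ES_HEAP_SIZE environment variable rather than in elasticsearch.yml.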



For debugging use Debugging ELK


Kibana serves as a frontend for viewing ElasticSearch data. By default, Kibana comes with a Logstash error-logging dashboard.

For more information and details, see http://www.elasticsearch.org/guide/en/kibana/current/introduction.html

On Vagrant we have used Apache to host Kibana. Kibana can also be served with nginx.

Kibana has a settings file, which can be edited using sudo vi /var/www/html/config.js


Elastic Search

For more information and details about elastic search visit http://www.elasticsearch.org/overview/elasticsearch/

ElasticSearch has a settings file in which we configure memory and other settings. To edit it in the terminal, use: sudo vi /etc/elasticsearch/elasticsearch.yml

During installation Logstash was hanging, so we have modified some settings in elasticsearch.yml.



Logstash serves as the backend feeding ElasticSearch. In our system, Logstash runs on the Logger box and interacts with ElasticSearch to index all the logs, and Kibana is used to display them on the front end.

Logstash is installed on the logger box and listens on port 5000; Logstash-forwarder on d6phx communicates with it on that port to send logs.

Logstash has several conf files, which can be found in /etc/logstash/conf.d/

To change the port on which Logstash listens for Logstash-forwarder, edit the 01-lumberjack-input.conf file in /etc/logstash/conf.d/

01-lumberjack-input.conf is used for the input to logstash

30-lumberjack-output.conf is used for the output from Logstash.
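A minimal input config of that shape might look like this (a sketch rather than our exact file; the port and certificate paths are the ones mentioned elsewhere on this page):

```
input {
  lumberjack {
    port => 5000
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
```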


Logstash Forwarder 

Logstash-forwarder is a lightweight log-shipping agent which we are using on the d6phx box; it can be used on any other box from which we want to capture log files. Logstash-forwarder communicates with Logstash on port 5000; if you want to change the port, you have to change it in both the Logstash conf files and the logstash-forwarder config file.

Logstash-forwarder has a settings file which holds the details for changing the port and configuring which log files we want to index in ElasticSearch.

If you want to view a new type of log in Kibana, you have to change the logstash-forwarder config file.

To edit the Logstash-forwarder config file in the terminal, use: sudo vi /opt/logstash-forwarder/conf/logstash-forwarder.conf


SSL certificate for both Logstash and Logstash-forwarder

In the Logstash-forwarder conf, we need to set up an SSL certificate and key. We store them in /opt/logstash-forwarder/ssl/ on the d6phx box and /etc/pki/tls/ on the logger box; both boxes can share the same certificate. We need to generate a self-signed SSL certificate on the logger box with the openssl command. Since we generate the certificate based on the IP address of the logger box, we need a custom config for openssl. To do that, we first create a new cnf file with the command:

Logger box:

Steps for the logger box SSL setup:

sudo touch /etc/ssl/notsecure.cnf

After that, copy and paste the following config to the file:

[req]
distinguished_name = req_distinguished_name
x509_extensions = v3_req
prompt = no

[req_distinguished_name]
C = TG
ST = Togo
L = Lome
O = Private company
CN = *

[v3_req]
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer
basicConstraints = CA:TRUE
subjectAltName = @alt_names

[alt_names]
DNS.1 = *
DNS.2 = *.*
DNS.3 = *.*.*
DNS.4 = *.*.*.*
DNS.5 = *.*.*.*.*
DNS.6 = *.*.*.*.*.*
DNS.7 = *.*.*.*.*.*.*
IP.1 =
IP.2 =

The next step is to generate the SSL certificate with the custom config using the command below (you may need to run it with sudo). The openssl command uses the '-config' arg to point at /etc/ssl/notsecure.cnf; '-nodes' means the private key is left unencrypted (no passphrase); '-days' configures how long the certificate lasts; '-keyout' writes the new .key file and '-out' writes the new certificate file.

cd /etc/pki/tls; openssl req -x509 -config /etc/ssl/notsecure.cnf -batch -nodes -days 3650 -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt

After both the key and crt are created, we also need to store them on the d6phx box in /opt/logstash-forwarder/ssl/.

Logstash-forwarder: d6phx

Steps for adding SSL on the d6phx box

Now that the SSL certificates are working on the logger box, we need to copy them to Logstash-forwarder on the d6phx box.

Go to the logger box and copy the SSL files to /vagrant/software/ (the /vagrant/software/ folder is useful for storing files so we can access them from any vagrant box):

cd /etc/pki/tls

sudo cp private/logstash-forwarder.key /vagrant/software/
sudo cp certs/logstash-forwarder.crt /vagrant/software/

Go back to the d6phx box and check that the files exist in the /vagrant/software folder:

cd /vagrant/software


Copy the files from /vagrant/software into /opt/logstash-forwarder/ssl/:

sudo cp logstash-forwarder* /opt/logstash-forwarder/ssl/ 

IMPORTANT — Don't forget to restart both logstash-forwarder and logstash.

To create an alias for the ELK server, go to your phoenix vagrant box:

sudo vi /etc/hosts

then add logs.dev

logs.dev is an alias so it can be whatever you like
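The hosts entry could look like this (substitute your logger box's actual IP address for the placeholder):

```
# /etc/hosts on the phoenix vagrant box
<logger-box-ip>   logs.dev
```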


Logstash-forwarder sends the logs to Logstash on the Logger box, which indexes them all in ElasticSearch. Kibana, hosted on the Apache server, displays the dashboard for viewing the logs.