ELK Stack: A Tutorial to Install Elasticsearch and Kibana on Docker
The Elastic Stack, formerly known as the ELK Stack, comes up in nearly every conversation about log management solutions. But what exactly is the Elastic Stack, and why do so many people prefer it to other log management platforms?
What is the ELK Stack?
Elasticsearch, Logstash, and Kibana are three open-source software tools that, when combined, form a comprehensive solution for gathering, organizing, and analyzing log data from on-premises or cloud-based IT environments.
Elasticsearch
The ELK or Elasticsearch stack is built on Elasticsearch, a prominent full-text search engine. It is a free and open-source search and analytics engine, first released in 2010 and built on the Apache Lucene library. DevOps teams can utilize Elasticsearch to index, query, and analyze log data from different sources within complicated IT systems.
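As a quick illustration of that indexing and querying workflow, the two calls below store a log document and then search for it over Elasticsearch's HTTP API (a minimal sketch, assuming a local cluster reachable without authentication, as in the simplified Compose example later in this tutorial; the logs index name and the fields are made up for the example):
# Index a single log document into an example "logs" index
curl -X POST "http://localhost:9200/logs/_doc?pretty" \
  -H "Content-Type: application/json" \
  -d '{"level": "error", "message": "disk usage above 90%"}'
# Search that index for error-level entries
curl "http://localhost:9200/logs/_search?q=level:error&pretty"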
Logstash
Logstash is a free and open-source log aggregator and processor that operates by reading data from several sources and transferring it to one or more storage or stashing destinations. Logstash is a server-side data processing pipeline that can ingest logs from various data sources, parse and convert the log data, and then deliver it to an Elasticsearch cluster for indexing and analysis. Logstash comes with ready-to-use inputs, filters, codecs, and outputs to let you extract useful information from your logs.
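A minimal pipeline sketch makes the input/filter/output model concrete; the port, grok pattern, and index name below are illustrative assumptions rather than required values:
# logstash.conf - a minimal illustrative pipeline
input {
  beats {
    port => 5044                     # receive events, e.g. from Filebeat
  }
}
filter {
  grok {
    # split a "LEVEL message" log line into structured fields
    match => { "message" => "%{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "app-logs"              # example index name
  }
}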
Kibana
Kibana is a free, open-source analysis and visualization layer for Elasticsearch and Logstash. Users can investigate aggregated log data stored in Elasticsearch indices with Kibana, which makes searching, analyzing, and visualizing massive amounts of data, and detecting trends and patterns, simple.
Why Should You Use ELK for Log Analytics and Management?
The ELK Stack has grown in popularity as a business log management tool. The following are some of the reasons why so many DevOps teams use the ELK stack for logs:
Logs are critical – For software-dependent enterprises, log analytics gives crucial visibility into IT assets and infrastructure, addressing use cases such as cloud service monitoring, DevOps application troubleshooting, and security analytics. ELK gives these enterprises the tools to monitor increasingly complex IT infrastructures.
Open-Source Alternative – Elasticsearch, Logstash, and Kibana are all open-source software. That means they’re available for free download, and users can create plug-ins and extensions and modify the source code. It’s simple for enterprises to start using the ELK stack for log analytics because there are no software licensing costs.
Proven Use Cases – Some of the world’s largest and most well-known IT organizations, such as LinkedIn and Netflix, have employed the ELK stack for log management.
How to Install Elasticsearch and Kibana with Docker
To begin utilizing Elasticsearch for log management, you’ll need to install and configure it.
Pulling the Image:
Getting Elasticsearch for Docker is as simple as running a docker pull command against the Elastic Docker registry.
docker pull docker.elastic.co/elasticsearch/elasticsearch:8.2.0
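Since this tutorial also sets up Kibana, pull its image from the same registry while you’re at it:
docker pull docker.elastic.co/kibana/kibana:8.2.0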
Now you can start a single-node or multi-node cluster.
Start a single-node cluster with Docker
When you start a single-node Elasticsearch cluster in a Docker container, security is enabled and configured for you automatically. When you first start Elasticsearch, the following security configuration is applied automatically:
- For the transport and HTTP layers, certificates and keys are created.
- The Transport Layer Security (TLS) configuration settings are written to elasticsearch.yml.
- For the elastic user, a password is generated.
- For Kibana, an enrollment token is produced.
After that, start Kibana and enter the enrollment token.
The commands below create a single-node Elasticsearch cluster for development or testing:
1- Create a new docker network
docker network create elastic
2- Run Elasticsearch in Docker. A password for the elastic user is generated and printed to the terminal, along with an enrollment token for Kibana.
docker run --name es01 --net elastic -p 9200:9200 -p 9300:9300 -it docker.elastic.co/elasticsearch/elasticsearch:8.2.0
3- Copy the http_ca.crt SSL certificate from your Docker container to your local machine.
docker cp es01:/usr/share/elasticsearch/config/certs/http_ca.crt .
4- Open a new terminal and use the http_ca.crt file you copied from your Docker container to verify that you can connect to your Elasticsearch cluster with an authenticated call. When prompted, enter the elastic user’s password.
curl --cacert http_ca.crt -u elastic https://localhost:9200
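With Elasticsearch up, you can now start Kibana on the same Docker network, as described earlier; the container name kib01 is arbitrary. Once it starts, open http://localhost:5601 in your browser and paste the enrollment token when prompted:
docker run --name kib01 --net elastic -p 5601:5601 docker.elastic.co/kibana/kibana:8.2.0
If you misplaced the password or enrollment token printed in step 2, you can regenerate them from the running container:
docker exec -it es01 /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic
docker exec -it es01 /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana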
Start a multi-node cluster with Docker Compose
You may use Docker Compose to set up a multi-node Elasticsearch cluster and Kibana in Docker with security enabled.
Prepare the environment
The docker-compose.yml configuration file uses environment variables set in the .env file. Use the ELASTIC_PASSWORD and KIBANA_PASSWORD variables to set strong passwords for the elastic and kibana_system users. The docker-compose.yml file references these variables.
# Password for the 'elastic' user (at least 6 characters)
ELASTIC_PASSWORD=
# Password for the 'kibana_system' user (at least 6 characters)
KIBANA_PASSWORD=
# Version of Elastic products
STACK_VERSION=8.2.0
# Set the cluster name
CLUSTER_NAME=docker-cluster
# Set to 'basic' or 'trial' to automatically start the 30-day trial
LICENSE=basic
#LICENSE=trial
# Port to expose Elasticsearch HTTP API to the host
ES_PORT=9200
#ES_PORT=127.0.0.1:9200
# Port to expose Kibana to the host
KIBANA_PORT=5601
#KIBANA_PORT=80
# Increase or decrease based on the available host memory (in bytes)
MEM_LIMIT=1073741824
# Project namespace (defaults to the current folder name if not set)
#COMPOSE_PROJECT_NAME=myproject
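Docker Compose reads the .env file automatically from the directory you run it in. To check that the variables are substituted the way you expect, you can render the resolved configuration:
docker-compose config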
docker-compose.yml
While Docker Compose can run a full multi-node cluster with security enabled, the docker-compose.yml file below is a simplified example: it creates a single-node Elasticsearch instance with security disabled (suitable for local development; the password variables in .env are therefore unused) along with a Kibana instance, taking the image version and ports from the .env variables.
version: "3.7"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    container_name: elasticsearch
    restart: always
    environment:
      - xpack.security.enabled=false
      - discovery.type=single-node
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    cap_add:
      - IPC_LOCK
    volumes:
      - elasticsearch-data-volume:/usr/share/elasticsearch/data
    ports:
      - "${ES_PORT}:9200"
  kibana:
    container_name: kibana
    image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
    restart: always
    environment:
      SERVER_NAME: kibana
      ELASTICSEARCH_HOSTS: http://elasticsearch:9200
    ports:
      - "${KIBANA_PORT}:5601"
    depends_on:
      - elasticsearch

volumes:
  elasticsearch-data-volume:
    driver: local
To start the cluster, run docker-compose:
docker-compose up
Log messages are sent to the console and handled by the configured Docker logging driver. By default, you can access logs with docker logs.
To stop the cluster, run docker-compose down. The data in the Docker volumes is preserved and reloaded when you restart the cluster with docker-compose up. To delete the data volumes when bringing the cluster down, add the -v option: docker-compose down -v.
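Because security is disabled in this simplified setup, you can confirm the node is healthy with a plain HTTP call to the cluster health API:
curl "http://localhost:9200/_cluster/health?pretty"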
In-production use of Docker images
When operating Elasticsearch in Docker in production, the following prerequisites and suggestions apply.
Set vm.max_map_count to at least 262144
For production use, the vm.max_map_count kernel setting must be set to at least 262144.
How you set vm.max_map_count depends on your platform.
Linux:
The vm.max_map_count parameter should be set permanently in /etc/sysctl.conf:
grep vm.max_map_count /etc/sysctl.conf
vm.max_map_count=262144
Run the following command to apply the setting to a live system:
sysctl -w vm.max_map_count=262144
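You can read back the current value to confirm the change took effect:
sysctl vm.max_map_count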
macOS with Docker for Mac:
The vm.max_map_count setting must be set within the xhyve virtual machine:
1- From the command line, run:
screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty
2- Press enter and use sysctl to configure vm.max_map_count:
sysctl -w vm.max_map_count=262144
3- To exit the screen session, type Ctrl a d.
Windows and macOS with Docker Desktop
The vm.max_map_count setting must be set via docker-machine:
docker-machine ssh
sudo sysctl -w vm.max_map_count=262144
Windows with Docker Desktop WSL 2 backend
The vm.max_map_count setting must be set in the docker-desktop container:
wsl -d docker-desktop sysctl -w vm.max_map_count=262144
The elasticsearch user must be able to read the configuration files.
Elasticsearch operates as user elasticsearch with uid:gid 1000:0 by default inside the container.
The elasticsearch user must be able to read any local directory or file that you bind-mount. This user must also have write access to the data and log directories. Giving group access to gid 0 for the local directory is a solid strategy.
For example, to prepare a local directory for storing data through a bind-mount:
mkdir esdatadir
chmod g+rwx esdatadir
chgrp 0 esdatadir
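The directory can then be bind-mounted into the container in place of a named volume (a sketch; the host path and single-node setting are assumptions for a local test):
docker run -p 9200:9200 -p 9300:9300 \
  -e discovery.type=single-node \
  -v "$(pwd)/esdatadir:/usr/share/elasticsearch/data" \
  docker.elastic.co/elasticsearch/elasticsearch:8.2.0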
The final step is to verify the results.
Open http://localhost:5601/ (the Kibana server) and http://localhost:9200/ (the Elasticsearch HTTP endpoint) in your browser to check that both are up.