compose-elk
The Elastic Stack powered by Docker and Compose.
What is the Elastic Stack?
By combining the massively popular Elasticsearch, Logstash, and Kibana, Elastic has created an end-to-end stack that delivers actionable insights in real time from almost any type of structured and unstructured data source. Built and supported by the engineers behind each of these open source products, the Elastic Stack makes searching and analyzing data easier than ever before.
Setup
Install Docker
- Docker engine
- Docker Compose
- Clone this repository:
git clone https://github.com/khezen/docker-elk
File Descriptors and MMap (Linux Only)
Elasticsearch requires a higher vm.max_map_count than the default on most Linux distributions; run the following command on your host:
sysctl -w vm.max_map_count=262144
You can set it permanently by modifying the vm.max_map_count setting in your /etc/sysctl.conf.
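For example, a minimal sketch for persisting the setting on most Linux distributions (adjust the path if your system uses /etc/sysctl.d/ instead):

# append the setting and reload it without rebooting
$ echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
$ sudo sysctl -p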
Usage
Start the Elastic Stack using docker-compose:
$ docker-compose up
You can also choose to run it in background:
$ docker-compose up -d
Now that the stack is running, you'll want to inject logs into it. The shipped Logstash configuration allows you to send content via TCP or UDP:
$ nc localhost 5000 < ./logstash-init.log
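You can also ship an ad-hoc line over the same TCP input to check that Logstash is receiving data, for example:

# hypothetical one-off message sent to the Logstash TCP input on port 5000
$ echo "hello from $(hostname) at $(date)" | nc localhost 5000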
And then access Kibana by hitting http://localhost:5601 with a web browser.
WARNING: If you're using boot2docker or Docker Toolbox, you must access it via the boot2docker IP address instead of localhost.
NOTE: You need to inject data into Logstash before being able to create a Logstash index in Kibana. Then all you should have to do is hit the Create button.
By default, the Elastic Stack exposes the following ports:
- 5000: Logstash TCP input
- 9200: Elasticsearch HTTP
- 9300: Elasticsearch TCP transport
- 5601: Kibana
Docker Swarm
Deploy the Elastic Stack on your cluster using docker swarm:
- Connect to a manager node of the swarm
git clone https://github.com/khezen/docker-elk
cd docker-elk
docker stack deploy -c swarm-stack.yml elk
The number of replicas for each service can be edited in swarm-stack.yml:
...
deploy:
  mode: replicated
  replicas: 2
...
Services are load balanced using HAProxy.
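After deploying, you can verify that each service reached its desired number of replicas; a quick check, assuming the stack was deployed under the name elk as above:

# list the services of the elk stack and their replica counts
$ docker stack services elk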
Elasticsearch
Configuration file is located in /etc/elasticsearch/elasticsearch.yml.
You can find the default config there.
You can find help with Elasticsearch configuration there.
You can edit docker-compose.yml to set the khezen/elasticsearch environment variables yourself:
elasticsearch:
  image: khezen/elasticsearch
  environment:
    HEAP_SIZE: 1g
    ELASTIC_PWD: changeme
    KIBANA_PWD: changeme
    LOGSTASH_PWD: changeme
    BEATS_PWD: changeme
    ELASTALERT_PWD: changeme
  volumes:
    - /data/elasticsearch:/usr/share/elasticsearch/data
    - /etc/elasticsearch:/usr/share/elasticsearch/config
  ports:
    - "9200:9200"
    - "9300:9300"
  networks:
    - elk
  restart: unless-stopped
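Once the container is up, a quick sanity check from the host could look like the following, assuming you kept the default elastic / changeme credentials from the snippet above (if SearchGuard serves HTTPS on port 9200 in your setup, switch to https:// and point curl at your CA, or use -k):

# ask Elasticsearch for its cluster health
$ curl -u elastic:changeme 'http://localhost:9200/_cluster/health?pretty'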
Kibana
- Discover - explore your data,
- Visualize - create visualizations of your data,
  - You can find exported visualizations under the ./visualizations folder,
  - To import them in Kibana, go to the Management -> Saved Objects panel,
- Dashboard - displays a collection of saved visualizations,
  - You can find exported dashboards under the ./dashboards folder,
  - To import them in Kibana, go to the Management -> Saved Objects panel,
- Timelion - combine totally independent data sources within a single visualization.
Configuration file is located in /etc/kibana/kibana.yml.
You can find the default config there.
You can find help with Kibana configuration there.
You can edit docker-compose.yml to set the khezen/kibana environment variables yourself:
kibana:
  links:
    - elasticsearch
  image: khezen/kibana
  environment:
    KIBANA_PWD: changeme
    ELASTICSEARCH_HOST: elasticsearch
    ELASTICSEARCH_PORT: 9200
  volumes:
    - /etc/kibana:/etc/kibana
    - /etc/elasticsearch/searchguard/ssl:/etc/searchguard/ssl
  ports:
    - "5601:5601"
  networks:
    - elk
  restart: unless-stopped
Logstash
Configuration file is located in /etc/logstash/logstash.conf.
You can find the default config there.
NOTE: It is possible to use environment variables in logstash.conf, as shown below.
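For instance, a minimal sketch of an elasticsearch output in logstash.conf that picks up the environment variables set in docker-compose.yml below (the ${VAR} syntax is Logstash's environment-variable substitution):

# hypothetical output section of /etc/logstash/logstash.conf
output {
  elasticsearch {
    hosts    => ["${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}"]
    user     => "logstash"
    password => "${LOGSTASH_PWD}"
  }
}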
You can find help with Logstash configuration there.
You can edit docker-compose.yml to set the khezen/logstash environment variables yourself:
logstash:
  links:
    - elasticsearch
  image: khezen/logstash
  environment:
    HEAP_SIZE: 1g
    LOGSTASH_PWD: changeme
    ELASTICSEARCH_HOST: elasticsearch
    ELASTICSEARCH_PORT: 9200
  volumes:
    - /etc/logstash:/etc/logstash/conf.d
    - /etc/elasticsearch/searchguard/ssl:/etc/elasticsearch/searchguard/ssl
  ports:
    - "5000:5000"
    - "5001:5001"
  networks:
    - elk
  restart: unless-stopped
Beats
The Beats are open source data shippers that you install as agents on your servers to send different types of operational data to Elasticsearch.
any beat
You need to provide the Elasticsearch host:port and the credentials of the beats user in the configuration file:
output.elasticsearch:
  hosts: ["<ELASTICSEARCH_HOST>:<ELASTICSEARCH_PORT>"]
  index: "packetbeat"
  user: beats
  password: <BEATS_PWD>
metricbeat
You can find help with metricbeat installation here.
Configuration file is located in /etc/metricbeat/metricbeat.yml.
You can find help with metricbeat configuration here.
Start it with sudo /etc/init.d/metricbeat start.
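For illustration, a minimal sketch of the module and output sections of /etc/metricbeat/metricbeat.yml for host metrics; the index name and credentials are assumptions, so mirror your own BEATS_PWD:

# hypothetical /etc/metricbeat/metricbeat.yml excerpt
metricbeat.modules:
  - module: system
    metricsets: ["cpu", "memory", "filesystem"]
    period: 10s

output.elasticsearch:
  hosts: ["localhost:9200"]
  index: "metricbeat"
  user: beats
  password: changeme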
filebeat
You can find help with filebeat installation here.
Configuration file is located in /etc/filebeat/filebeat.yml.
You can find help with filebeat configuration here.
Start it with sudo /etc/init.d/filebeat start.
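As an illustration, a log input in /etc/filebeat/filebeat.yml might look like this (filebeat.prospectors applies to Beats 5.x/6.x; newer versions use filebeat.inputs with type: log):

# hypothetical /etc/filebeat/filebeat.yml excerpt
filebeat.prospectors:
  - input_type: log
    paths:
      - /var/log/*.log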
packetbeat
You can find help with packetbeat installation here.
Configuration file is located in /etc/packetbeat/packetbeat.yml.
You can find help with packetbeat configuration here.
Start it with sudo /etc/init.d/packetbeat start.
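As an illustration, a minimal sniffing setup in /etc/packetbeat/packetbeat.yml might capture HTTP traffic like this; the interface and ports are assumptions, and the exact syntax depends on your Beats version:

# hypothetical /etc/packetbeat/packetbeat.yml excerpt
packetbeat.interfaces.device: any
packetbeat.protocols.http:
  ports: [80, 8080]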
Elastalert
What is Elastalert?
ElastAlert is a simple framework for alerting on anomalies, spikes, or other patterns of interest in data stored in Elasticsearch. It is a nice replacement for the Watcher module if you are not willing to pay for the X-Pack subscription but still need some alerting features.
Configuration
Configuration file is located in /etc/elastalert/elastalert.yml.
You can find help with elastalert configuration here.
You can share rules from the host with the container by adding them to /usr/share/elastalert/rules.
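For example, a minimal sketch of a frequency rule you could drop into that folder; the index pattern, threshold, filter, and e-mail address are placeholders to adapt:

# hypothetical rule file: /usr/share/elastalert/rules/too_many_errors.yml
name: too many errors
type: frequency
index: logstash-*
num_events: 50
timeframe:
  minutes: 5
filter:
  - term:
      level: "error"
alert:
  - "email"
email:
  - "ops@example.com"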
User Feedback
Issues
If you have any problems with or questions about this project, please ask for help through a GitHub issue.