
Kibana doesn't show up

jffz opened this issue 8 years ago · 3 comments

Hi, I played with your ELK image for some days before deciding to switch to this one to add a security layer to my experiments.

After some tests, Kibana doesn't start and its log file is empty.

elk:
  image: sebp/elkx
  ports:
    - "5601:5601"
    - "9200:9200"
    - "5044:5044"
  environment:
    TZ: 'Europe/Paris'
  volumes:
    - ./etc/kibana.yml:/opt/kibana/config/kibana.yml
    - ./etc/logstash/40-srcds.conf:/etc/logstash/conf.d/40-srcds.conf
    - ./patterns/srcds:/opt/logstash/patterns/srcds
    - ./data:/var/lib/elasticsearch
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp6       0      0 :::9200                 :::*                    LISTEN      -
tcp6       0      0 :::5044                 :::*                    LISTEN      -
tcp6       0      0 :::9300                 :::*                    LISTEN      -
tcp6       0      0 127.0.0.1:9600          :::*                    LISTEN      -
docker exec -it elkx_elk_1 ls -la /var/log/kibana/kibana5.log
-rw-r--r-- 1 kibana kibana 0 Apr 12 14:21 /var/log/kibana/kibana5.log
docker logs elkx_elk_1
 * Starting periodic command scheduler cron
   ...done.
 * Starting Elasticsearch Server
   ...done.
waiting for Elasticsearch to be up (1/30)
waiting for Elasticsearch to be up (2/30)
waiting for Elasticsearch to be up (3/30)
waiting for Elasticsearch to be up (4/30)
waiting for Elasticsearch to be up (5/30)
waiting for Elasticsearch to be up (6/30)
waiting for Elasticsearch to be up (7/30)
waiting for Elasticsearch to be up (8/30)
waiting for Elasticsearch to be up (9/30)
Waiting for Elasticsearch cluster to respond (1/30)
logstash started.
 * Starting Kibana5
   ...done.
==> /var/log/elasticsearch/elasticsearch.log <==
[2017-04-12T14:21:10,088][INFO ][o.e.n.Node               ] [WNRkAuz] starting ...
[2017-04-12T14:21:10,266][WARN ][i.n.u.i.MacAddressUtil   ] Failed to find a usable hardware address from the network interfaces; using random bytes: 54:80:85:7a:92:8e:9d:5a
[2017-04-12T14:21:10,345][INFO ][o.e.t.TransportService   ] [WNRkAuz] publish_address {172.17.0.2:9300}, bound_addresses {[::]:9300}
[2017-04-12T14:21:10,356][INFO ][o.e.b.BootstrapChecks    ] [WNRkAuz] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
[2017-04-12T14:21:13,510][INFO ][o.e.c.s.ClusterService   ] [WNRkAuz] new_master {WNRkAuz}{WNRkAuzoR3-Zi3BB2ZvrFw}{JFGuHAGtSMeNtpuy8P2T6w}{172.17.0.2}{172.17.0.2:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2017-04-12T14:21:13,551][INFO ][o.e.h.HttpServer         ] [WNRkAuz] publish_address {172.17.0.2:9200}, bound_addresses {[::]:9200}
[2017-04-12T14:21:13,551][INFO ][o.e.n.Node               ] [WNRkAuz] started
[2017-04-12T14:21:14,157][INFO ][o.e.l.LicenseService     ] [WNRkAuz] license [3407e1bb-2b66-4638-9d2b-90bf132b9bb8] mode [trial] - valid
[2017-04-12T14:21:14,178][INFO ][o.e.g.GatewayService     ] [WNRkAuz] recovered [3] indices into cluster_state
[2017-04-12T14:21:14,944][INFO ][o.e.c.r.a.AllocationService] [WNRkAuz] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.monitoring-es-2-2017.04.12][0], [.monitoring-data-2][0]] ...]).

==> /var/log/logstash/logstash-plain.log <==

==> /var/log/kibana/kibana5.log <==

==> /var/log/logstash/logstash-plain.log <==
[2017-04-12T14:21:33,987][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/opt/logstash/data/queue"}
[2017-04-12T14:21:34,013][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"f5184e09-b481-48d9-a3d4-48c695f0078d", :path=>"/opt/logstash/data/uuid"}
[2017-04-12T14:21:35,161][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic:xxxxxx@localhost:9200/]}}
[2017-04-12T14:21:35,164][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elastic:xxxxxx@localhost:9200/, :path=>"/"}
[2017-04-12T14:21:35,367][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0x4e948282 URL:http://elastic:xxxxxx@localhost:9200/>}
[2017-04-12T14:21:35,375][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<URI::Generic:0x7990f6ba URL://localhost>]}
[2017-04-12T14:21:35,701][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250}
[2017-04-12T14:21:36,134][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2017-04-12T14:21:36,190][INFO ][logstash.pipeline        ] Pipeline main started
[2017-04-12T14:21:36,255][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

jffz · Apr 13 '17 15:04

Strange… it seems that Kibana didn't actually start.

Provided you left enough time for Kibana to start, two quick tests:

  • Do you have enough memory for Kibana to start? (i.e. more than 3GB, preferably more than 4GB)
  • Could you try to run the vanilla version of the image, without overriding the configuration files? (If your kibana.yml is invalid, perhaps Kibana won't start.)
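For reference, running the vanilla configuration can be as simple as your original compose service with the overrides removed. This is only a sketch for the test, not a replacement for your setup (ports reused from your file above):

```yaml
# Minimal sketch: the same service with the volume mounts and TZ override
# removed, so the image starts with its stock configuration files.
elk:
  image: sebp/elkx
  ports:
    - "5601:5601"
    - "9200:9200"
    - "5044:5044"
```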

spujadas · Apr 13 '17 16:04

Kibana starts with the 'vanilla' image. The elk image also starts with my settings, and I have enough memory on the system.

jffz · Apr 13 '17 20:04

Sounds like you're running the container under the right conditions and there's no issue with the image per se, so it's probably a configuration error. I'd suggest asking for guidance over at https://discuss.elastic.co/c/kibana; they will most certainly be able to help you.
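Since the vanilla image works, one way to locate the faulty override (a sketch reusing the mounts from the compose file earlier in this thread) is to re-add them one at a time, restarting the container between steps:

```yaml
# Hypothetical bisection step: start from the working vanilla service and
# re-add a single mount; if Kibana stops starting, that mount is the culprit.
elk:
  image: sebp/elkx
  ports:
    - "5601:5601"
    - "9200:9200"
    - "5044:5044"
  volumes:
    - ./etc/kibana.yml:/opt/kibana/config/kibana.yml  # first candidate to test
```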

spujadas · Apr 13 '17 22:04