
No logging if ELK stack is not fully up

Open bfleming-ciena opened this issue 7 years ago • 10 comments

I was using a Docker Hub image that provides a self-contained ELK stack, and I was trying to run the ELK stack and logspout-logstash from a single docker-compose.yml.

It seems that if the ELK stack is not FULLY up, logspout-logstash never starts logging. Restarting the logspout container does correct it. So, just noting that here: it would be nice if it would do retries or something, unless perhaps it's an issue with my setup.

Here is my docker-compose file:

elk:
  image: sebp/elk
  ports:
    - "5601:5601"
    - "9200:9200"
    - "5044:5044"
    - "5000:5000"
    - "5000:5000/udp"
  environment:
    - ES_HEAP_SIZE=12g
    - LS_HEAP_SIZE=12g
  volumes:
    - $HOME/dev/elk/logspout.conf:/etc/logstash/conf.d/logspout.conf
logspout:
  image: local/logspout
  container_name: logspout
  environment:
    - LOGSPOUT=ignore
    - ROUTE_URIS=logstash+tcp://<HOSTIP>:5000
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock
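
One possible mitigation (my assumption; the thread doesn't confirm this fixes it) is to let Docker re-run logspout whenever it exits before the ELK stack is ready, via a restart policy on the logspout service:

```yaml
logspout:
  image: local/logspout
  restart: unless-stopped   # have Docker restart logspout if it exits early
```

This only helps if logspout actually exits on a failed connection rather than hanging silently.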

bfleming-ciena avatar Jan 30 '17 19:01 bfleming-ciena

Noticed the same thing (technically while running the dockerized version of this). Re-created a logstash container and had to restart this logspout container in order to make it work.

I'm also talking from logspout to logstash over UDP, so I would expect those packets to simply be dropped while logstash wasn't running, then delivery to resume when logstash came back online. That is not the behavior I'm seeing, though.

mattdodge avatar Feb 23 '17 18:02 mattdodge

I would happily merge a PR if anyone wants to submit a fix for this.

maxekman avatar Mar 08 '17 08:03 maxekman

I'm working on a change that lets logspout-logstash retry when starting (connecting for the first time) and when sending data.

It looks OK when logspout-logstash is started and logstash isn't ready yet; it just waits and then starts logging.

The part I'm not sure about is retrying when sending. It recovers when using UDP and the target does not change. It doesn't recover when using TCP, since the connection is broken, and it won't survive the address of the log target changing. So perhaps I shouldn't attempt that, and instead just let logspout die and have Docker restart the container. Unfortunately, if I understand the way logspout plugins work correctly, forcing a reconnect in that situation isn't possible.

In any case, log lines would be lost.

Any opinions or suggestions about that?

iljaweis avatar Mar 09 '17 14:03 iljaweis

For completeness, my log:

logstash: could not write:write udp 172.28.0.4:54914->172.28.0.3:5001: write: connection refused

luckydonald avatar Apr 15 '17 00:04 luckydonald

@iljaweis: I think exiting the container would be an acceptable workaround until some real solution is found. Some data loss is better than having it hang for a day or two unnoticed.

Also maybe it needs to refresh the DNS addresses somehow?

luckydonald avatar Apr 15 '17 00:04 luckydonald

@luckydonald that's what I did in #48 which has been merged.

iljaweis avatar Apr 18 '17 11:04 iljaweis

Could #51 help with this, do you think?

maxekman avatar Jul 12 '17 10:07 maxekman

Seems like no one's looked into this for a while. I'm still seeing the issue when the log target's IP changes. Is there any update, or are you still looking for a PR?

matutter avatar Jul 31 '20 20:07 matutter

I’m not using the lib right now so a PR would be welcome!

maxekman avatar Aug 03 '20 14:08 maxekman

Well, I can replicate the problem easily, but I can't pinpoint it exactly. It's definitely an issue in logspout proper and not in this plugin. Building logspout from the root of its repository instead of from the custom/ folder produces a version of logspout that doesn't have the issue the OP describes, nor my problem when the IP changes.

matutter avatar Aug 04 '20 21:08 matutter