
Log stream memory heap

Open shaikatzir opened this issue 9 years ago • 8 comments

Sometimes my logstash server crashes, preventing the bunyan stream from sending logs for a few hours. Every time it happens, the internal memory usage fills up quickly. I guessed this is caused by bunyan keeping the logs queued up before sending them to the server, but I couldn't find any documentation about it. Is there any way to configure or check the size of the log queue?

shaikatzir avatar May 23 '15 12:05 shaikatzir

I'm also running into this issue. I noticed that npm install complains about the npm version, since we upgraded to npm 2.x.

libreninja avatar Jun 08 '15 19:06 libreninja

Does the bunyan client crash as well? You could try to increase the buffer size: https://github.com/chris-rock/bunyan-logstash-tcp/blob/master/lib/logstash.js#L54
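
For reference, a minimal sketch of how that could look when creating the stream. The option name `cbuffer_size` is an assumption on my part, so check the line linked above for the actual name your version uses:

```js
var bunyan = require('bunyan');
var bunyantcp = require('bunyan-logstash-tcp');

var log = bunyan.createLogger({
  name: 'example',
  streams: [{
    level: 'debug',
    type: 'raw', // bunyan-logstash-tcp emits raw objects, not serialized strings
    stream: bunyantcp.createStream({
      host: '127.0.0.1',
      port: 9998,
      cbuffer_size: 100 // assumed option name for the internal buffer size; verify against logstash.js#L54
    }).on('error', console.error)
  }]
});
```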

chris-rock avatar Jun 09 '15 09:06 chris-rock

@libreninja I relaxed the npm dependency in the latest master release

chris-rock avatar Jun 09 '15 09:06 chris-rock

I am not sure if the bunyan client crashes. The question is why the memory keeps growing without any limit. Is the buffer cyclic? Does it delete old messages?

shaikatzir avatar Jun 09 '15 11:06 shaikatzir

We use a fixed-size cyclic buffer, so memory should not increase due to the buffer itself. See https://github.com/trevnorris/cbuffer
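
A small sketch of how such a fixed-size circular buffer behaves, using the CBuffer package from the repo linked above (API assumed from its README):

```js
var CBuffer = require('CBuffer');

var buffer = new CBuffer(3);   // capacity is fixed at 3 entries
buffer.push('log 1');
buffer.push('log 2');
buffer.push('log 3');
buffer.push('log 4');          // overwrites the oldest entry ('log 1')

console.log(buffer.toArray()); // -> [ 'log 2', 'log 3', 'log 4' ]
// The capacity never grows, so memory held by the buffer stays bounded.
```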

chris-rock avatar Jun 09 '15 11:06 chris-rock

Hi, I recently experienced very slow responses from node because something was cluttering the event loop or connections. I disabled bunyan-logstash-tcp and now it is OK. I think the problem could be: if the TCP stream does not connect to the logstash instance (e.g. logstash server down), it somehow occupies node resources and ordinary requests to node take longer than 60 sec (our proxy timeout per request).

freddi301 avatar Dec 09 '16 19:12 freddi301

Hi, sorry for the mistake, it was an unrelated issue (mongo connection pool duplication).

freddi301 avatar Feb 02 '17 16:02 freddi301

We had the same issue. At 16:00 the Elasticsearch instance crashed and our clustered application wasn't available anymore.


At the very least, we should be able to catch the dropped logs and write them to a file or something.
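
A possible workaround, sketched under the assumption that the stream emits 'error' events when the connection fails; the file paths are hypothetical, and this keeps a local copy of every record rather than only the dropped ones:

```js
var fs = require('fs');
var bunyan = require('bunyan');
var bunyantcp = require('bunyan-logstash-tcp');

var log = bunyan.createLogger({
  name: 'example',
  streams: [
    {
      level: 'debug',
      type: 'raw',
      stream: bunyantcp.createStream({
        host: '127.0.0.1',
        port: 9998
      }).on('error', function (err) {
        // connection errors land here instead of crashing the process
        fs.appendFileSync('/var/log/app-logstash-errors.log', String(err) + '\n');
      })
    },
    // local fallback so records survive even when logstash/elastic is down
    { level: 'debug', path: '/var/log/app-fallback.log' }
  ]
});
```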

konstantinkrassmann avatar Feb 22 '17 08:02 konstantinkrassmann