
writing to elastic isnt working

Open eran10 opened this issue 5 years ago • 8 comments

Hi, I would like to send my Pino logs to Elasticsearch. We are using Elasticsearch 6 with the latest versions of pino and pino-elasticsearch, and I configure Pino as follows:

let pino = require('pino'),
    pinoElastic = require('pino-elasticsearch');

const streamToElastic = pinoElastic({
    index: `log-test-%{DATE}`,
    type: 'info',
    consistency: 'one',
    node: 'http://myuser:mypass@localhost:9200',
    'trace-level': 'info',
    'es-version': 6,
    'bulk-size': 1,
    ecs: true
});

and then

const logger = pino(pinoOptions, streamToElastic);
logger.info('test');

The app runs fine, but no logs are printed to the console and no logs are sent to Elasticsearch, with no errors at all. Am I missing something?

eran10 avatar Jan 13 '20 13:01 eran10

You should use https://github.com/pinojs/pino-multi-stream to print to stdout as well.
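A minimal sketch of that suggestion, combining pino-multi-stream with pino-elasticsearch so logs go to both stdout and Elasticsearch. This assumes all three packages are installed and that an Elasticsearch 6 node is reachable at the placeholder URL; the index name is also a placeholder.

```javascript
// Duplicate each log record to the console and to the Elasticsearch bulk indexer.
const pino = require('pino')
const multistream = require('pino-multi-stream').multistream
const pinoElastic = require('pino-elasticsearch')

const streamToElastic = pinoElastic({
  index: 'log-test-%{DATE}',          // placeholder index pattern
  node: 'http://localhost:9200',      // placeholder Elasticsearch node
  'es-version': 6
})

const logger = pino({ level: 'info' }, multistream([
  { stream: process.stdout },         // plain console output
  { stream: streamToElastic }         // bulk-indexed into Elasticsearch
]))

logger.info('test')
```

With this setup the `logger.info('test')` line should appear on the console immediately, even while the Elasticsearch side is still buffering.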

As for the reason logs are not popping up in Elastic... I don't know. cc @delvedor

mcollina avatar Jan 13 '20 15:01 mcollina

Thanks, I will check.

eran10 avatar Jan 20 '20 14:01 eran10

@eran10, have there been any developments regarding this issue?

DavidPVaz avatar May 20 '20 09:05 DavidPVaz

I can share an update: we have been working on improving the ECS support, and if you are using Pino v6 you can now use @elastic/ecs-pino-format instead of enabling the ecs option here. @mcollina we should probably deprecate it :)
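A sketch of what that suggestion looks like in practice. The package names come from the comment above; the default-function export style of @elastic/ecs-pino-format, the index name, and the node URL are assumptions here.

```javascript
// Shape each Pino record as an ECS document via @elastic/ecs-pino-format,
// instead of passing `ecs: true` to pino-elasticsearch.
const pino = require('pino')
const ecsFormat = require('@elastic/ecs-pino-format')
const pinoElastic = require('pino-elasticsearch')

const streamToElastic = pinoElastic({
  index: 'log-test',                  // placeholder index name
  node: 'http://localhost:9200'       // placeholder Elasticsearch node
})

// ecsFormat() returns Pino options (formatters, timestamp, etc.)
// that produce ECS-compatible JSON.
const logger = pino(ecsFormat(), streamToElastic)
logger.info('hello world')
```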

delvedor avatar May 20 '20 15:05 delvedor

I found that if I write more logs, some of them are sent, so I think it might be caused by flushBytes.

If I set flushBytes to 10, all logs appear.

This might be an issue where the last few logs cannot be flushed before the Node.js app terminates.

JoHuang avatar Jun 08 '20 12:06 JoHuang

> I found that if I write more logs, some of them are sent, so I think it might be caused by flushBytes. If I set flushBytes to 10, all logs appear.

This is the correct behavior :) By default we collect 5 MB of logs before sending them, to avoid overloading Elasticsearch. You can easily change that limit with the --flush-bytes option.
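The programmatic counterpart of that CLI option is the `flushBytes` setting mentioned above. A sketch, assuming a local Elasticsearch 6 node and a placeholder index name:

```javascript
// Lower the flush threshold so small log volumes are sent promptly.
// The default is 5 MB, which is why a handful of logs never seemed to arrive.
const pinoElastic = require('pino-elasticsearch')

const streamToElastic = pinoElastic({
  index: 'log-test',                  // placeholder index name
  node: 'http://localhost:9200',      // placeholder Elasticsearch node
  'es-version': 6,
  flushBytes: 10                      // flush almost immediately
})
```

A tiny threshold like this is handy for testing, but in production a larger value avoids hammering Elasticsearch with one bulk request per log line.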

> This might be an issue where the last few logs cannot be flushed before the Node.js app terminates.

It should not; as soon as the process ends, the bulk indexer does a final flush.

Anyhow, in the next version of the bulk indexer there will be a flush timeout option as well :)

delvedor avatar Jun 08 '20 16:06 delvedor

> This might be an issue where the last few logs cannot be flushed before the Node.js app terminates.
>
> It should not; as soon as the process ends, the bulk indexer does a final flush.

I didn't see this behavior. How do I trigger it? Or could you point me to the code related to this behavior? Thanks.

JoHuang avatar Jun 08 '20 17:06 JoHuang

If you run the main process and pipe into this transport, it works automatically: when the stream from the main process ends, the transport ends with it and flushes.

node example.js | ./cli.js

If you pass this library directly to the Pino options and then kill the process, there is no guarantee that all the logs will be sent, as the process will be destroyed. As I was saying, the next version will support a flush interval. We can also think about a force-flush method.
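An expanded form of the pipe invocation above, using the CLI option names that appear elsewhere in this thread (`--flush-bytes`, `--es-version`). The script name, node URL, and index are placeholders, and the exact flag spellings should be checked against the installed pino-elasticsearch CLI.

```shell
# Run the app and the transport as separate processes; when the app's stdout
# closes, the transport's stream ends and the bulk indexer does its final flush.
node example.js | npx pino-elasticsearch \
  --node http://localhost:9200 \
  --index log-test \
  --es-version 6 \
  --flush-bytes 1000
```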

delvedor avatar Jun 09 '20 09:06 delvedor