log4stash
How do I connect it to my ELK stack?
Hey, first of all I wanted to say thank you for the project. I don't know how to connect it to my ELK stack. Do I need to create an input in Logstash and send the logs from the appender to port 5044? Please send me some instructions.
Hi, actually you don't need Logstash when using the appender. The appender writes directly to your Elasticsearch service.
You only have to configure the server and port of your Elasticsearch service.
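For reference, a minimal appender configuration looks roughly like the sketch below (I'm writing this from memory, so double-check the exact parameter names and defaults against the README):

```xml
<log4net>
  <!-- log4stash appender writing directly to Elasticsearch - no Logstash input needed -->
  <appender name="ElasticSearchAppender" type="log4stash.ElasticSearchAppender, log4stash">
    <Server>localhost</Server> <!-- your Elasticsearch host -->
    <Port>9200</Port>          <!-- default Elasticsearch HTTP port -->
    <IndexName>log_test_%{+yyyy-MM-dd}</IndexName>
    <IndexAsync>True</IndexAsync> <!-- avoid blocking the logging thread -->
  </appender>
  <root>
    <level value="ALL" />
    <appender-ref ref="ElasticSearchAppender" />
  </root>
</log4net>
```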
If you are using AWS, you will also have to add an <AuthenticationMethod> element.
Example:
<!--Only one credential type can be used at a time-->
<!--Here we list all possible types-->
<AuthenticationMethod>
<!--For basic authentication purposes-->
<Basic>
<Username>Username</Username>
<Password>Password</Password>
</Basic>
<!--For AWS ElasticSearch service-->
<Aws>
<Aws4SignerSecretKey>Secret</Aws4SignerSecretKey>
<Aws4SignerAccessKey>AccessKey</Aws4SignerAccessKey>
<Aws4SignerRegion>Region</Aws4SignerRegion>
</Aws>
</AuthenticationMethod>
The full config is available in the README.md.
Hello @urielha and thanks for your reply! (also on behalf of Jacob, as we're co-workers :) ) We got it working! I have another question if I may: how can we send only logs whose log level is ERROR? I couldn't figure it out from the README.
Edit: never mind, I figured out that it's the same as in regular log4net (threshold).
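For anyone else landing here, this is what worked for us - just the standard log4net threshold setting inside the appender (our config, not necessarily the only way to do it):

```xml
<appender name="ElasticSearchAppender" type="log4stash.ElasticSearchAppender, log4stash">
  <!-- standard log4net threshold: only events at ERROR level or above are sent -->
  <threshold value="ERROR" />
  <!-- ...rest of the appender settings as before... -->
</appender>
```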
One last question, however: what happens if, for example, Elasticsearch crashes and we try to write logs to it? Are those logs lost? Would it wait until Elasticsearch comes back online? Would it consume more CPU/RAM than usual?
Hi, sorry for the late answer.
Unfortunately, in that case the logs will be lost.
From what I remember it can also consume more RAM (regarding CPU, I don't think so, but if you don't use the async option it can slow your app down, since each request blocks until it times out).
You can mitigate that a little by lowering the timeout and playing with the bulk size.
Lastly, I added a new parameter called DropEventsOverBulkLimit. It basically throws away new log messages when the bulk is full (normally a new bulk object would be opened, and that is what causes the extra RAM usage).
DropEventsOverBulkLimit is a bit tricky; I can assist you further with it if needed.
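As an illustration, a conservative setup could look like this (the values are only examples and the parameter spellings should be verified against the README):

```xml
<appender name="ElasticSearchAppender" type="log4stash.ElasticSearchAppender, log4stash">
  <!-- smaller bulks are flushed sooner, which bounds memory usage -->
  <BulkSize>1000</BulkSize>
  <!-- flush a partial bulk after this many milliseconds of inactivity -->
  <BulkIdleTimeout>5000</BulkIdleTimeout>
  <!-- when the bulk is full, drop new events instead of buffering another bulk in RAM -->
  <DropEventsOverBulkLimit>True</DropEventsOverBulkLimit>
</appender>
```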