python-logstash-async
Python logging handler for sending log events asynchronously to Logstash.
When using LogstashFormatter, the log-record-specific "extra" is missing from the formatted message.

```python
from loguru import logger
from logstash_async.handler import AsynchronousLogstashHandler
from logstash_async.formatter import LogstashFormatter

logstash_handler = AsynchronousLogstashHandler(...
```
In my hunt for a bottleneck in one of our applications, I've found another few hotspots. Not that it was _slow_ before, but when you push a few thousand messages...
Hello. My team recently faced a problem with `python-logstash-async`. When the Logstash server went down for maintenance, the memory usage of one of our services started to grow rapidly. It turns out this...
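The usual mitigation for unbounded memory growth during an outage is a bounded buffer that evicts the oldest events instead of accumulating forever. This is a minimal sketch of that idea using a plain `collections.deque`, not the library's actual queue implementation; the `MAX_EVENTS` name and value are assumptions.

```python
from collections import deque

# Hypothetical cap on buffered events while the Logstash endpoint is down.
MAX_EVENTS = 10_000

# A deque with maxlen silently discards the oldest item when full,
# so memory usage stays bounded during an outage.
event_buffer = deque(maxlen=MAX_EVENTS)

def enqueue(event):
    event_buffer.append(event)

# Simulate an outage longer than the buffer can hold:
for i in range(15_000):
    enqueue({"msg": f"event {i}"})

print(len(event_buffer))  # capped at MAX_EVENTS; the first 5000 were dropped
```

The trade-off is losing the oldest events rather than crashing the service, which is usually the right call for log shipping.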
```python
import logging
import sys

from logstash_async.handler import AsynchronousLogstashHandler

host = 'localhost'

test_logger1 = logging.getLogger('python-logstash-logger')
test_logger1.setLevel(logging.INFO)
test_logger1.addHandler(AsynchronousLogstashHandler(
    host, 5002, database_path=None,
    transport='logstash_async.transport.UdpTransport'
))
test_logger1.info('logger1')

test_logger2 = logging.getLogger('python-logstash-logger2')
test_logger2.setLevel(logging.INFO)
test_logger2.addHandler(AsynchronousLogstashHandler(
    host, 5000,...
```
With Elasticsearch 7.0, the [elastic common schema](https://www.elastic.co/blog/introducing-the-elastic-common-schema) (ECS) was introduced. This maps the hostname to `host.name` instead of `host`. Currently `logstash_async` fails with:

> [2019-04-29T08:04:22,562][WARN ][logstash.outputs.elasticsearch] Could not index event to...
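The incompatibility comes down to the shape of the event: ECS expects `host` to be an object with a `name` key, while the formatter emits a flat string. A hedged sketch of the transformation, using a hypothetical `to_ecs` helper that is not part of `logstash_async`:

```python
def to_ecs(event: dict) -> dict:
    """Rewrite a flat 'host' string field into the nested ECS 'host.name' form."""
    event = dict(event)  # shallow copy so the caller's dict stays untouched
    hostname = event.pop("host", None)
    if hostname is not None:
        # ECS wants {"host": {"name": "..."}} rather than {"host": "..."}.
        event["host"] = {"name": hostname}
    return event

print(to_ecs({"host": "web-01", "message": "hello"}))
```

Applying a mapping like this before indexing avoids the `Could not index event` warning from the Elasticsearch output.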
```python LOGGING = { "version": 1, "disable_existing_loggers": False, "formatters": { "logstash": { "()": "logstash_async.formatter.DjangoLogstashFormatter", "message_type": "django-logstash", "fqdn": False, "extra_prefix": None, "extra": { "application": "my-app", "project_path": os.getcwd(), "environment": "test", }, },...
Support was added for specifying callables as extra fields. This is super useful if we're trying to attach span_id and trace_id to a log entry in order to cross-match it...
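The point of callable extras is that the value is evaluated per log record rather than once at configuration time, which is what makes per-request IDs like `trace_id` work. A minimal sketch of the resolution step; `resolve_extras` is a hypothetical helper for illustration, not the library's API:

```python
import uuid

def resolve_extras(extras: dict) -> dict:
    # Call each callable value at format time; pass static values through.
    return {key: (value() if callable(value) else value)
            for key, value in extras.items()}

extras = {
    "application": "my-app",               # static: same for every record
    "trace_id": lambda: uuid.uuid4().hex,  # callable: fresh per record
}

first = resolve_extras(extras)
second = resolve_extras(extras)
print(first["application"])
print(first["trace_id"] != second["trace_id"])  # each record gets its own ID
```

In practice the callable would read the current span/trace ID from the tracing context instead of generating a random one.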
I'm using python-logstash-async in a very particular context:

- High volume of logs every 2 seconds (~90 events)
- Hazardous/bad network (cellular, wireless, ...)
- Small/low-resource (embedded) systems

First...
I think the batch_size should not be equal to 10 but should be equal to constants.QUEUED_EVENTS_BATCH_SIZE. I.e., if constants.QUEUED_EVENTS_BATCH_SIZE = 1500, a batch size of 10 will generate 1500/10 = 150 transactions, and that will take...
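The arithmetic in the report above can be checked directly: flushing a full queue in batches of 10 produces many more sends than flushing it in one batch of the configured size. The constant name mirrors the library's `constants` module, but the value of 1500 is the reporter's example, not a documented default:

```python
import math

QUEUED_EVENTS_BATCH_SIZE = 1500  # assumed value from the report above
queued = 1500                    # events waiting in the queue

# Hard-coded batch size of 10 -> many small transactions:
print(math.ceil(queued / 10))                        # 150 transactions

# Using the configured batch size -> a single transaction:
print(math.ceil(queued / QUEUED_EVENTS_BATCH_SIZE))  # 1 transaction
```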
```
During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/logstash_async/worker.py", line 241, in _flush_queued_events
    self._send_events(events)
  File "/usr/local/lib/python3.10/dist-packages/logstash_async/worker.py", line 304, in _send_events
    self._transport.send(events, use_logging=use_logging)...
```