promscale
Add batching for traces
Batching helps achieve better ingest performance, especially when traces are sent one by one.
A batch is flushed when either of two conditions is met: the batch reaches its size limit, or a timeout expires.
This PR also adds async support for traces, meaning the client doesn't need to wait for the DB write. This increases ingest performance at a small risk of data loss.
The local benchmarks I've added indicate that batching should provide better performance; however, I still need to run a full-blown benchmark with more realistic data loads.
It would be good to see the perf difference with and without batching for Jaeger ingestion.
Yes, I will publish benchmark numbers in this PR soon.