Niksa Jakovljevic

Results: 35 comments by Niksa Jakovljevic

The local benchmarks that I've added indicate that batching should provide better performance; however, I still need to run a full-blown benchmark with more realistic data loads.

> Would be good to see perf difference with and without batching in case of Jaeger ingestion.

Yes, I will be publishing benchmark numbers in this PR soon.

@arajkumar Thanks for sharing this. Interesting finding. I would expect that at some point increasing the batch size stops improving ingest performance. It would be good to check what's...

Note that we also do batching in our ingestion pipeline, so maybe we are hitting some suboptimal code path there.

Btw, our internal batch size is 2K, so that might be a clue as to why it performs best with 2K ingest samples.

Ok. I've run some benchmarks locally and figured out that you were getting weird results due to the small amount of data (only 10K samples). I've tried with 100K samples and...

Btw, the reason bigger batches were slower in your runs is how the copier works. There is one channel shared by all copiers, and copiers are getting ingest...

I believe users can't tweak this, so it requires a code change. The first thing that comes to mind is removing the mutex when getting batches - this should allow other copiers...

I believe that compatibility between PG, TSDB, and the Promscale Extension should be solved inside the Promscale Extension. The Connector should then define compatibility with the Extension version only. This way we separate concerns....
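With that separation, the only check the Connector needs is a version-range check against the Extension. A minimal sketch of such a check (the helpers `parse`, `less`, and `compatible` are hypothetical names, not the connector's actual API):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parse splits a "major.minor.patch" string into its numeric parts.
func parse(v string) ([3]int, error) {
	var out [3]int
	parts := strings.Split(v, ".")
	if len(parts) != 3 {
		return out, fmt.Errorf("bad version %q", v)
	}
	for i, p := range parts {
		n, err := strconv.Atoi(p)
		if err != nil {
			return out, err
		}
		out[i] = n
	}
	return out, nil
}

// less compares two versions lexicographically on (major, minor, patch).
func less(a, b [3]int) bool {
	if a[0] != b[0] {
		return a[0] < b[0]
	}
	if a[1] != b[1] {
		return a[1] < b[1]
	}
	return a[2] < b[2]
}

// compatible reports whether the extension version lies in the
// connector's supported half-open range [min, max). Under the
// separation of concerns described above, this is the Connector's
// only compatibility check; PG/TSDB support lives in the Extension.
func compatible(ext, min, max string) (bool, error) {
	ev, err := parse(ext)
	if err != nil {
		return false, err
	}
	lo, err := parse(min)
	if err != nil {
		return false, err
	}
	hi, err := parse(max)
	if err != nil {
		return false, err
	}
	return !less(ev, lo) && less(ev, hi), nil
}

func main() {
	ok, _ := compatible("0.5.2", "0.5.0", "0.7.0")
	fmt.Println(ok) // true: 0.5.2 is within [0.5.0, 0.7.0)
}
```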

I've just noticed `Failing after 136m — benchmark`. Not sure if it's related to this change or to one of the existing benchmarks.