
compactor: prefetch existing compacted user index files in parallel

afayngelerindbx opened this pull request 2 years ago

What this PR does / why we need it: Before this change, the existing compacted user index files are fetched while holding a table-wide lock, serializing the fetches.

I chose the approach of prefetching index files over attempting to change the locking pattern to allow parallel, on-demand fetches, because the complexity of parallelizing on-demand fetches outweighed the benefit of avoiding extra fetches. More specifically, this patch trades off fetching idle, non-compactable index files in favor of avoiding blocking during compactions. The extra fetches are parallelized and so should incur a negligible wall-clock penalty even for large numbers of tenants (e.g. 1k idle tenants at 50 threads and 500ms per fetch is ~10s).
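To make that back-of-the-envelope math concrete, here is a minimal sketch of the idea, assuming an errgroup-based worker pool; `fetchCompactedIndex`, the tenant names, and the 500ms latency are placeholders for illustration, not the actual compactor code:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/sync/errgroup"
)

// fetchCompactedIndex stands in for downloading one tenant's existing
// compacted index file from object storage (~500ms in the estimate above).
func fetchCompactedIndex(ctx context.Context, tenant string) error {
	select {
	case <-time.After(500 * time.Millisecond):
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

// prefetchIndexes fetches every tenant's index with at most `workers`
// concurrent downloads, so nothing later has to fetch while holding the
// table lock.
func prefetchIndexes(ctx context.Context, tenants []string, workers int) error {
	g, ctx := errgroup.WithContext(ctx)
	g.SetLimit(workers)
	for _, tenant := range tenants {
		tenant := tenant // capture loop variable (pre-Go 1.22)
		g.Go(func() error { return fetchCompactedIndex(ctx, tenant) })
	}
	return g.Wait()
}

func main() {
	tenants := make([]string, 1000) // e.g. 1k idle tenants
	for i := range tenants {
		tenants[i] = fmt.Sprintf("tenant-%d", i)
	}
	start := time.Now()
	if err := prefetchIndexes(context.Background(), tenants, 50); err != nil {
		panic(err)
	}
	// 1000 tenants / 50 workers * 500ms per fetch ≈ 10s wall clock.
	fmt.Println("prefetched in", time.Since(start))
}
```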

Which issue(s) this PR fixes: This should help with #6775, without completely resolving it.

Checklist: This change does not introduce any settings, metrics, etc. It should be transparent from a user's perspective, so I don't believe it merits documentation/changelog updates.

  • [x] Tests updated

afayngelerindbx avatar Jul 29 '22 13:07 afayngelerindbx

Ran this patch in our prod environment. Here are some observations:

Before prefetching indexes, we were seeing compactions take 8-10m, with empty compactions running in single-digit seconds. With the prefetch, we are running in <2m, with empty compactions taking <50s. We have ~600 tenants currently, so my estimate was definitely optimistic, but not that optimistic.

[image: graph of compaction durations]

afayngelerindbx avatar Jul 29 '22 20:07 afayngelerindbx

./tools/diff_coverage.sh ../loki-main/test_results.txt test_results.txt ingester,distributor,querier,querier/queryrange,iter,storage,chunkenc,logql,loki

Change in test coverage per package. Green indicates 0 or positive change, red indicates that test coverage for a package fell.

+           ingester	0%
+        distributor	0%
+            querier	0%
+ querier/queryrange	0%
+               iter	0%
+            storage	0%
+           chunkenc	0%
+              logql	0%
+               loki	0%

grafanabot avatar Jul 29 '22 20:07 grafanabot

Thanks for submitting a PR! This looks interesting. @sandeepsukhani What do you think?

MasslessParticle avatar Jul 29 '22 22:07 MasslessParticle

Hi @sandeepsukhani! Following up on our Slack thread, I've changed the patch to parallelize fetching only the tenants that appear in the uncompacted indexes. The reason I chose this approach over fetching seed indexes on demand is that I did not want to block multiple goroutines on fetching the same index (e.g. if streams are uniformly distributed, each ingester will produce a bucket containing every active tenant). The process much more closely resembles the TSDB compaction now.
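Roughly, the revised flow is: collect the tenant set referenced by the uncompacted index files, dedupe it, then fetch each seed index once in parallel. A hedged sketch of that shape, with made-up names (`uncompactedIndex`, `fetchSeedIndex` are illustrative, not the compactor's actual types):

```go
package compactorsketch

import (
	"context"

	"golang.org/x/sync/errgroup"
)

// uncompactedIndex stands in for an index file uploaded by one ingester.
// With uniformly distributed streams, every such file tends to reference
// every active tenant, which is why deduplication matters.
type uncompactedIndex struct {
	tenants []string
}

// fetchSeedIndex stands in for downloading a tenant's existing compacted
// ("seed") index from object storage.
func fetchSeedIndex(ctx context.Context, tenant string) error {
	return nil
}

// prefetchSeedIndexes fetches each referenced tenant's seed index exactly
// once, with bounded parallelism, before compaction starts.
func prefetchSeedIndexes(ctx context.Context, files []uncompactedIndex, workers int) error {
	seen := make(map[string]struct{})
	g, ctx := errgroup.WithContext(ctx)
	g.SetLimit(workers)
	for _, f := range files {
		for _, tenant := range f.tenants {
			if _, ok := seen[tenant]; ok {
				continue // this tenant's seed index is already being fetched
			}
			seen[tenant] = struct{}{}
			tenant := tenant // capture loop variable (pre-Go 1.22)
			g.Go(func() error { return fetchSeedIndex(ctx, tenant) })
		}
	}
	return g.Wait()
}
```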

I am planning to run this internally to make sure there aren't obvious issues. However, I wanted to get some early feedback here as well.

afayngelerindbx avatar Aug 05 '22 22:08 afayngelerindbx

Initial internal numbers with this patch have been fairly good: compactions went from ~5-6m to ~2-3m. Since I posted the previous graph, we added a good number of streams to our deployment, which slowed compactions down somewhat.

afayngelerindbx avatar Aug 08 '22 16:08 afayngelerindbx

./tools/diff_coverage.sh ../loki-main/test_results.txt test_results.txt ingester,distributor,querier,querier/queryrange,iter,storage,chunkenc,logql,loki

Change in test coverage per package. Green indicates 0 or positive change, red indicates that test coverage for a package fell.

+           ingester	0%
+        distributor	0%
+            querier	0%
+ querier/queryrange	0%
+               iter	0%
+            storage	0%
+           chunkenc	0%
+              logql	0.4%
+               loki	0%

grafanabot avatar Aug 22 '22 16:08 grafanabot

Hi @sandeepsukhani, this diff has been running in prod for us without issues for ~2 weeks, and the perf gains have been sustained. Please let me know if you have any feedback on this change. I'd love to get it merged to avoid running a forked compactor longer than we have to.

afayngelerindbx avatar Aug 22 '22 17:08 afayngelerindbx

./tools/diff_coverage.sh ../loki-main/test_results.txt test_results.txt ingester,distributor,querier,querier/queryrange,iter,storage,chunkenc,logql,loki

Change in test coverage per package. Green indicates 0 or positive change, red indicates that test coverage for a package fell.

+           ingester	0%
+        distributor	0%
+            querier	0%
+ querier/queryrange	0%
+               iter	0%
+            storage	0%
+           chunkenc	0%
+              logql	0%
+               loki	0%

grafanabot avatar Aug 27 '22 20:08 grafanabot

./tools/diff_coverage.sh ../loki-main/test_results.txt test_results.txt ingester,distributor,querier,querier/queryrange,iter,storage,chunkenc,logql,loki

Change in test coverage per package. Green indicates 0 or positive change, red indicates that test coverage for a package fell.

+           ingester	0%
+        distributor	0%
+            querier	0%
+ querier/queryrange	0%
+               iter	0%
+            storage	0%
+           chunkenc	0%
+              logql	0%
+               loki	0%

grafanabot avatar Aug 30 '22 10:08 grafanabot

./tools/diff_coverage.sh ../loki-main/test_results.txt test_results.txt ingester,distributor,querier,querier/queryrange,iter,storage,chunkenc,logql,loki

Change in test coverage per package. Green indicates 0 or positive change, red indicates that test coverage for a package fell.

+           ingester	0.1%
+        distributor	0%
+            querier	0%
+ querier/queryrange	0%
+               iter	0%
+            storage	0%
+           chunkenc	0%
+              logql	0%
+               loki	0%

grafanabot avatar Aug 30 '22 12:08 grafanabot

@afayngelerindbx I am not sure why the release pipeline is failing. Can you please try rebasing the PR on top of the latest main so that I can get this merged?

sandeepsukhani avatar Aug 30 '22 12:08 sandeepsukhani

./tools/diff_coverage.sh ../loki-main/test_results.txt test_results.txt ingester,distributor,querier,querier/queryrange,iter,storage,chunkenc,logql,loki

Change in test coverage per package. Green indicates 0 or positive change, red indicates that test coverage for a package fell.

+           ingester	0%
+        distributor	0%
+            querier	0%
+ querier/queryrange	0%
+               iter	0%
+            storage	0%
+           chunkenc	0%
+              logql	0%
+               loki	0%

grafanabot avatar Aug 30 '22 12:08 grafanabot

The release pipeline passed after the rebase, but the loki-mixin-check did not; not sure what to make of that.

afayngelerindbx avatar Aug 30 '22 12:08 afayngelerindbx

./tools/diff_coverage.sh ../loki-main/test_results.txt test_results.txt ingester,distributor,querier,querier/queryrange,iter,storage,chunkenc,logql,loki

Change in test coverage per package. Green indicates 0 or positive change, red indicates that test coverage for a package fell.

-           ingester	-0.1%
+        distributor	0%
+            querier	0%
+ querier/queryrange	0%
+               iter	0%
+            storage	0%
+           chunkenc	0%
+              logql	0%
+               loki	0%

grafanabot avatar Aug 30 '22 13:08 grafanabot

The release pipeline passed after the rebase, but the loki-mixin-check did not; not sure what to make of that.

I restarted it, and it worked. It is flaky and needs to be fixed.

sandeepsukhani avatar Aug 30 '22 13:08 sandeepsukhani