
[influxdb1.x/2.x]: purger code may prevent compactions from finishing

Open philjb opened this issue 9 months ago • 4 comments

Copied from this comment:

  • https://github.com/influxdata/influxdb/pull/26089#issuecomment-2701989707

Requires investigation to confirm the issues noted below.

  • The purger uses a Go map, which never releases its backing memory, so the map stays as large as the largest set of compaction input files it has ever tracked (which might be a lot for a full compaction). This is a minor issue and could be remedied by allocating a new map whenever len(p.files) == 0; a standalone sketch of the effect follows this list.
  • The purger seems to suffer from an issue similar to the one @gwossum identified in the retention service. It calls tsmfile.InUse() and then later tsmfile.Close() without blocking new readers in between, a time-of-check/time-of-use flaw. tsmfile.Close() waits for readers to finish (Unref()), so it has an unbounded runtime. The purger's lock is held across that Close() call, which means the lock can be held for an unbounded time. You might say that is acceptable because it happens in the purger's "purge" goroutine, but purger.Add(files) needs that lock too, and purger.Add() is called synchronously in FileStore.replace(), which means replace() also has an unbounded runtime. If I'm reading it right, all of this means that purging TSM files could freeze up compaction.
    • One could decouple the purger from compaction by calling it as go f.purger.add(inuse), which would be nice, but it would be better to use a sync.Map, and/or reduce how long the purger's lock is held, and/or move TSM file closing into a goroutine, and/or call SetNewReadersBlocked before the InUse check, as the retention service does. A sketch of the reduced-lock-scope shape follows below.
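
The map-growth point is easy to demonstrate in isolation. Below is a minimal, standalone Go sketch, not the actual purger code; the sizes and file names are illustrative. It shows that a map's backing storage survives deleting every entry, and that swapping in a fresh map once it is empty lets that storage be garbage collected:

```go
package main

import (
	"fmt"
	"runtime"
)

// heapMiB forces a GC and reports the current heap allocation in MiB.
func heapMiB() uint64 {
	runtime.GC()
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	return m.HeapAlloc >> 20
}

func main() {
	files := make(map[string]struct{})

	// Simulate a large compaction batch being tracked and then fully purged.
	for i := 0; i < 1_000_000; i++ {
		files[fmt.Sprintf("%08d.tsm", i)] = struct{}{}
	}
	for k := range files {
		delete(files, k)
	}
	// The map is empty, but its bucket array is still live and sized for
	// the largest contents it ever held.
	fmt.Printf("after deleting every entry:   %d MiB heap\n", heapMiB())

	// Remedy suggested above: allocate a new map when the old one is empty.
	if len(files) == 0 {
		files = make(map[string]struct{})
	}
	fmt.Printf("after reallocating empty map: %d MiB heap\n", heapMiB())
}
```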

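For the locking/TOCTOU point, one possible shape of the fix is sketched below. This is a hypothetical, self-contained mock, not InfluxDB code: the tsmFile interface is a stand-in for tsm1.TSMFile, and the exact signature of SetNewReadersBlocked is assumed. The idea is to block new readers before the InUse() check and to keep the potentially slow Close() calls outside the purger's mutex, so add() (and therefore FileStore.replace() and compaction) never waits on them:

```go
package main

import (
	"sync"
	"time"
)

// tsmFile is a hypothetical stand-in for tsm1.TSMFile; only the calls discussed
// above are modeled, and the SetNewReadersBlocked signature is an assumption.
type tsmFile interface {
	Path() string
	InUse() bool
	SetNewReadersBlocked(block bool)
	Close() error
}

// purger sketches the remedies from the bullets above: block new readers before
// the InUse check, close files outside the mutex, and reallocate the map once
// it is empty so its backing storage can be reclaimed.
type purger struct {
	mu    sync.Mutex
	files map[string]tsmFile
}

func newPurger() *purger {
	return &purger{files: make(map[string]tsmFile)}
}

// add is what FileStore.replace would call; it only ever holds the lock briefly.
func (p *purger) add(files []tsmFile) {
	p.mu.Lock()
	defer p.mu.Unlock()
	for _, f := range files {
		p.files[f.Path()] = f
	}
}

// purgeOnce decides what can be closed while holding the lock, then performs
// the potentially slow Close calls with the lock released.
func (p *purger) purgeOnce() (remaining int) {
	p.mu.Lock()
	var closable []tsmFile
	for path, f := range p.files {
		f.SetNewReadersBlocked(true) // close the InUse -> Close race window
		if f.InUse() {
			f.SetNewReadersBlocked(false) // still referenced; retry next pass
			continue
		}
		closable = append(closable, f)
		delete(p.files, path)
	}
	if len(p.files) == 0 {
		p.files = make(map[string]tsmFile) // drop the oversized backing array
	}
	remaining = len(p.files)
	p.mu.Unlock()

	// Close without holding the lock, so add() is never blocked behind us.
	for _, f := range closable {
		_ = f.Close()
	}
	return remaining
}

func (p *purger) purge() {
	for p.purgeOnce() > 0 {
		time.Sleep(time.Second) // wait for remaining readers to finish
	}
}

func main() { _ = newPurger() }
```
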
philjb avatar Mar 07 '25 17:03 philjb

Could such an issue result in something like the one logged here?

https://github.com/influxdata/influxdb/issues/25296

kjetilmjos avatar Mar 10 '25 14:03 kjetilmjos

Could such an issue result in something like the one logged here?

#25296

Not entirely sure without further exploration. Do you by chance have any profiles that I can run pprof on to get a better idea of the stack traces at that time? It would help the investigation to have some more reproducer profiles.

devanbenz avatar Mar 10 '25 14:03 devanbenz

Unfortunately not. What would be the best way of getting the profiles?

kjetilmjos avatar Mar 10 '25 14:03 kjetilmjos

I have gathered some of the profile traces, but they are not from when the problem occurred. The problem occurred on this particular server today at 00:00. Please have a look to see if you can find anything related. I will set up a script to capture this same output the next time it crashes.

cpu-profile.pb.gz goroutine-dump.txt heap-profile.pb.gz trace.pb.gz
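
For capturing this same set of outputs automatically next time, a minimal sketch is below. It assumes influxd exposes the standard Go net/http/pprof endpoints on its HTTP API port (localhost:8086 by default for 1.x); the base URL, timing parameters, and output file names are illustrative and should be adjusted to the server in question:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

// fetch downloads one pprof endpoint to a local file.
func fetch(base, path, out string) error {
	resp, err := http.Get(base + path)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("%s: unexpected status %s", path, resp.Status)
	}
	f, err := os.Create(out)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = io.Copy(f, resp.Body)
	return err
}

func main() {
	// Assumed base URL; point this at the influxd HTTP bind address.
	base := "http://localhost:8086"
	targets := []struct{ path, out string }{
		{"/debug/pprof/profile?seconds=30", "cpu-profile.pb.gz"},
		{"/debug/pprof/goroutine?debug=2", "goroutine-dump.txt"},
		{"/debug/pprof/heap", "heap-profile.pb.gz"},
		{"/debug/pprof/trace?seconds=10", "trace.pb.gz"},
	}
	for _, t := range targets {
		if err := fetch(base, t.path, t.out); err != nil {
			fmt.Fprintln(os.Stderr, "capture failed:", err)
		}
	}
}
```

The resulting files can then be opened with go tool pprof (CPU and heap profiles) and go tool trace (the trace), which should make the stack traces at the time of the hang easier to inspect.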

Cripyy avatar Mar 10 '25 15:03 Cripyy