vmstorage: improve cache usage for retention and downsampling filters
Is your feature request related to a problem? Please describe
Currently, a background merge with retention filters or downsampling filters enabled can significantly increase memory usage and lead to OOM issues.
Both retention and downsampling filters use the IndexSearch.searchMetricNameWithCache API to find metric names. This API caches inmemory blocks as it accesses different blocks: https://github.com/VictoriaMetrics/VictoriaMetrics/blob/e84c87750357a289a280acc59a93b44d9d57d646/lib/mergeset/part_search.go#L306-L314
In the case of downsampling and retention there can be a lot of metricIDs to check within a short period of time. This causes quick and uncontrollable growth of the cache size, which can result in OOM errors.
Describe the solution you'd like
It would be great to add an option that lets partSearch.getInmemoryBlock skip caching for certain types of searches, so that cache poisoning can be avoided in use cases that intentionally scan a large number of entries.
Describe alternatives you've considered
No response
Additional information
No response