Use variable-length strains to fix chunking-related issues
- Closes https://github.com/ppy/osu/issues/25254
Solves chunking-related issues by starting a new chunk for every object and allowing chunks to have different lengths. This is retrofitted onto the existing calculations and doesn't carry the wider ramifications that something like tr3s's continuous strains would.
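For anyone unfamiliar with the strain system, here's a rough sketch of what variable-length chunks mean in practice. This is not the PR's actual VariableLengthStrainSkill code; the class and member names below are made up for illustration:

```csharp
using System.Collections.Generic;

// Illustrative sketch only, not the real VariableLengthStrainSkill.
// Live StrainSkill cuts the map into fixed 400 ms sections and keeps one strain
// peak per section; here every object closes the current chunk and opens a new
// one, so chunk boundaries always land exactly on objects and chunk lengths
// simply follow the spacing of the map.
public class VariableLengthChunkSketch
{
    // One (length, strain) entry per object instead of one peak per fixed section.
    public readonly List<(double Length, double Strain)> Chunks = new List<(double, double)>();

    private double previousTime;

    public void Process(double objectTime, double strainAtObject)
    {
        // The chunk being closed is exactly as long as the gap since the previous
        // object, so an object can never straddle a chunk boundary.
        Chunks.Add((objectTime - previousTime, strainAtObject));
        previousTime = objectTime;
    }
}
```

The rest of the pipeline is left alone, which is what "retrofitted to existing calculations" refers to.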
The effect of this can be seen in the video below. In live pp, the map loses SR at certain points when the rate is increased; in the video, SR rises consistently with rate as expected.
https://github.com/user-attachments/assets/2d58946a-9e0c-4f6b-912a-71dfe75c0f8a
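To make the live behaviour concrete, here's a toy calculation (the numbers are made up, not taken from the map in the video) showing one way fixed 400 ms sections can cause this: two strain peaks that used to sit in neighbouring sections get squeezed into the same section by the rate increase, and only the larger one survives into the weighted sum.

```csharp
using System;

class FixedSectionArtifact
{
    static void Main()
    {
        const double decayWeight = 0.9;

        // 1.0x rate: peaks of 100 and 95 land in separate 400 ms sections,
        // so both enter the decaying weighted sum.
        double separateSections = 100 * 1.0 + 95 * decayWeight; // 185.5

        // Slightly faster rate: every individual strain is ~10% higher, but the two
        // objects now share one section, so only the larger peak is kept.
        double mergedSection = Math.Max(100 * 1.1, 95 * 1.1); // ~110

        Console.WriteLine($"separate sections: {separateSections:F1}, merged section: {mergedSection:F1}");
        // The sum drops despite every strain increasing, because a whole term fell
        // out of it. Per-object chunks can never lose a term this way.
    }
}
```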
A multiplier has been added inside DifficultyValue() to account for SR coming out slightly lower due to the more granular summation.
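For reference, roughly what that looks like. This is a sketch against the live StrainSkill weighted-sum shape; the multiplier's name and the 1.07 value below are placeholders for illustration, not the actual numbers in this PR:

```csharp
// Placeholder name and value, for illustration only.
private const double granular_sum_multiplier = 1.07;

public override double DifficultyValue()
{
    double difficulty = 0;
    double weight = 1;

    // Peaks are now one per object rather than one per fixed 400 ms section.
    foreach (double strain in GetCurrentStrainPeaks().OrderByDescending(d => d))
    {
        difficulty += strain * weight;
        weight *= DecayWeight;
    }

    // Compensate for the sum coming out slightly lower with the more granular
    // peaks, keeping values roughly in line with live.
    return difficulty * granular_sum_multiplier;
}
```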
When testing this rework in PerformanceCalculatorGUI, I recommend using this fork. I can't guarantee that the visuals it shows are correct, but they are definitely more correct than not.
Edit: Use this fork for testing now. The old one is now for testing SynchronizedVariableLengthStrainSkill.
!diffcalc RULESET=osu OSU_A=https://github.com/ppy/osu/tree/pp-dev OSU_B=https://github.com/ppy/osu/pull/33351 SCORE_PROCESSOR_A=https://github.com/tsunyoku/osu-queue-score-statistics/tree/add-hd-so-to-relevant-legacy-mods SCORE_PROCESSOR_B=https://github.com/tsunyoku/osu-queue-score-statistics/tree/add-hd-so-to-relevant-legacy-mods
Target: https://github.com/ppy/osu/pull/33351
Spreadsheet: https://docs.google.com/spreadsheets/d/1PKnPRpGQXTiRX9NEMvysHJaaSLduinJ4fVkq2U3f_Us/edit
@tsunyoku should I be worried about the massive wall of failed tests in OsuVariableLengthStrainSkill.cs? Edit: found the issue; yes, I needed to be worried.
Conflicts need to be resolved. Otherwise, I have a few things to say.
I'm still not sure how we can improve the comprehensibility of this change. Other than you and me, I'm not sure there's anyone who fully understands what this is doing and why it's doing it, and even though I've commented the shit out of it to explain what each line is doing, it doesn't really give any detail as to why we want to do that. I don't think I'm going to be able to do anything about that myself, so please try to explain the entire concept in layman's terms and put it in a comment, google doc or something similar, and I will try to adapt that. This is going to be needed for the newspost too if we want to get this in.
I don't like WeightedSumTimeOffset. I'd sooner remove any bandaids trying to make values remain 1:1 and just accept slight value shifts in exchange for making this future-proof. I don't see any reason to be so set on keeping things identical other than it being easier to understand right now. My opinion on this may change if removing the bandaid measures makes large differences that can't be offset with the global multiplier, but that would be my preferred way of dealing with this. Please try some things locally without any of the specific bandaid measures and report back what I should be expecting.
Replacing WeightedSumTimeOffset with a simple multiplier housed within DifficultyValue() makes essentially zero difference in values (the largest I saw was -5 on toromivana). I can do that if needed.
I know you don't want to add another multiplier, but there's no existing multiplier that can do this. Omitting the multiplier without a replacement causes profile values to drop significantly (-500 on my profile), and changing the multiplier in OsuRatingCalculator affects flashlight and the top weighted strains. Anything confined to DifficultyCalculator or RatingCalculator will affect the top weighted strains.
I will say, afaik the only functional difference between using a multiplier and just using the offset is that you need to calculate a good multiplier if you ever change DecayWeight (which catch does). It's not really a big deal, but if you need the multiplier to make the values right anyway, why not just use the offset?
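To illustrate that last point with a quick toy comparison (all numbers are made up; only the shape of the sum matches StrainSkill.DifficultyValue()): the factor needed to bring a more granular sum back in line with a coarser one shifts when DecayWeight shifts, so a hard-coded multiplier can't just be carried into a ruleset that overrides DecayWeight.

```csharp
using System;
using System.Linq;

class DecayWeightSensitivity
{
    // Same decaying weighted-sum shape as StrainSkill.DifficultyValue().
    static double WeightedSum(double[] peaks, double decayWeight)
    {
        double difficulty = 0, weight = 1;

        foreach (double strain in peaks.OrderByDescending(d => d))
        {
            difficulty += strain * weight;
            weight *= decayWeight;
        }

        return difficulty;
    }

    static void Main()
    {
        // Stand-ins for "coarse, per-section peaks" vs "finer, per-object peaks".
        double[] coarsePeaks = Enumerable.Repeat(100.0, 10).ToArray();
        double[] finePeaks = Enumerable.Repeat(70.0, 20).ToArray();

        foreach (double decayWeight in new[] { 0.90, 0.94 })
        {
            double multiplierNeeded = WeightedSum(coarsePeaks, decayWeight) / WeightedSum(finePeaks, decayWeight);
            Console.WriteLine($"DecayWeight {decayWeight}: multiplier needed = {multiplierNeeded:F3}");
        }

        // The required ratio differs between the two decay weights, so the
        // compensating multiplier has to be re-derived whenever DecayWeight changes.
    }
}
```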
I'm trying to prevent attempts at balancing this against current values from becoming a long-term stain on PP. Multiplier adjustment does a far better job of that, since multipliers get changed over time anyway.
PP doesn't need to be identical. The nature of this change is going to cause random deltas anyway. As long as things are within ± a few PP (other than intentionally larger diffs), it's not really worth stressing over.