Expertium
Another idea: we could ask Dae for a new dataset (again), but this time ask him to calculate retention for each user and make sure the dataset has the following...
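(To clarify what I mean by "retention for each user" — a minimal sketch; the revlog column names and the rating convention here are assumptions, with rating 1 = Again counted as a lapse:)

```python
import pandas as pd

# Hypothetical revlog: one row per review; rating 1 = "Again" (lapse),
# ratings 2-4 = successful recall, following Anki's convention.
revlog = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2],
    "rating":  [3, 1, 4, 3, 3],
})

# Per-user retention = fraction of that user's reviews that were
# successful recalls (rating > 1).
retention = revlog.groupby("user_id")["rating"].apply(lambda r: (r > 1).mean())
print(retention)
```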
That would just make the metrics worse without really telling us much. I think we need a new dataset, as described here: https://github.com/open-spaced-repetition/srs-benchmark/issues/166#issuecomment-2614417100 Then we can test whether the...
@1DWalker about this: https://github.com/open-spaced-repetition/srs-benchmark/issues/166#issuecomment-2614160163 On second thought, I'm not sure this is a good idea. It will just make FSRS oscillate between "good at retention=x%" and "bad at retention=y%"...
Well, never mind, this new dataset isn't nearly uniform enough despite my best efforts:
Users=1000
10th percentile=46.4%
20th percentile=52.6%
30th percentile=56.3%
40th percentile=58.8%
50th percentile=61.2%
60th percentile=63.8%
70th percentile=69.9%
80th percentile=80.0%...
Alright, I've made a very small 100-user dataset where retentions are distributed quite uniformly across users:
10th percentile=23.2%
20th percentile=31.9%
30th percentile=35.3%
40th percentile=40.0%
50th percentile=50.0%
60th percentile=60.0%
70th...
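(For the record, the selection idea is roughly this — a purely illustrative sketch with synthetic retentions, not the actual script: pick users whose retentions best match evenly spaced targets, so the subset ends up roughly uniform.)

```python
import numpy as np

# Synthetic per-user retentions in [0, 1]; real values would come from
# the full dataset. Beta(8, 3) is skewed toward high retention, roughly
# like real users.
rng = np.random.default_rng(42)
retentions = rng.beta(8, 3, size=5000)

# Greedily match 100 evenly spaced retention targets to the closest
# remaining user, so the chosen subset is ~uniform across the range.
targets = np.linspace(retentions.min(), retentions.max(), 100)
available = retentions.copy()
chosen = []
for t in targets:
    i = np.argmin(np.abs(available - t))
    chosen.append(available[i])
    available = np.delete(available, i)

chosen = np.sort(chosen)
print(np.percentile(chosen, [10, 30, 50, 70, 90]))
```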
Huh, interesting. Have you played around with this dataset? https://github.com/open-spaced-repetition/srs-benchmark/issues/166#issuecomment-2622748214
I meant just running algorithms on it
So yes, flatter curves are more optimal at higher retentions. I'd say let's keep the current value. We *could* run the optimizer multiple times, each time with a fixed value...
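(For context on what "flatter" means here — a sketch of the FSRS-style power forgetting curve, where a smaller |decay| makes the curve flatter; the factor is chosen so R(S, S) = 0.9 in either case. The exact constants are the published FSRS convention, but treat this as an illustration, not the benchmark code:)

```python
def retrievability(t: float, stability: float, decay: float = -0.5) -> float:
    # Power forgetting curve R(t, S) = (1 + factor * t / S) ** decay.
    # factor is chosen so that R(S, S) = 0.9 regardless of decay
    # (for decay = -0.5 this gives the familiar factor = 19/81).
    factor = 0.9 ** (1 / decay) - 1
    return (1 + factor * t / stability) ** decay

# At t = S, retrievability is 0.9 by construction.
print(retrievability(10, 10))
# A smaller |decay| means a flatter curve: lower retention at short
# intervals, but higher retention at long ones.
print(retrievability(40, 10, -0.5))
print(retrievability(40, 10, -0.2))
```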
@L-M-Sherlock can you share the numbers from the graph here? https://github.com/open-spaced-repetition/srs-benchmark/issues/166#issuecomment-2652751232
I don't see decay there