anki-optimization
The shape of the forgetting curve
Given the state of this repository, shall we use GitHub issues to host discussions on the subject? Ebbinghaus's forgetting curve followed a power function, but I think you have in mind exponential decay, as in Anki or SuperMemo, right?
> use github issues to host discussions on the subject
Yes.
> forgetting curve followed a power function
The forgetting curve was originally modeled with an exponential function: e^(-a*t).
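For concreteness, here is a minimal sketch of the two candidate shapes being debated. The decay parameters `a` and `b` are arbitrary values chosen only for illustration, not fitted to any data:

```python
import math

def exponential_retention(t: float, a: float = 0.1) -> float:
    """Exponential forgetting curve: R(t) = e^(-a*t)."""
    return math.exp(-a * t)

def power_retention(t: float, b: float = 0.5) -> float:
    """Power-law forgetting curve: R(t) = (1 + t)^(-b).
    The +1 offset keeps R(0) = 1, matching the exponential curve."""
    return (1.0 + t) ** (-b)

# The two curves look similar early on but diverge sharply later:
# the exponential collapses toward zero while the power law keeps
# a long, heavy tail.
for t in (1, 10, 100, 1000):
    print(f"t={t:>4}  exp={exponential_retention(t):.4f}  pow={power_retention(t):.4f}")
```

The heavy tail of the power law is exactly why the choice of shape matters for scheduling: it predicts far more residual memory at long intervals than the exponential does.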
In fact, I don't intend to reproduce either Anki's or SM's behavior (especially not SuperMemo's) at all; the resulting "optimal" forgetting function might not even have an expression in analytic form.
> I don't intend to reproduce either Anki's or SM's behavior (especially not supermemo) at all, the resulting "optimal" forgetting function might not even have an expression in analytic form.
Interesting! Still, I maintain my point on Ebbinghaus :)
I'd appreciate a source then.
Fair enough. The best source I can find is the original work. The trick is that Ebbinghaus never modeled retention per se, so he never said one or the other. But the power law is closer than the exponential law to the formula he suggested (and also to his data, but I lost track of the paper showing that). What he measured was the amount of work required to relearn his lists, and he proposed: b/v = k/(log t)^c, where:
- t is the time in minutes, counting from one minute before the end of the learning,
- b is the saving of work evident in relearning, i.e. the equivalent of the amount remembered from the first learning, expressed as a percentage of the time necessary for that first learning,
- c and k are two constants,
- v = 100 - v (expressed as a percentage).

That's where the equation stops: Ebbinghaus did not suggest a formula for retention. But if you express retention as v/100 (which he somewhat suggested as a rewording of v), you get
(100-p100)/p100 = k/(log t)^c, which boils down to a general form of p = K*log(t)^(-c) + K2. This is neither a power law nor an exponential law, yet the power law seems a bit closer: because of the log(t) component, decay is much slower in the long run than with an exponential law.
Concerning the data, I have read a paper that compared various forgetting curves and also fitted them to Ebbinghaus's data; the power law worked better. I am sorry I could not find this article again. But you have a plot of the dataset on the supermemo website: https://www.supermemo.com/pl/articles/history (more specifically: https://www.supermemo.com/smcom/articles/history1/images/6/64/Ebbinghaus_forgetting_curve_%281885%29%28power_regression%29.jpg)
brumar, I think you have a typo or two
should be: v = 100 - b instead of v = 100 - v
(b = success rate as percentage; v = failure rate as percentage)
(also assuming your 'p' is the probability of recall, not expressed as a percentage)
In that case, if b/v = k/log(t)^c then shouldn't your equation be: p100/(100 - p100) = k/log(t)^c
which naturally is 100p/(100*(1-p)) = k/log(t)^c, i.e. p/(1-p) = k/log(t)^c
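Solving the corrected relation p/(1-p) = k/(log t)^c for p gives p = k/(k + (log t)^c). A minimal sketch of that retention function, assuming log base 10 and t in minutes as in Ebbinghaus's setup; the constants k = 1.84 and c = 1.25 are the values commonly cited from the 1885 monograph and are used here only for illustration:

```python
import math

def retention(t_minutes: float, k: float = 1.84, c: float = 1.25) -> float:
    """Probability of recall from p/(1-p) = k/(log t)^c, solved for p:
        p = k / (k + (log10 t)^c)
    Valid for t >= 1 minute (so that log10 t >= 0)."""
    if t_minutes < 1:
        raise ValueError("t_minutes must be >= 1")
    return k / (k + math.log10(t_minutes) ** c)

# Retention decays very slowly compared to an exponential curve:
for t in (1, 60, 1440, 10080, 43200):  # 1 min, 1 hour, 1 day, 1 week, 30 days
    print(f"t={t:>6} min  p={retention(t):.3f}")
```

Note that this is the same algebraic form as the savings formula b = 100k/((log t)^c + k) usually attributed to Ebbinghaus himself, which supports the reading that the corrected ratio above is the intended one.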
Thanks, you are right!