Generalize loss interpolation
(original issue on GitLab)
opened by Anton Akhmerov (@anton-akhmerov) at 2018-07-02T14:17:22.226Z
With gitlab:#52, we gain more generality for user-provided loss functions. However, right now we always split the loss of the parent interval proportionally into the child intervals. This prevents the user from guaranteeing that intervals never become shorter than a certain size (e.g. machine precision) merely by redefining the loss.
I am not quite sure how we should address this though.
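For context, a minimal, self-contained sketch of the proportional splitting described above (not adaptive's actual implementation; the helper `split_loss` is invented purely for illustration):

```python
# Sketch of the current behaviour: when a parent interval is split, its
# loss is divided over the children proportionally to their width,
# before the children's real losses have been computed.

def split_loss(parent, parent_loss, x_new):
    """Split interval ``parent = (a, b)`` at ``x_new`` and distribute its loss."""
    a, b = parent
    children = [(a, x_new), (x_new, b)]
    parent_width = b - a
    # Each child inherits a share of the parent's loss proportional to its
    # width; the user-defined loss function is never consulted here, so it
    # cannot force the loss of arbitrarily small children to zero.
    return {
        (lo, hi): parent_loss * (hi - lo) / parent_width
        for (lo, hi) in children
    }


# Example: a parent interval with loss 1.0, split off-center.
print(split_loss((0.0, 1.0), 1.0, 0.25))
# {(0.0, 0.25): 0.25, (0.25, 1.0): 0.75}
```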
originally posted by Jorn Hoofwijk (@Jorn) at 2018-07-13T08:56:20.027Z on GitLab
Maybe add a separate threshold parameter to the learner, by default some small value, that indicates the minimal size of a simplex relative to the entire domain. Then, as soon as the volume of a simplex drops below this threshold, we stop splitting it, regardless of its loss.
Some simplices could still end up smaller than the threshold, but they won't become arbitrarily small.
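A rough sketch of what such a check could look like; the parameter name `min_volume_factor` and both helpers are hypothetical, not part of adaptive:

```python
import math
import numpy as np

MIN_VOLUME_FACTOR = 1e-12  # hypothetical default, relative to the domain volume


def simplex_volume(vertices):
    """Volume of a d-simplex given its (d+1) x d vertex array."""
    vertices = np.asarray(vertices, dtype=float)
    d = vertices.shape[1]
    return abs(np.linalg.det(vertices[1:] - vertices[0])) / math.factorial(d)


def may_split(simplex, domain_volume, min_volume_factor=MIN_VOLUME_FACTOR):
    """Refuse to subdivide simplices smaller than a fraction of the domain."""
    return simplex_volume(simplex) > min_volume_factor * domain_volume


# Example: a tiny triangle inside the unit square is never split again.
tiny = [(0.0, 0.0), (1e-7, 0.0), (0.0, 1e-7)]
print(may_split(tiny, domain_volume=1.0))  # False
```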
originally posted by Bas Nijholt (@basnijholt) at 2018-12-07T19:56:32.592Z on GitLab
Why can't one just set the loss to 0 for the interval that is "done"?
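A sketch of what that workaround might look like as a Learner1D-style custom loss; the `loss_per_interval` signature used here is an assumption, and `resolution_loss` / `min_width` are invented names:

```python
import numpy as np


def resolution_loss(xs, ys, min_width=1e-12):
    """Distance-based loss, clipped to 0 for intervals narrower than min_width."""
    dx = xs[1] - xs[0]
    if dx < min_width:
        return 0.0  # declare the interval "done"
    dy = ys[1] - ys[0]
    # Euclidean length of the segment, roughly like the default loss
    # (ignoring rescaling of the data).
    return np.hypot(dx, dy)


# learner = adaptive.Learner1D(f, bounds=(-1, 1), loss_per_interval=resolution_loss)
```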
originally posted by Anton Akhmerov (@anton-akhmerov) at 2018-12-07T20:17:13.051Z on GitLab
Are the learners guaranteed to ignore a loss of 0? Do we require or document anywhere that the loss is positive?
Also, higher-order interpolation schemes (e.g. cquad) would give loss estimates that vary within the interval, and linear interpolation isn't the correct thing to do then.