
Generalize loss interpolation

Open · basnijholt opened this issue 5 years ago • 3 comments

(original issue on GitLab)

opened by Anton Akhmerov (@anton-akhmerov) at 2018-07-02T14:17:22.226Z

With gitlab:#52, we obtain better universality for user-provided loss functions. However, right now we always split the loss of the parent interval proportionally among the child intervals. This prevents the user from guaranteeing that the intervals never become shorter than a certain size (e.g. machine precision) merely by redefining the loss.
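For concreteness, a minimal sketch of the proportional splitting described above; the function and the interval representation are illustrative, not adaptive's actual internals:

```python
def split_loss(parent_loss, parent, children):
    """Distribute `parent_loss` over `children` in proportion to length,
    so a child's loss can never exceed its parent's."""
    parent_size = parent[1] - parent[0]
    return [parent_loss * (b - a) / parent_size for a, b in children]

# Splitting (0, 1) at x = 0.25 with loss 2.0 gives child losses 0.5 and 1.5.
print(split_loss(2.0, (0, 1), [(0, 0.25), (0.25, 1)]))  # [0.5, 1.5]
```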

I am not quite sure how we should address this though.


originally posted by Jorn Hoofwijk (@Jorn) at 2018-07-13T08:56:20.027Z on GitLab

Maybe add a separate threshold parameter to the learner, defaulting to some small value, which indicates the minimal size of a simplex relative to the entire domain. Then, as soon as the volume of a simplex drops below this threshold, we do not split it anymore, regardless of the loss.

Some simplices could then end up smaller than the threshold, but they won't become arbitrarily small.
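A sketch of what such a guard could look like; the threshold value and the helper name are hypothetical:

```python
MIN_RELATIVE_VOLUME = 1e-12  # hypothetical default

def should_split(simplex_volume, domain_volume, threshold=MIN_RELATIVE_VOLUME):
    """Refuse to split a simplex once its volume, relative to the whole
    domain, drops below the threshold, regardless of its loss."""
    return simplex_volume / domain_volume >= threshold
```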


originally posted by Bas Nijholt (@basnijholt) at 2018-12-07T19:56:32.592Z on GitLab

Why can't one just set the loss to 0 for the interval that is "done"?
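For instance, a custom loss that declares intervals below machine precision "done" might look like this; the `(interval, scale, function_values)` signature is assumed from the `Learner1D` `loss_per_interval` hook of the time and may differ between adaptive versions:

```python
import numpy as np

def loss_with_cutoff(interval, scale, function_values):
    x_left, x_right = interval
    # Compare the interval width, relative to the x scale, to machine epsilon.
    if (x_right - x_left) / scale[0] < np.finfo(float).eps:
        return 0.0  # "done": a zero-loss interval should never be picked
    # Otherwise a default-style Euclidean loss on scaled coordinates.
    dx = (x_right - x_left) / scale[0]
    dy = ((function_values[x_right] - function_values[x_left]) / scale[1]
          if scale[1] else 0.0)
    return np.hypot(dx, dy)

# learner = adaptive.Learner1D(f, bounds=(0, 1),
#                              loss_per_interval=loss_with_cutoff)
```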


originally posted by Anton Akhmerov (@anton-akhmerov) at 2018-12-07T20:17:13.051Z on GitLab

Are the learners guaranteed to ignore a loss of 0? Do we require or document anywhere that the loss must be positive?

Also, higher-order interpolation schemes (e.g. cquad) would give loss estimates that vary within the interval, and linear interpolation of the loss isn't the correct thing to do then.
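A toy illustration of this point (not cquad itself; here the maximum deviation of f(x) = x³ from its chord stands in for a higher-order error estimate): splitting the parent's loss proportionally to length would assign each half-interval half the loss, while the actual estimates on the children are far from proportional:

```python
import numpy as np

def chord_error(f, a, b, n=1001):
    """Max deviation of f from its chord on [a, b]; a stand-in for a
    higher-order local error estimate."""
    x = np.linspace(a, b, n)
    chord = f(a) + (f(b) - f(a)) * (x - a) / (b - a)
    return np.abs(f(x) - chord).max()

def f(x):
    return x**3

print(chord_error(f, 0, 1))    # ~0.385 (parent); proportional split: 0.1925 each
print(chord_error(f, 0, 0.5))  # ~0.048
print(chord_error(f, 0.5, 1))  # ~0.141
```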
