
Document and test loss function signatures

Open · basnijholt opened this issue 6 years ago • 4 comments

(original issue on GitLab)

opened by Anton Akhmerov (@anton-akhmerov) at 2018-07-23T19:06:55.212Z

A loss function is a significant part of each learner's interface. It gives users nearly unlimited ways to customize the learner's behavior, and it is the main way for them to do so.

As a consequence I believe we need to do the following:

  • Each learner that allows a custom loss function must specify the detailed call signature of this function in the docstring.
  • We should test whether a learner provides the correct input to the loss function. For example, if we say that Learner2D passes an interpolation instance to the loss, we should run Learner2D with a loss that verifies that its input is indeed an interpolation instance. We did not realize this before, but the loss function is part of the learner's public API.
  • All loss functions that we provide should be factory functions that return a loss function whose call signature conforms to the spec (see the sketch after this list). For example, learner2D.resolution_loss(ip, min_distance=0, max_distance=1) does not conform to the spec and is not directly reusable; instead it should have been functools.partial(learner2D.resolution_loss, min_distance=0, max_distance=1).
  • We should convert all our loss functions that have arbitrary hard-coded parameters into such factory functions, and we should test their conformance to the spec.
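
A minimal sketch of the factory pattern described above (not necessarily the actual adaptive implementation): the helpers areas and default_loss, the factory name resolution_loss_function, and the loss_per_triangle keyword are assumptions based on how Learner2D is described in this thread, namely that the loss receives a single interpolator instance.

```python
import numpy as np

# Assumed helpers; the exact names and locations are not guaranteed.
from adaptive.learner.learner2D import areas, default_loss


def resolution_loss_function(min_distance=0, max_distance=1):
    """Factory: bake the parameters in here, so that the returned
    function has the single-argument signature loss(ip) that
    Learner2D expects."""

    def resolution_loss(ip):
        # ip is assumed to be a scipy LinearNDInterpolator built on
        # the points evaluated so far.
        loss = default_loss(ip)
        A = areas(ip)
        # Stop refining triangles that are already small enough ...
        loss[A < min_distance**2] = 0
        # ... and always refine triangles that are still too large.
        loss[A > max_distance**2] = np.inf
        return loss

    return resolution_loss


# The *returned* function is what gets passed to the learner, e.g.
# Learner2D(f, bounds=[(-1, 1), (-1, 1)],
#           loss_per_triangle=resolution_loss_function(max_distance=0.1))
```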


originally posted by Anton Akhmerov (@anton-akhmerov) at 2018-11-21T20:54:13.437Z on GitLab

Also, we probably shouldn't name the factory functions for loss functions get_XXX_loss.


originally posted by Bas Nijholt (@basnijholt) at 2018-12-07T19:21:26.066Z on GitLab

@anton-akhmerov I think we addressed these points (except the second one) recently.

I don't really understand what you mean by

  • We should test whether a learner provides the correct input to the loss function. For example, if we say that Learner2D passes an interpolation instance to the loss, we should run Learner2D with a loss that verifies that its input is indeed an interpolation instance. We did not realize this before, but the loss function is part of the learner's public API.

Should we just check the data type? Is that what you mean? If so, why would this be useful?


originally posted by Anton Akhmerov (@anton-akhmerov) at 2018-12-07T20:31:19.691Z on GitLab

I think we addressed these points (except the second one) recently.

I cannot confirm that learners clearly document the loss format.

  • [ ] Learner1D
  • [ ] Learner2D. I may be overly nitpicky here, but the description seems rather vague. Also, I think it should go into the Parameters section, not the Notes.
  • [ ] LearnerND

Did I miss any learner with customizable loss?


originally posted by Anton Akhmerov (@anton-akhmerov) at 2018-12-07T20:32:56.682Z on GitLab

Should we just check the data type? Is that what you mean? If so, why would this be useful?

I think that makes sense for the purpose of API stability.
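
To make the type check concrete, here is a rough sketch of such a test. The LinearNDInterpolator type, the loss_per_triangle keyword, and adaptive.runner.simple are assumptions about the API, not a verified adaptive test.

```python
import scipy.interpolate

import adaptive
from adaptive.learner.learner2D import default_loss  # assumed helper


def test_learner2d_passes_interpolator_to_loss():
    """Learner2D should pass a scipy LinearNDInterpolator to the loss,
    since that type is part of its public API."""
    calls = []

    def type_checking_loss(ip):
        calls.append(type(ip))
        assert isinstance(ip, scipy.interpolate.LinearNDInterpolator)
        return default_loss(ip)

    learner = adaptive.Learner2D(
        lambda xy: xy[0] * xy[1],
        bounds=[(-1, 1), (-1, 1)],
        loss_per_triangle=type_checking_loss,
    )
    # Run long enough that the custom loss is actually exercised.
    adaptive.runner.simple(learner, goal=lambda l: l.npoints >= 50)
    assert calls, "the loss function was never called"
```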
