adaptive
Consider allowing current learners to work with functions defined on integers
(original issue on GitLab)
opened by Anton Akhmerov (@anton-akhmerov) at 2018-07-02T14:11:30.824Z
I'm not actually sure how to do it, but for instance I could imagine studying how things depend on the system size, which is a finite number (but large) of the unit cells.
originally posted by Anton Akhmerov (@anton-akhmerov) at 2018-08-23T11:46:41.743Z on GitLab
For the current learners (1D, 2D, ND) this would amount to two changes:
- Changing how a point is selected in an interval or simplex: we'd find all integer points that belong to the interval or simplex and then select the one closest to the point a continuum algorithm would choose.
- Updating the loss to be zero if there are no available points inside an interval.
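For the 1D case, the two changes above could look roughly like the following. This is a self-contained sketch, not the actual `adaptive` API; `choose_point` and `interval_loss` are hypothetical helpers.

```python
import math


def choose_point(a, b, evaluated):
    """Pick the integer strictly inside (a, b) closest to the midpoint a
    continuum learner would choose, skipping already-evaluated points.
    Returns None if no integers are available in the interval."""
    midpoint = (a + b) / 2
    candidates = [x for x in range(math.floor(a) + 1, math.ceil(b))
                  if x not in evaluated]
    if not candidates:
        return None
    return min(candidates, key=lambda x: abs(x - midpoint))


def interval_loss(a, b, evaluated, continuum_loss):
    """Loss of an interval: zero if it contains no available integers,
    otherwise whatever the continuum loss function assigns."""
    if choose_point(a, b, evaluated) is None:
        return 0.0
    return continuum_loss(a, b)
```

With this, an interval whose interior integers are exhausted gets zero loss and is never subdivided again, while fresh intervals behave like the continuum learner's.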
originally posted by Piotr (@Benedysiuk) at 2018-08-23T14:42:27.627Z on GitLab
When choosing the point there are some subtleties in higher (integer) dimensions. Most likely one needs branch-and-bound methods, and those guarantee an optimal choice only if the objective function (loss) depends linearly on the variables.
originally posted by Anton Akhmerov (@anton-akhmerov) at 2018-08-23T16:02:57.980Z on GitLab
Ah, but that's mitigated by the current learners having generally poor performance in higher dimensions anyway, due to the curse of dimensionality.
We could extend this to functions defined on lattices in R^N. That way we could describe, for example, experiments where the measurement apparatus imposes a minimum granularity.
I realise we could do the same thing with functions defined on integers + a scaling, but allowing functions defined on lattices seems neater.
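The "integers + a scaling" reduction amounts to an affine map from integer indices to lattice points; a minimal sketch, with an illustrative wrapper name `on_lattice` (not part of `adaptive`):

```python
import numpy as np


def on_lattice(f, origin, basis):
    """Wrap a function f defined on lattice points origin + basis @ n,
    where n is a vector of integers, so that a learner only ever sees
    integer coordinates.  The columns of `basis` are the lattice vectors."""
    origin = np.asarray(origin, dtype=float)
    basis = np.asarray(basis, dtype=float)

    def g(n):
        # Map the integer index vector n to its point on the lattice.
        return f(origin + basis @ np.asarray(n, dtype=float))

    return g


# Example: a square lattice with spacing 0.1 in both directions.
g = on_lattice(lambda x: x.sum(),
               origin=[0.0, 0.0],
               basis=[[0.1, 0.0], [0.0, 0.1]])
```

A learner working on integer vectors n would then adaptively sample `g`, while the user's function `f` still receives real-space coordinates.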