WIP: unify Learner1D, Learner2D and LearnerND
While writing the adaptive paper we managed to write down a simple algorithm formulated in terms of abstract domains and subdomains.
Implementing such an algorithm in a learner should allow us to unify Learner1D, Learner2D and LearnerND.
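For context, here is a minimal sketch of that abstract algorithm. The `Domain` methods used (`subdomains`, `insert_point`, `split`) are hypothetical names chosen for illustration, not the API in this MR:

```python
# Sketch of the domain/subdomain loop; all names here are illustrative.
def run(f, domain, loss, n_points):
    data = {}     # point -> value
    queue = {}    # subdomain -> loss (a real implementation uses a priority queue)
    for subdomain in domain.subdomains():
        queue[subdomain] = float("inf")   # loss unknown: sample these first

    while len(data) < n_points:
        worst = max(queue, key=queue.get)   # subdomain with the largest loss
        x = domain.insert_point(worst)      # choose a new point inside it
        data[x] = f(x)
        del queue[worst]                    # splitting invalidates its loss
        for sub in domain.split(worst, x):
            queue[sub] = loss(sub, data)
    return data
```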
The following are not yet supported:
- 2D and 3D domains
- BalancingLearner (requires removing points from the interior of domains)
Compared to the existing implementation, global rescaling is also missing (e.g. computing the y-scale of the values and normalizing the data by it).
Should this perhaps be something that the loss function does itself? For example, the isosurface loss needs the unmodified data. I could imagine extending the loss function signature to include the x and y scales.
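One possible shape for such a signature, sketched here with invented parameter names and a 1D example loss (a suggestion, not the current interface):

```python
import math

def scaled_loss(subdomain, data, x_scale, y_scale):
    # Hypothetical signature: the learner passes the global scales explicitly,
    # so losses that need the raw data (e.g. an isosurface loss) can ignore them.
    a, b = subdomain                  # a 1D subdomain, for illustration
    dx = (b - a) / x_scale
    dy = (data[b] - data[a]) / y_scale
    return math.hypot(dx, dy)         # length of the scaled segment
```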
Indeed, but then the learner needs two extra hooks: one at each step to update the global metrics, and another to trigger loss recomputation for all subdomains once the global metrics change enough that the old losses become obsolete.
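A rough sketch of where such hooks could live; all names and the 1.1 threshold are invented for illustration, this is not the code in this MR:

```python
class _ScaleAwareLearner:
    # Illustrative only: shows where the two hooks would sit.
    def tell(self, x, y):
        self.data[x] = y
        old_scale = self.y_scale
        self.y_scale = max(self.y_scale, abs(y))          # hook 1: update global metrics
        if old_scale and self.y_scale / old_scale > 1.1:  # changed "sufficiently much"
            self._recompute_all_losses()                  # hook 2: stale losses recomputed
        else:
            self._update_losses_near(x)                   # otherwise only local updates
```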
I added this now.
TODO
- [x] don't evaluate boundary points in `__init__`
- [ ] revisit the loss function signature
- [x] add tests
All other learners implement `pending_points`, which is a set. Would that change anything? Now I see you set `self.data[x] = None`.
I'm using `pending_points` now.
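For reference, a stripped-down illustration of the `pending_points` pattern (simplified; `_choose_points` is a hypothetical helper, not an actual method of the learners):

```python
class _Learner:
    # Simplified illustration: a set of pending points instead of
    # None sentinels stored in self.data.
    def ask(self, n):
        points = self._choose_points(n)       # hypothetical point-selection helper
        self.pending_points.update(points)    # cheap membership tests, no placeholders in data
        return points

    def tell(self, x, y):
        self.pending_points.discard(x)        # no longer pending once we have a value
        self.data[x] = y
```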
Where should we put the new LearnerND? Or maybe we should call it something different?
I say, overwrite the other LearnerND.
Does everything already work?
I added the new learner to all the learner tests except the following:
- `test_uniform_sampling`: this test is marked xfail anyway
- `test_point_adding_order_is_irrelevant`: this is marked xfail for Learner2D and LearnerND anyway, and I'm not sure whether this behaviour should be satisfied. If we add points in a different order I think we can in principle end up with different triangulations (?)
The new learner is marked xfail on the following tests:
- `test_learner_performance_is_invariant_under_scaling`: currently none of the loss functions scale the points (because I was lazy when implementing), but they probably should
Now I believe the only 3 things left to do are:
- [ ] decide on the loss function signature (at the moment the loss function gets all the data, etc.)
- [ ] implement scaling within the loss functions
- [ ] decide whether to overwrite the existing LearnerND with this one (Bas says yes)
There's also the question of how to review this MR. It's a lot of code.
It may also be that this implementation is inferior to the current LearnerND, and we don't want to merge it at all. For example, the old LearnerND is 1180 lines, whereas this new implementation is 1350 (including the priority queue, domain definitions, etc.).
Re: loss function format
In ND we can pass the following data:
- The original simplex coordinates
- An array of the following tuples:
  - Coordinates of the point being replaced
  - Index of the simplex in which a point is replaced
  - Number of the point being replaced
In 1D we probably should adopt the naturally ordered format.
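To make the ND variant concrete, here is a sketch of what a signature carrying that data could look like; the parameter names and the placeholder volume loss are illustrative only, not a proposal for the final interface:

```python
import math
import numpy as np

def nd_loss(simplex, replacements):
    # Hypothetical signature carrying the data listed above:
    #   simplex      - coordinates of the original simplex's vertices
    #   replacements - array of (point, simplex_index, vertex_number) tuples
    #                  describing which point replaces which vertex of which simplex
    vertices = np.asarray(simplex, dtype=float)
    dim = vertices.shape[1]
    # Placeholder: use the simplex volume as the loss, ignoring the replacements.
    return abs(np.linalg.det(vertices[1:] - vertices[0])) / math.factorial(dim)
```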