astartes
[FEATURE]: Morais-Lima-Martin (MLM) Sampler
Random-mutation variant of the Kennard-Stone algorithm: article link
The reference implementation is unfortunately available only as MATLAB p-files (encrypted).
Thanks for sharing this paper! To summarize our preliminary discussion: my initial impression is that there is no reason to prefer this method over Kennard-Stone (KS) or random sampling (RS):
- KS enforces interpolation: it places the points furthest away from each other in the X space into the training set, so the testing set is entirely contained within the space of the training set. However, this rigorous enforcement comes at the cost of scaling as O(N^2), which is simply the cost of computing a pairwise distance matrix (see the minimal sketch after this list). It's up to users to decide if this cost is worth it.
- RS often results in similar interpolative splits, since random sampling tends to give the training and testing sets similar distributions, especially if the original dataset is large enough. Importantly, its computational cost is dramatically lower.
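For concreteness, here is a minimal numpy sketch of the classic KS max-min selection loop (illustrative only, not astartes's actual implementation); the up-front pairwise distance matrix is where the O(N^2) cost comes from:

```python
import numpy as np

def kennard_stone_sketch(X, n_train):
    """Minimal Kennard-Stone: greedily add the point furthest from
    everything selected so far (max-min criterion). Illustrative only."""
    # O(N^2) time and memory: the full pairwise distance matrix.
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

    # Seed with the two most distant points in the dataset.
    selected = list(np.unravel_index(np.argmax(dist), dist.shape))

    while len(selected) < n_train:
        # Distance from each candidate to its nearest selected point...
        min_dist = dist[:, selected].min(axis=1)
        min_dist[selected] = -np.inf  # never re-pick a selected point
        # ...and pick the candidate that maximizes that distance.
        selected.append(int(np.argmax(min_dist)))

    train_idx = np.array(selected)
    test_idx = np.setdiff1d(np.arange(len(X)), train_idx)
    return train_idx, test_idx
```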
Conceptually, I’m currently not convinced that blending them offers any advantage. After all, why would we eat the O(N^2) cost to enforce interpolation but then use random splitting to undo that rigorous assignment? If the hypothesis is that MLM leads to better testing set performance, that seems at odds with the results presented in this paper: Table 1 shows that MLM doesn’t help for datasets 2, 3, 4, and 5, and the same is often true in Table 2. Reporting the mean ± standard deviation from the cross-validation would also have helped in interpreting these results.
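To make that objection concrete: my reading of MLM (which I can't verify against the encrypted p-files) is roughly a KS split followed by randomly swapping a fraction of the train/test assignments. A hypothetical sketch, reusing `kennard_stone_sketch` from above; the `mutation_rate` name and the exact swap scheme are my guesses, not the paper's:

```python
import numpy as np

def mlm_sketch(X, n_train, mutation_rate=0.1, seed=None):
    """Hypothetical MLM: a KS split followed by random swaps of
    train/test assignments. Based on my reading of the paper only;
    `mutation_rate` is a placeholder name, not from the paper."""
    rng = np.random.default_rng(seed)
    # Pay the full O(N^2) cost for the rigorous split...
    train_idx, test_idx = kennard_stone_sketch(X, n_train)

    # ...then partially undo it: swap a random fraction of assignments.
    n_swap = int(mutation_rate * min(len(train_idx), len(test_idx)))
    out = rng.choice(len(train_idx), size=n_swap, replace=False)
    into = rng.choice(len(test_idx), size=n_swap, replace=False)
    train_idx[out], test_idx[into] = test_idx[into], train_idx[out]
    return train_idx, test_idx
```

If that reading is right, every swap moves a deliberately chosen extreme point out of the training set, eroding exactly the interpolation guarantee that motivated paying the O(N^2) cost.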
It would be interesting to read this paper more closely to understand the method better. It would also be useful to perform additional analysis with their code and datasets. For example, it would be interesting to compare their implementation to two function calls from astartes: first call KS, then pass the resulting indices to RS (sketched below).
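Something along these lines, sketched from memory; the exact astartes kwargs (`return_indices` in particular) and the return ordering should be checked against the docs before relying on this:

```python
import numpy as np
from astartes import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 5))  # toy data for illustration only

# Call 1: KS gives the deterministic max-min split.
# (I believe train_test_split accepts return_indices=True and then
# returns index arrays -- verify against the astartes docs.)
ks_train_idx, ks_test_idx = train_test_split(
    X, sampler="kennard_stone", train_size=0.75, return_indices=True
)

# Call 2: RS over the KS training pool randomly demotes a fraction
# of it into the test set, emulating MLM's mutation step.
keep, demote = train_test_split(
    X[ks_train_idx], sampler="random", train_size=0.9, return_indices=True
)
mlm_train_idx = ks_train_idx[keep]
mlm_test_idx = np.concatenate([ks_test_idx, ks_train_idx[demote]])
```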