deep-symbolic-optimization
Might be worth mentioning SRBench results
https://github.com/cavalab/srbench#benchmarked-methods
From an initial glance, DSR seems to do pretty well: https://arxiv.org/abs/2107.14351
This is a great idea, thanks! We should add this to the README. Our updated method for the symbolic regression task, called uDSR and just published at NeurIPS 2022 (code update to follow), tops SRBench by a large margin: https://openreview.net/forum?id=2FNnBhwJsHK
We updated our repo and now include those results in our README :)
How close is this release to what was used in the benchmark and the NeurIPS 2022 paper? For example, I see the linear token, but does the code use AI Feynman to create subproblems?
@tluchko Good point; unfortunately, we weren't able to include the AIF or LSPT components in this release. If you look at the ablations in our paper, though, it hardly makes a difference once you have enough components. You're probably better off running DSR + GP + LINEAR (all of which are included in this release) for longer.
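For anyone landing here who wants to try that combination, here is a rough, untested sketch of what such a run could look like through the scikit-learn-style wrapper in this repo. The config keys used below (`gp_meld`, the `poly` token standing in for the linear component, `n_samples`) are assumptions on my part and may not match the exact spelling in the released example configs, so please treat it as a starting point and check the repo's config files for the real key names.

```python
import numpy as np
from dso import DeepSymbolicRegressor  # scikit-learn-style wrapper shipped with the repo

# Toy regression data standing in for a real benchmark problem.
X = np.random.uniform(-1, 1, size=(200, 2))
y = np.sin(X[:, 0]) + X[:, 1] ** 2

# NOTE: the key names below ("gp_meld", "poly", "n_samples") are guesses based on
# the example JSON configs; verify them against the configs in this release.
config = {
    "task": {
        "task_type": "regression",
        # Include the polynomial/linear token alongside the usual operators.
        "function_set": ["add", "sub", "mul", "div", "sin", "cos", "exp", "log", "poly"],
    },
    # Turn on the GP component so GP-refined programs are melded into training.
    "gp_meld": {"run_gp_meld": True},
    # "Running for longer" mostly means raising the expression budget.
    "training": {"n_samples": 2000000},
}

model = DeepSymbolicRegressor(config)
model.fit(X, y)
print(model.program_.pretty())  # best symbolic expression found
```

The same kind of run can also be launched from a JSON config via the repo's command-line entry point instead of the Python wrapper; the point of the sketch is just that the linear token and the GP component are both enabled in one config and the sample budget is increased.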