Fix: TabPFNRegressor preprocessing fails on larger datasets
This PR fixes #169
Changes made: capped the number of quantiles with n_quantiles=min(n_quantiles, 10_000) in TabPFN/src/tabpfn/model/preprocessing.py
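A minimal sketch of the kind of cap the PR describes, using scikit-learn's `QuantileTransformer` directly (the helper name `make_quantile_transformer` is an illustration, not the actual TabPFN code):

```python
import numpy as np
from sklearn.preprocessing import QuantileTransformer

def make_quantile_transformer(n_samples: int, cap: int = 10_000) -> QuantileTransformer:
    # Capping n_quantiles bounds fit time/memory on large datasets and
    # avoids requesting more quantiles than there are samples.
    n_quantiles = min(n_samples, cap)
    return QuantileTransformer(n_quantiles=n_quantiles, output_distribution="normal")

# Dataset larger than the cap: previously this configuration could fail or warn.
X = np.random.default_rng(0).normal(size=(25_000, 3))
qt = make_quantile_transformer(X.shape[0])
Xt = qt.fit_transform(X)
```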
Thanks so much for this change! Would you be able to add a test for it, i.e. one that checks that the preprocessing runs on datasets of > 10,000 samples? Unfortunately we can't run the inference step, as it would of course be way too slow. The only way to test inference on large datasets would be to provide a tiny TabPFN checkpoint, a very small, randomly initialized model, but that would be a project in itself.
@noahho I have added test_preprocessing.py, please suggest changes if any
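A sketch of what such a test could look like, written against scikit-learn's `QuantileTransformer` directly since the actual TabPFN preprocessing entry point isn't shown in this thread (the test name and setup are assumptions):

```python
import numpy as np
from sklearn.preprocessing import QuantileTransformer

def test_quantile_preprocessing_large_dataset():
    # More samples than the 10,000-quantile cap from the fix.
    rng = np.random.default_rng(42)
    n_samples = 20_000
    X = rng.normal(size=(n_samples, 5))

    # The capped configuration should fit and transform without errors.
    n_quantiles = min(n_samples, 10_000)
    qt = QuantileTransformer(n_quantiles=n_quantiles)
    Xt = qt.fit_transform(X)

    assert Xt.shape == X.shape
    assert np.isfinite(Xt).all()
```

As the maintainer notes above, the test stops at preprocessing and never runs model inference.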
Great, this looks really good. There seems to be a tiny ruff issue at this point. Do you know how to resolve it? "ruff check . --fix" with ruff version 0.8.6
Oh, also something that Copilot just caught: the 'quantile_uni_coarse' transformer now caps n_quantiles to 10,000, yet the 'quantile_uni' transformer remains uncapped.
I will fix it now
@noahho I ran "ruff check . --fix", but the ruff linting test is still failing
The two open ones don't seem to be automatically fixable:
src/tabpfn/regressor.py:723:89: E501 Line too long (89 > 88)
tests/test_preprocessing.py:12:9: NPY002 Replace legacy np.random.rand call with np.random.Generator
An LLM will know how to fix number 2, and by deleting a character at src/tabpfn/regressor.py:723:89 you fix number 1
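For the NPY002 warning, the fix is to switch from the legacy global random functions to NumPy's `Generator` API. A sketch (the array shape is illustrative):

```python
import numpy as np

# Legacy call flagged by ruff's NPY002 rule:
# X = np.random.rand(20_000, 5)

# Modern equivalent using the Generator API; seeding keeps the test deterministic.
rng = np.random.default_rng(seed=0)
X = rng.random((20_000, 5))  # uniform floats in [0, 1), same as np.random.rand
```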
@noahho Please review
Thanks a lot for continuing to work on this. It seems a few of the changes made for the linting weren't right (such as adding ""). I'll look into the PR and fix those things, if you'd like.
Yes, sure
Any update on this PR? Is #169 already solved?