Optimox
A paint brush would be very helpful!
Hello @szilard, nice work on the benchmark! Here is a list of thoughts/questions I had while reading all your comments: - first of all, TabNet is definitely slower than XGBoost,...
@szilard did you change the scheduler when reducing the number of epochs? Also, I suspect this will be more beneficial with the bigger dataset. With a low number of epochs you can...
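For reference, a minimal sketch (not from the original thread) of how a learning-rate scheduler is wired into pytorch-tabnet; the StepLR values below are placeholders, and the point is that `scheduler_params` tuned for a long run should be rescaled when `max_epochs` is reduced:

```python
import torch
from pytorch_tabnet.tab_model import TabNetClassifier

clf = TabNetClassifier(
    optimizer_fn=torch.optim.Adam,
    optimizer_params=dict(lr=2e-2),
    scheduler_fn=torch.optim.lr_scheduler.StepLR,
    # values tuned for a long run; with fewer epochs, shorten the decay
    # schedule too (e.g. step_size=10 for 100 epochs -> step_size=2 for 20)
    scheduler_params=dict(step_size=10, gamma=0.9),
)
```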
@SquareGraph We made a release yesterday with potentially breaking changes: we went from v3.x to v4.x. Can you make sure that your code works with the previous tabnet version, 3.1.1...
The breaking change we made is the addition of a `warm_start` option, so that the library now follows the scikit-learn convention. So if your pipeline used to have multiple consecutive call...
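A hedged sketch of what that convention change means in practice; `warm_start` is assumed here to be a `fit()` argument (check the v4.x release notes for its exact location), and the data is random placeholder input:

```python
import numpy as np
from pytorch_tabnet.tab_model import TabNetClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(512, 10)).astype(np.float32)
y_train = rng.integers(0, 2, size=512)

clf = TabNetClassifier()
clf.fit(X_train, y_train, max_epochs=5)
# In v3.x a second call continued from the current weights; in v4.x it
# restarts from scratch unless warm_start is set (scikit-learn convention).
clf.fit(X_train, y_train, max_epochs=5, warm_start=True)
```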
@SquareGraph what do you mean by "The problem also occurs if I generate random data in a new notebook."? How can you spot that training is not working with...
@SquareGraph please reopen once you have more information to share about the new behavior of your code.
Can you share a minimal reproducible example? With just random data as input, but enough to show when the error happens and what the different sizes of everything...
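Something along these lines is what such a minimal reproducible example could look like; the shapes, seed, and epoch count are placeholders to be adapted to the failing case:

```python
import numpy as np
from pytorch_tabnet.tab_model import TabNetClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 20)).astype(np.float32)
y_train = rng.integers(0, 2, size=1000)
X_valid = rng.normal(size=(200, 20)).astype(np.float32)
y_valid = rng.integers(0, 2, size=200)

clf = TabNetClassifier()
clf.fit(X_train, y_train, eval_set=[(X_valid, y_valid)], max_epochs=5)
```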
@noahlh good to know that you managed to make things work. About the discrepancy between the loss and the train loss, I see several reasons: - the loss is accumulated during the...
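A toy illustration (plain PyTorch, not pytorch-tabnet internals) of the first reason: a loss accumulated batch by batch averages over weights that keep moving during the epoch, so it differs from a loss recomputed once with the end-of-epoch weights:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(256, 4)
y = X @ torch.randn(4, 1)

model = nn.Linear(4, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

running = []
for xb, yb in zip(X.split(32), y.split(32)):
    loss = loss_fn(model(xb), yb)
    opt.zero_grad()
    loss.backward()
    opt.step()
    running.append(loss.item())  # recorded while the weights keep changing

print("accumulated train loss:", sum(running) / len(running))
with torch.no_grad():  # same data, but a single snapshot of final weights
    print("end-of-epoch loss:", loss_fn(model(X), y).item())
```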
Thanks for pointing out the bug; glad that you can now properly train your tabnet model. Just out of curiosity, have you been able to benchmark tabnet against other models?...