AFT
We created an easy-to-use adaptation for tabular data: [mambular](https://github.com/basf/mamba-tabular). It is not quite finished yet, but the initial results are quite promising :)
Let's maybe have an initial discussion on how to include HPO (hyperparameter optimization) in the framework
V1.0.0 includes built-in Bayesian HPO and is fully compatible with scikit-learn hyperparameter optimizers
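Since the models expose the scikit-learn estimator API, any generic sklearn optimizer can tune them. A minimal sketch of that pattern, with a `Ridge` regressor standing in for a mambular estimator (whose actual hyperparameter names, e.g. model depth or dimension, would differ):

```python
# Sketch: any scikit-learn-compatible estimator can be tuned with
# sklearn's generic hyperparameter optimizers. Ridge is a stand-in
# here for a mambular model; the param_grid keys are illustrative.
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV

X, y = make_regression(n_samples=200, n_features=10, random_state=0)

search = GridSearchCV(
    Ridge(),                                 # stand-in for a mambular estimator
    param_grid={"alpha": [0.1, 1.0, 10.0]},  # mambular params would go here
    cv=3,
)
search.fit(X, y)
print(search.best_params_)
```

The same call shape works with `RandomizedSearchCV` or third-party Bayesian optimizers that follow the sklearn search interface.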
Orient yourself on the TabR implementation here: https://github.com/yandex-research/tabular-dl-tabr/blob/main/bin/tabr.py - Adjust the training loop so that the forward pass takes (x_num, x_cat, targets) as input.
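The adjusted signature could look like the sketch below. This is a plain-Python toy, not the TabR code: the "model" (summing features) and the loss are hypothetical placeholders, and only the `(x_num, x_cat, targets)` calling convention is taken from the comment above.

```python
def forward(x_num, x_cat, targets=None):
    """Toy forward pass taking numeric features, categorical features,
    and (optionally) targets together, as in the TabR-style loop."""
    # Hypothetical stand-in model: score each row by summing its
    # numeric features and its categorical indices.
    preds = [sum(num) + sum(cat) for num, cat in zip(x_num, x_cat)]
    if targets is None:
        return preds  # inference mode: no targets, no loss
    # Training mode: also compute a simple mean squared error, mirroring
    # loops where the loss is produced inside the forward pass.
    loss = sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)
    return preds, loss

x_num = [[0.5, 1.0], [2.0, 0.0]]  # per-row numeric features
x_cat = [[1], [0]]                # per-row categorical indices
targets = [1.5, 2.0]
preds, loss = forward(x_num, x_cat, targets)
print(preds, loss)  # → [2.5, 2.0] 0.5
```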
TabR is now included in the GitHub version, see #258. It will be included in the next PyPI release
Please provide a minimal code example to reproduce the error.
Since no further information was provided and we have not received any other issues in this regard, I will close this issue. Feel free to reopen it and provide...
It is expected that Mambular is slower than, e.g., the FT-Transformer, especially for datasets with many features, since training time increases linearly with sequence length (the number of features). However,...
I could not reproduce the extreme differences you reported, but with default settings Mambular was still 10x slower than FT-Transformer for this specific setup. We will update the current Mambablock implementation...
If you experiment further, you could, instead of the Python Mamba implementation from Mambular, try out the original Mamba implementation: https://pypi.org/project/mamba-ssm/. If you do so, please let us know whether...