Optimox

117 comments by Optimox

I am not sure, actually. What happens if you set the verbosity to 0 or -1?
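For reference, the verbosity is set in the model constructor. A minimal sketch, assuming the standard `verbose` argument of the scikit-like wrappers:

```python
from pytorch_tabnet.tab_model import TabNetClassifier

# verbose=0 should silence the per-epoch logs; adjust to the model
# class you are actually using (regressor, multitask, ...).
clf = TabNetClassifier(verbose=0)
```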

This comes from the History callback: https://github.com/dreamquark-ai/tabnet/blob/2c0c4ebd2bb1cb639ea94ab4b11823bc49265588/pytorch_tabnet/callbacks.py#L176 You could change this to remove the prints. It means editing the source code, but it is an easy change.
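If you would rather not edit the source, one workaround (not part of the library, just standard Python) is to redirect stdout around `fit` so the callback's prints are swallowed. A sketch with synthetic stand-in data:

```python
import contextlib
import io

import numpy as np
from pytorch_tabnet.tab_model import TabNetClassifier

# Synthetic data, only to make the sketch runnable end to end.
X_train = np.random.rand(256, 10).astype(np.float32)
y_train = np.random.randint(0, 2, 256)

clf = TabNetClassifier()

# Everything printed by the History callback during training is discarded.
with contextlib.redirect_stdout(io.StringIO()):
    clf.fit(X_train, y_train, max_epochs=2)
```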

Hello, I am not sure that I understand your request, but if you want to use the TabNet network simply as a PyTorch module and insert it inside your own...
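A minimal sketch of that, assuming the `TabNet` module in `pytorch_tabnet.tab_network` and that its forward pass returns the output together with a sparsity term (check the signature in the version you have installed):

```python
import torch
from pytorch_tabnet.tab_network import TabNet

# TabNet used as a plain nn.Module, ready to be embedded in a larger model.
net = TabNet(input_dim=10, output_dim=2, n_d=8, n_a=8, n_steps=3)

x = torch.randn(32, 10)
out, m_loss = net(x)  # logits plus the sparsity regularization term
```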

This is an advanced feature; you can leave it set to None, otherwise you'll need to dig a bit into the code to use it. It's just a matrix of weights on...

Pretraining can be tricky, see my comment [here](https://github.com/dreamquark-ai/tabnet/issues/571#issuecomment-2739674713). The loss is also very dependent on the dataset distribution (the variation between samples within the dataset); unfortunately I don't have many tips to share. Please share...
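For context, a minimal pretraining sketch roughly following the README, with synthetic data and illustrative values standing in for a real setup:

```python
import numpy as np
from pytorch_tabnet.pretraining import TabNetPretrainer

# Synthetic data, only so the sketch runs as-is.
X_train = np.random.rand(1024, 20).astype(np.float32)
X_valid = np.random.rand(256, 20).astype(np.float32)

pretrainer = TabNetPretrainer(n_d=8, n_a=8, n_steps=3)

# pretraining_ratio controls the share of features masked for reconstruction;
# watching the validation loss is the main signal that pretraining helps.
pretrainer.fit(
    X_train,
    eval_set=[X_valid],
    pretraining_ratio=0.8,
    max_epochs=5,
)
```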

A loss above 1.0 is still a bit disappointing, even though it seems your pretraining is not doing worse than random, which is still something... The only way to monitor if the...

n_d, n_a and n_steps are definitely the most important parameters of the architecture. But the learning rate (and the batch size) also play a great role in how the training goes, I...
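A sketch of where those knobs live, with illustrative values rather than recommendations:

```python
import numpy as np
import torch
from pytorch_tabnet.tab_model import TabNetClassifier

# Synthetic data, only to make the sketch runnable.
X_train = np.random.rand(1024, 20).astype(np.float32)
y_train = np.random.randint(0, 2, 1024)

# n_d / n_a / n_steps shape the architecture; the learning rate and the
# batch size passed to fit() drive how the training itself behaves.
clf = TabNetClassifier(
    n_d=16,
    n_a=16,
    n_steps=4,
    optimizer_fn=torch.optim.Adam,
    optimizer_params={"lr": 2e-2},
)
clf.fit(X_train, y_train, max_epochs=5, batch_size=256, virtual_batch_size=128)
```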

Could you indeed monitor both your training and validation sets, so that you compare the same metric on each rather than comparing a loss against a metric?
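Concretely, something like this, with synthetic data standing in for yours:

```python
import numpy as np
from pytorch_tabnet.tab_model import TabNetClassifier

# Synthetic data, only so the sketch runs as-is.
X_train = np.random.rand(1024, 20).astype(np.float32)
y_train = np.random.randint(0, 2, 1024)
X_valid = np.random.rand(256, 20).astype(np.float32)
y_valid = np.random.randint(0, 2, 256)

clf = TabNetClassifier()

# Track the same metric on both sets so the curves are directly comparable.
clf.fit(
    X_train, y_train,
    eval_set=[(X_train, y_train), (X_valid, y_valid)],
    eval_name=["train", "valid"],
    eval_metric=["auc"],
    max_epochs=5,
)
```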

Feel free to open a PR that I'll make sure to review carefully.

@hamaadshah any update on this?