Rupesh Sapkota

Results: 11 comments by Rupesh Sapkota

Maybe it would be nice to rename the script as well. The execute function in the model_adapter is called directly from main.py, and once we remove the model_adapter function,...

We need a description tag for the progress bar: is "Training progress" a suitable description?

As per the PL trainer, they do not seem to be using a description tag. The way the progress bar is shown also differs between the two trainers; PL...

The major issue with TorchCPUTrainer is that the logs (epoch no., batch no., batch loss, etc.) are printed after the execution of each batch of each epoch, which forces...

Can you take a look at https://github.com/dice-group/dice-embeddings/tree/tqdm-support. For now I have just added the progress bar to the loop running over all the batches and epochs.
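A minimal sketch of what that branch does, i.e. wrapping the epoch and batch loops in tqdm with a description tag. The `dataloader`, `step_fn`, and the "Training progress" description are illustrative assumptions, not the repository's actual API:

```python
from tqdm import tqdm


def train(dataloader, num_epochs, step_fn):
    """Hypothetical training loop with tqdm bars over epochs and batches."""
    losses = []
    # Outer bar over epochs carries the proposed description tag.
    for epoch in tqdm(range(num_epochs), desc="Training progress"):
        # Inner bar over batches; leave=False clears it after each epoch,
        # so per-batch logs no longer flood the console.
        for batch in tqdm(dataloader, desc=f"Epoch {epoch}", leave=False):
            losses.append(step_fn(batch))
    return losses
```

Collecting the per-batch losses instead of printing them is one way to avoid the line-per-batch output mentioned above.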

@Demirrr The issue seems to stem from how PL tracks the optimizers. PL initializes a list of optimizers and uses an index to access them; however, TorchCPUTrainer only keeps a single...
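One way to bridge the mismatch is a small compatibility shim that exposes the single optimizer as a one-element list, so PL-style `optimizers[idx]` access still works. The class and attribute names below are assumptions for illustration, not the actual trainer code:

```python
class TorchCPUTrainerSketch:
    """Hypothetical sketch: a single-optimizer trainer made PL-compatible."""

    def __init__(self, optimizer):
        # TorchCPUTrainer keeps one optimizer attribute.
        self.optimizer = optimizer

    @property
    def optimizers(self):
        # Callbacks written against PL index into a list of optimizers;
        # wrapping the single optimizer keeps that access pattern valid.
        return [self.optimizer]
```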

The check against the PL trainer is a temporary fix. We could implement SWA locally, which would give us more flexibility. However, we might need to add some...

I could not access functions such as `on_train_start` or `on_train_end` while using TorchCPUTrainer (these are what I referred to as training hooks). However, we can add them via the Abstract Trainer...
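A sketch of how an abstract trainer could dispatch such hooks to attached callbacks, mirroring PL's `on_train_start`/`on_train_end` naming. Everything here (class names, the `fit` signature) is an assumed shape, not the repository's implementation:

```python
class AbstractTrainerSketch:
    """Hypothetical base trainer that forwards training hooks to callbacks."""

    def __init__(self, callbacks=None):
        self.callbacks = callbacks or []

    def _call_hook(self, name):
        # Invoke the hook on every callback that defines it; skip the rest.
        for cb in self.callbacks:
            getattr(cb, name, lambda trainer: None)(self)

    def fit(self, num_epochs):
        self._call_hook("on_train_start")
        for _ in range(num_epochs):
            pass  # batch loop would go here
        self._call_hook("on_train_end")


class RecordingCallback:
    """Records which hooks fired, for demonstration."""

    def __init__(self):
        self.events = []

    def on_train_start(self, trainer):
        self.events.append("start")

    def on_train_end(self, trainer):
        self.events.append("end")
```

With this pattern, any trainer inheriting from the base class gets the hooks for free, and SWA-style callbacks only need to implement the hook names they care about.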

I will look into implementing SWA as a callback locally.
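The core of a local SWA callback is just a running average of the weights after a warm-up epoch. A minimal sketch, using plain lists of floats instead of torch tensors and an assumed `on_epoch_end` hook; `swa_start` and the class name are illustrative:

```python
class SWACallbackSketch:
    """Hypothetical SWA callback: averages weights from epoch `swa_start` on."""

    def __init__(self, swa_start=2):
        self.swa_start = swa_start
        self.n = 0          # number of snapshots averaged so far
        self.avg = None     # running average of the weights

    def on_epoch_end(self, epoch, weights):
        if epoch < self.swa_start:
            return  # still in the warm-up phase; don't average yet
        if self.avg is None:
            self.avg = list(weights)
            self.n = 1
        else:
            # Incremental mean: avg += (w - avg) / n for each parameter.
            self.n += 1
            self.avg = [a + (w - a) / self.n for a, w in zip(self.avg, weights)]
```

In a real implementation, `torch.optim.swa_utils.AveragedModel` already provides this averaging over model parameters, which is essentially what PL's `StochasticWeightAveraging` callback wraps.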

I was looking at the implementation from PyTorch Lightning. It has some additional features.