Andrea Cossu

47 comments by Andrea Cossu

1. The optimizer is automatically reset after each experience in the `Supervised` template (avalanche/training/templates/supervised.py). You can check the `make_optimizer` method in the supervised template. In this way, if your model...
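For reference, a minimal sketch of the idea in plain PyTorch: the optimizer is rebuilt from the model's current parameters at each experience boundary, so no stale state (e.g. momentum buffers) or references to old parameters survive. The `ToyStrategy` class and its `train_experience` method below are illustrative only, not Avalanche's actual implementation.

```python
from torch import nn
from torch.optim import SGD


class ToyStrategy:
    """Illustrative only: the optimizer is rebuilt at every experience
    boundary so stale state (momentum buffers, references to replaced
    parameters) does not leak across experiences."""

    def __init__(self, model: nn.Module, lr: float = 0.01):
        self.model = model
        self.lr = lr
        self.optimizer = self.make_optimizer()

    def make_optimizer(self) -> SGD:
        # Re-create the optimizer from the model's *current* parameters,
        # so any units/heads added by a dynamic model are included.
        return SGD(self.model.parameters(), lr=self.lr)

    def train_experience(self, dataloader, criterion) -> None:
        self.optimizer = self.make_optimizer()  # reset for this experience
        self.model.train()
        for x, y in dataloader:
            self.optimizer.zero_grad()
            loss = criterion(self.model(x), y)
            loss.backward()
            self.optimizer.step()
```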

@AntonioCarta suggested that it may be a model feature. This would require a process similar to the one used to build the `MultitaskModule` model.
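As a rough illustration of the model-side route, here is a hedged sketch of a task-aware wrapper that keeps one classification head per task, loosely in the spirit of `MultitaskModule`; the names `SimpleMultiHead` and `add_task` are made up for this example and are not Avalanche API.

```python
import torch
from torch import nn


class SimpleMultiHead(nn.Module):
    """Illustrative task-aware wrapper: one classification head per task,
    selected at forward time by the task label."""

    def __init__(self, backbone: nn.Module, in_features: int, n_classes_per_task: int):
        super().__init__()
        self.backbone = backbone
        self.in_features = in_features
        self.n_classes_per_task = n_classes_per_task
        self.heads = nn.ModuleDict()  # task label (as str) -> head

    def add_task(self, task_label: int) -> None:
        # Register the head *before* training on the task, so it is moved
        # to the right device and seen by a freshly built optimizer.
        key = str(task_label)
        if key not in self.heads:
            self.heads[key] = nn.Linear(self.in_features, self.n_classes_per_task)

    def forward(self, x: torch.Tensor, task_label: int) -> torch.Tensor:
        features = self.backbone(x)
        return self.heads[str(task_label)](features)


# Usage sketch: one head per task, added as tasks arrive.
model = SimpleMultiHead(nn.Flatten(), in_features=28 * 28, n_classes_per_task=10)
model.add_task(0)
logits = model(torch.randn(4, 1, 28, 28), task_label=0)
```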

As long as we still assign the returned values to `self`, there should be no modifications needed in the evaluation module.

I added a warning for COPE in #1055. The bug still needs to be fixed.

I agree that `TensorDict` (the official torch API for this, still experimental) or simply dictionaries would be a more flexible choice for manipulating the minibatch information (x, y, task labels...
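To make the contrast concrete, here is a small sketch comparing a positional (tuple) minibatch with a dictionary-based one; the key names (`"x"`, `"y"`, `"task_labels"`) are illustrative assumptions, not a fixed schema.

```python
import torch

# Positional minibatch: consumers must know that index 0 is x, index 1 is y,
# index 2 is the task label; adding a new field breaks every unpacking site.
mb_tuple = (
    torch.randn(8, 3, 32, 32),
    torch.randint(0, 10, (8,)),
    torch.zeros(8, dtype=torch.long),
)

# Dictionary minibatch: fields are accessed by name, so plugins and metrics
# read only what they need and new keys can be added without breaking code.
mb_dict = {
    "x": torch.randn(8, 3, 32, 32),
    "y": torch.randint(0, 10, (8,)),
    "task_labels": torch.zeros(8, dtype=torch.long),
}


def accuracy(minibatch: dict, logits: torch.Tensor) -> float:
    # The metric only needs "y"; it does not care what else is in the batch.
    preds = logits.argmax(dim=1)
    return (preds == minibatch["y"]).float().mean().item()


print(accuracy(mb_dict, torch.randn(8, 10)))
```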

The current meaning is actually different: tick = we are able to reproduce the target performance of the reference paper (we do not necessarily use the same setup of the...

Yes, we can. I didn't want to clutter the table, so I put the reference performance inside the comments in the experiments. I think we could create a separate table...

With paper reproductions, do you also mean the same hyperparameters as the original paper? In the end, I think that is less interesting (and we would have only a few strategies marked as...

I changed the table in the README. It now shows `Avalanche` when the experiment is not present in a specific paper. I also added the reference performance with the related...

Not really, we can speculate but nothing more at the moment.