
IncrementalClassifiers inside MultiHeadClassifiers are adapted on all experiences

Open AlbinSou opened this issue 5 months ago • 2 comments

🐛 Describe the bug MultiHeadClassifier contains a dictionary of IncrementalClassifiers. Both MultiHeadClassifier and IncrementalClassifier are DynamicModules, so they all get adapted through the following loop:

def avalanche_model_adaptation(model: nn.Module, experience: CLExperience):
    if isinstance(model, DistributedDataParallel):
        raise RuntimeError(
            "The model is wrapped in DistributedDataParallel. "
            "Please unwrap it before calling this method."
        )
    for module in model.modules():
        if isinstance(module, DynamicModule):
            module.adaptation(experience)

However, we would like the IncrementalClassifiers inside the MultiHeadClassifier to be adapted only on experiences that contain the task each of them corresponds to, just as MultiHeadClassifier does when it calls them internally.

One solution would be to add an "adaptable" option to IncrementalClassifier and turn it off when the instances are created inside a MultiHeadClassifier.
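For illustration, here is one way the proposed "adaptable" flag could look. This is a simplified stand-in, not Avalanche's actual implementation: the classes below track output units as a plain integer instead of a real nn.Linear, and the method names `_grow` and the constructor signatures are hypothetical.

```python
class IncrementalClassifier:
    def __init__(self, in_features, initial_units=2, adaptable=True):
        self.in_features = in_features
        self.num_units = initial_units
        self.adaptable = adaptable  # hypothetical flag from the proposal

    def adaptation(self, experience):
        # When the flag is off, the generic avalanche_model_adaptation loop
        # becomes a no-op; the owning MultiHeadClassifier grows this head
        # explicitly instead.
        if not self.adaptable:
            return
        self._grow(experience)

    def _grow(self, experience):
        # Grow the output layer to cover all classes seen so far.
        new_units = max(experience.classes_in_this_experience) + 1
        self.num_units = max(self.num_units, new_units)


class MultiHeadClassifier:
    def __init__(self, in_features):
        self.in_features = in_features
        self.heads = {}  # task label -> IncrementalClassifier

    def adaptation(self, experience):
        task = experience.task_label
        if task not in self.heads:
            # Children are created with adaptable=False so the generic
            # adaptation loop leaves them untouched.
            self.heads[task] = IncrementalClassifier(
                self.in_features, adaptable=False
            )
        # Only the head for this experience's task is grown.
        self.heads[task]._grow(experience)
```

With this scheme, calling `adaptation` on a child head from the generic loop does nothing, while the multi-head parent still grows exactly the head that matches the experience's task label.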

AlbinSou avatar Feb 10 '24 10:02 AlbinSou

This is an issue indeed. I agree with your solution of disabling adaptation.

Another issue that we have is that model adaptation must be called on each module separately. This is a frequent source of errors that I see in the GitHub discussions (calling model.adaptation instead of model_adaptation, or writing a manual for loop). Maybe the adaptation method should automatically call its children's? I'm thinking of an API like this:

class DynamicModule:
    def adaptation(self, exp):
        # Called by users and Avalanche strategies; adapts children too.
        ...

    def _module_adaptation(self, exp):
        # Adapt this module only.
        ...

    def enable_adaptation(self, is_enabled):
        # Enable/disable adaptation.
        ...

and maybe IncrementalClassifier could have a method grow_classifier(new_units) that the MultiTaskClassifier can call directly, bypassing the adaptation?
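A minimal sketch of the API proposed above. The method names come from the comment, but the implementation is hypothetical: `adaptation()` recurses into children registered in a `_children` list (a stand-in for PyTorch's module registry), and growth is again modeled as a plain integer rather than a real linear layer.

```python
class DynamicModule:
    def __init__(self):
        self._adaptation_enabled = True
        self._children = []  # stand-in for nn.Module's child registry

    def enable_adaptation(self, is_enabled):
        self._adaptation_enabled = is_enabled

    def adaptation(self, exp):
        # Public entry point: adapt this module, then recurse into
        # children, so users never need to write their own loop.
        if not self._adaptation_enabled:
            return
        self._module_adaptation(exp)
        for child in self._children:
            child.adaptation(exp)

    def _module_adaptation(self, exp):
        # Subclasses override this to adapt only themselves.
        pass


class IncrementalClassifier(DynamicModule):
    def __init__(self, initial_units=2):
        super().__init__()
        self.num_units = initial_units

    def grow_classifier(self, new_units):
        # Explicit growth hook, callable by a parent (e.g. a multi-head
        # classifier) without going through adaptation().
        if new_units > self.num_units:
            self.num_units = new_units

    def _module_adaptation(self, exp):
        self.grow_classifier(max(exp.classes_in_this_experience) + 1)
```

Under this design, disabling adaptation on a subtree and growing a head explicitly via `grow_classifier` are two independent, composable mechanisms.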

AntonioCarta avatar Feb 12 '24 08:02 AntonioCarta

Interesting, yes I agree that it would be less confusing and more usable "out of the box". I will try to do something like that.

AlbinSou avatar Feb 16 '24 12:02 AlbinSou