continual-learning-baselines
Continual learning baselines and strategies from popular papers, using Avalanche. We include EWC, SI, GEM, AGEM, LwF, iCaRL, GDumb, and other strategies.
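As a rough illustration of how these baselines are typically driven by Avalanche, here is a minimal sketch that trains EWC on SplitMNIST. The import paths and the `EWC` constructor arguments follow recent Avalanche releases (older versions expose strategies under `avalanche.training.strategies`), and the hyperparameter values are placeholders rather than the settings used in this repository.

```python
import torch
from torch.nn import CrossEntropyLoss
from torch.optim import SGD

from avalanche.benchmarks.classic import SplitMNIST
from avalanche.models import SimpleMLP
from avalanche.training.supervised import EWC  # older Avalanche: avalanche.training.strategies

# Class-incremental SplitMNIST split into 5 experiences
benchmark = SplitMNIST(n_experiences=5, return_task_id=False)

model = SimpleMLP(num_classes=benchmark.n_classes)

# ewc_lambda controls the strength of the quadratic penalty on important weights
strategy = EWC(
    model=model,
    optimizer=SGD(model.parameters(), lr=0.001, momentum=0.9),
    criterion=CrossEntropyLoss(),
    ewc_lambda=0.4,
    train_mb_size=128, train_epochs=2, eval_mb_size=128,
    device="cuda" if torch.cuda.is_available() else "cpu",
)

# Train on each experience in sequence, then evaluate on the full test stream
for experience in benchmark.train_stream:
    strategy.train(experience)
    strategy.eval(benchmark.test_stream)
```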
[Link to the paper](https://openaccess.thecvf.com/content_CVPRW_2020/html/w15/Lomonaco_Rehearsal-Free_Continual_Learning_Over_Small_Non-I.I.D._Batches_CVPRW_2020_paper.html)
This test reproduces the Generative Replay results of [this paper](https://arxiv.org/abs/1812.09111) using a VAE as the generative model and 100 replay images. Closes #27.
The target results depend on the generator that is being used (as well as other factors, such as how much replay data is generated and whether the replay data is...
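For orientation, a minimal sketch of such a run with Avalanche's `GenerativeReplay` wrapper is shown below. When no `generator_strategy` is supplied, the strategy falls back to a default VAE generator; `replay_size=100` is assumed here to correspond to the "100 replay images" mentioned above. Exact module paths and arguments may differ across Avalanche versions, and the other hyperparameters are placeholders.

```python
import torch
from torch.nn import CrossEntropyLoss
from torch.optim import Adam

from avalanche.benchmarks.classic import SplitMNIST
from avalanche.models import SimpleMLP
from avalanche.training.supervised import GenerativeReplay

benchmark = SplitMNIST(n_experiences=5, return_task_id=False)
model = SimpleMLP(num_classes=benchmark.n_classes)

# With generator_strategy=None, Avalanche uses its default VAE generator;
# replay_size limits how many generated samples are replayed during training.
strategy = GenerativeReplay(
    model=model,
    optimizer=Adam(model.parameters(), lr=0.001),
    criterion=CrossEntropyLoss(),
    replay_size=100,
    train_mb_size=64, train_epochs=4, eval_mb_size=64,
    device="cuda" if torch.cuda.is_available() else "cpu",
)

for experience in benchmark.train_stream:
    strategy.train(experience)
    strategy.eval(benchmark.test_stream)
```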
Link to paper: https://arxiv.org/pdf/2112.08654.pdf
I noticed that the L2P strategy is already implemented in Avalanche.
Hi, I am running some experiments to compare Synaptic Intelligence and naive fine-tuning on different benchmarks, including SplitMNIST, PermutedMNIST, and a custom benchmark built from non-IID datasets. I observed that...
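A minimal sketch of such a comparison with Avalanche is shown below; the strategy and benchmark classes are the standard Avalanche ones, but the hyperparameters are placeholders rather than the settings used in the issue.

```python
import torch
from torch.nn import CrossEntropyLoss
from torch.optim import SGD

from avalanche.benchmarks.classic import PermutedMNIST
from avalanche.models import SimpleMLP
from avalanche.training.supervised import Naive, SynapticIntelligence

benchmark = PermutedMNIST(n_experiences=10)
device = "cuda" if torch.cuda.is_available() else "cpu"

# Run naive fine-tuning and SI back to back on the same benchmark
for name, cls, extra in [("naive", Naive, {}),
                         ("si", SynapticIntelligence, {"si_lambda": 1.0})]:
    model = SimpleMLP(num_classes=benchmark.n_classes)
    strategy = cls(
        model=model,
        optimizer=SGD(model.parameters(), lr=0.001),
        criterion=CrossEntropyLoss(),
        train_mb_size=128, train_epochs=3, device=device,
        **extra,
    )
    for experience in benchmark.train_stream:
        strategy.train(experience)
    results = strategy.eval(benchmark.test_stream)
    print(name, results)
```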
In the experiments.split_mnist.naive module, the `'task_incremental': False` entry on line 18 does not correspond to the `return_task_id=args.task_incremental` on line 24, which causes `AttributeError: 'types.SimpleNamespace' object has no attribute 'task_incremental'` on my...
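For context, the error arises because `types.SimpleNamespace` only exposes the keys it was constructed with. The snippet below is a purely illustrative reproduction with assumed key names, not the repository's actual code.

```python
from types import SimpleNamespace

# Hypothetical defaults dict whose key does not match the attribute read later
defaults = {'return_task_id': False}
args = SimpleNamespace(**defaults)

# Reading a key that was never set raises:
# AttributeError: 'types.SimpleNamespace' object has no attribute 'task_incremental'
benchmark_kwargs = {'return_task_id': args.task_incremental}
```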
Currently, we still face a performance gap for some of the existing strategies. The expected performance can be found in the comments of the related `experiments` folder. Any help in...