
Question on Backbone for Experiments

rahmahkn opened this issue on Jun 27, 2023 · 3 comments

Hi! I have a question after seeing the Performance section of your README. Why doesn't it show results for continual learning models using BERT without freezing (i.e., with a trainable backbone)? Thanks in advance!

rahmahkn commented on Jun 27, 2023

Thank you for your interest and question.

We do have some trainable-LM baselines, such as BERT NCL/ONE/MTL. However, you're correct that we don't have more baselines specifically designed for a trainable BERT.

The primary reason is that many baselines (e.g., HAT) were not originally designed for language models (LMs), let alone trainable LMs. The most straightforward way to apply them to an LM would likely be via adapters or other parameter-efficient tuning methods (where the LM is kept frozen).
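For reference, here is a minimal sketch (in PyTorch with HuggingFace `transformers`) of what that setup looks like: freeze the BERT backbone and train only a small bottleneck adapter plus a task head. The class names, bottleneck size, and wiring are illustrative assumptions, not PyContinual's actual implementation.

```python
# Minimal sketch (illustrative, not PyContinual's implementation):
# freeze a pre-trained BERT backbone and train only a small
# bottleneck adapter and classification head on top of it.
import torch
import torch.nn as nn
from transformers import AutoModel

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, non-linearity, up-project, residual."""
    def __init__(self, hidden_size: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))  # residual connection

class FrozenBertWithAdapter(nn.Module):
    def __init__(self, num_labels: int):
        super().__init__()
        self.bert = AutoModel.from_pretrained("bert-base-uncased")
        for p in self.bert.parameters():   # freeze the whole LM
            p.requires_grad = False
        self.adapter = Adapter()           # only these two modules
        self.classifier = nn.Linear(768, num_labels)  # are trainable

    def forward(self, input_ids, attention_mask):
        hidden = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        pooled = self.adapter(hidden)[:, 0]  # adapted [CLS] representation
        return self.classifier(pooled)
```

Because only the adapter and head receive gradients, a CL method like HAT can operate on those small trainable modules while the LM itself stays fixed.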

We may add more baselines with trainable LMs later. In the meantime, if you're interested in trainable-LM baselines, feel free to check our latest paper and code at https://github.com/UIC-Liu-Lab/ContinualLM. That repository focuses on trainable LMs, particularly in the pre-training setting, where training the full LM is typically required.

ZixuanKe commented on Jun 27, 2023

Ah, I see. Thank you for your response.

rahmahkn commented on Jun 28, 2023

By the way, I have another question regarding this sentence: "The most straightforward way to apply them to an LM would likely be via adapters or other parameter-efficient tuning methods (where the LM is kept frozen)."

Do you have any papers or other references about this? I'm interested in learning more. Thank you!

rahmahkn commented on Jul 10, 2023