Fine-tuning for multilabel classification
Hello,
Thanks for your work! I am trying to fine-tune the model on my own data for a multilabel classification task. From my understanding of the example notebooks, the downstream architectures always take embeddings (generated by ankh-large or ankh-base) as inputs, rather than tokenized sequences directly.
How can one train the full network end to end (and thus update the embedding representations produced for a sequence) on a downstream task?
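For context, here is roughly what I have in mind: a minimal sketch, assuming Ankh can be loaded as the Hugging Face T5 encoder checkpoint `ElnaggarLab/ankh-large`; the pooling, head, label count, and hyperparameters are placeholders of my own, not anything from your notebooks.

```python
# Sketch: fine-tune the Ankh encoder together with a multilabel head.
# Assumes the "ElnaggarLab/ankh-large" checkpoint; adjust as needed.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, T5EncoderModel

NUM_LABELS = 10  # hypothetical number of labels in my task


class AnkhMultiLabelClassifier(nn.Module):
    def __init__(self, checkpoint="ElnaggarLab/ankh-large", num_labels=NUM_LABELS):
        super().__init__()
        self.encoder = T5EncoderModel.from_pretrained(checkpoint)
        self.classifier = nn.Linear(self.encoder.config.d_model, num_labels)

    def forward(self, input_ids, attention_mask):
        # Per-residue embeddings; gradients flow back into the encoder.
        hidden = self.encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state
        # Mask-aware mean pooling over the sequence dimension.
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
        return self.classifier(pooled)  # raw logits, one per label


tokenizer = AutoTokenizer.from_pretrained("ElnaggarLab/ankh-large")
model = AnkhMultiLabelClassifier()

criterion = nn.BCEWithLogitsLoss()  # independent sigmoid per label
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# Toy batch: two protein sequences with multi-hot label vectors.
sequences = ["MKTAYIAKQR", "GAVLIPFMW"]
labels = torch.zeros(2, NUM_LABELS)
labels[0, [1, 3]] = 1.0
labels[1, [0, 7]] = 1.0

batch = tokenizer(sequences, padding=True, return_tensors="pt")
logits = model(batch["input_ids"], batch["attention_mask"])
loss = criterion(logits, labels)
loss.backward()  # updates the encoder weights too, not just the head
optimizer.step()
```

The key point for me is the last part: the encoder parameters stay trainable, so the loss would update the embeddings themselves rather than only a head on top of frozen embeddings. Is something along these lines the intended approach?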
My question might have an obvious answer, and I'm sorry if that's the case.
Thanks a lot for your help!