
Lightweight fine-tuning or few-shot learning for limited labeled data

Open Septimus2024 opened this issue 1 year ago • 1 comment

Feature request

After semi-supervised pretraining, can we do lightweight fine-tuning or few-shot learning instead of full supervised classification training?

What is the expected behavior? Instead of fine-tuning on a decent amount of labeled data, is it possible to do lightweight fine-tuning (e.g., on fewer than 100 labeled samples) or few-shot learning instead of full classification training?

What is the motivation or use case for adding/changing the behavior? We only have limited labeled data to fine-tune the model.

How should this be implemented in your opinion? For few-shot learning, maybe change the loss function.
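As one concrete example of a changed objective, a nearest-class-prototype classifier (the core idea behind prototypical networks) needs no gradient-based fine-tuning at all: it just computes a class-mean embedding from the few labeled examples and classifies by nearest prototype. A minimal sketch, using placeholder 2-D vectors to stand in for TabNet encoder outputs (names and data are hypothetical):

```python
import numpy as np

def fit_prototypes(embeddings, labels):
    """Compute one prototype (mean embedding) per class from a few labeled examples."""
    classes = np.unique(labels)
    protos = np.stack([embeddings[labels == c].mean(axis=0) for c in classes])
    return classes, protos

def predict_nearest(embeddings, classes, protos):
    """Assign each embedding the class of its nearest prototype (Euclidean distance)."""
    dists = np.linalg.norm(embeddings[:, None, :] - protos[None, :, :], axis=-1)
    return classes[np.argmin(dists, axis=1)]

# Toy demo: 5 labeled "embeddings" per class, clustered around (0,0) and (1,1).
rng = np.random.default_rng(0)
support_x = np.vstack([rng.normal(0.0, 0.1, (5, 2)), rng.normal(1.0, 0.1, (5, 2))])
support_y = np.array([0] * 5 + [1] * 5)
classes, protos = fit_prototypes(support_x, support_y)
query = np.array([[0.0, 0.0], [1.0, 1.0]])
print(predict_nearest(query, classes, protos))  # -> [0 1]
```

Swapping the Euclidean distance for a learned metric, or turning the distances into logits and training with cross-entropy, would be the "change the loss function" variant of this idea.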

Are you willing to work on this yourself? yes

Septimus2024 avatar Feb 29 '24 19:02 Septimus2024

feel free to open a PR with a concrete proposition.

Optimox avatar Mar 25 '24 09:03 Optimox