prompt-tuning

Original Implementation of Prompt Tuning from Lester et al., 2021

11 prompt-tuning issues

Hi, thanks for reading my issue. I've learned a lot from your amazing paper, "The Power of Scale for Parameter-Efficient Prompt Tuning". I'm wondering whether I could train my prompt-based model...

Hi, thank you for the simple yet elegant work! I wonder if you have encountered any convergence issues? I am fine-tuning [XGLM (4.5B)](https://huggingface.co/facebook/xglm-4.5B) for a text generation task, with only...

Hi, thank you for this amazing project and for releasing the code for reproducing your results. The ACL 2022 paper "SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer" says...

Great job and great repository! I set 20 virtual tokens, used lr=1e-2, and randomly selected 3k instances from the CNN/DM training set to prompt-tune LLaMA-2-13B. I've seen a steady decrease...
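
A minimal sketch of a setup like the one described above, assuming the Hugging Face PEFT library rather than this repository's Flax/T5X code; the model name, initialization, and hyperparameters are illustrative only, not a confirmed reproduction of the issue's configuration.

```python
from transformers import AutoModelForCausalLM
from peft import PromptTuningConfig, PromptTuningInit, TaskType, get_peft_model

# Illustrative base model; the issue above refers to LLaMA-2-13B.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-13b-hf")

# 20 virtual tokens, matching the configuration mentioned in the issue.
config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=20,
    prompt_tuning_init=PromptTuningInit.RANDOM,
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the prompt embeddings should be trainable
```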

Is this the first paper to propose soft prompts?

I've seen the paper **X-Gen: Zero-Shot Cross-Lingual Generation**, which handles low-resource languages with prompt-based operations; since this work is still in...

Hi, I am writing to ask if you can share the prompt tuning files used in your recent paper "SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer". As a...

Flax has removed `optim` in favor of `optax` in versions newer than 0.5.3. This means that, in order to run the code in this repository, one needs to downgrade...
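
A minimal sketch of the alternative to downgrading, assuming `optax` is used in place of the removed `flax.optim` optimizer; the shapes, learning rate, and choice of Adafactor below are illustrative, not the repository's actual configuration (pinning `flax<=0.5.3` remains the other option).

```python
import jax.numpy as jnp
import optax

# Illustrative soft-prompt parameters (shape and values are placeholders).
prompt = jnp.zeros((20, 768))

# flax.optim is gone in Flax releases after 0.5.3; optax ships an Adafactor
# that can stand in for flax.optim.Adafactor.
tx = optax.adafactor(learning_rate=0.3)
opt_state = tx.init(prompt)

# One illustrative update step with a dummy gradient.
grads = jnp.ones_like(prompt)
updates, opt_state = tx.update(grads, opt_state, prompt)
prompt = optax.apply_updates(prompt, updates)
```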

Hi! I have read your SPoT work, but I can't understand how you obtain the prompt embedding of the target task in the task-transferability prediction step. I see that...

Where can I find the lm-adapted code? I want to train a customized t5-lm model on my dataset. Thanks!