Add convenience methods to export and import fine-tuned prompts
I can run "deep" prompt-tuning successfully.
But how can I export the prefix tokens from the Petals client? And how can I use the exported prefix tokens later when I want to run inference?
I appreciate your help.
Hi @Ted-developer,
Use this for tuning_mode="ptune":
# Save trained prompts
with open('prompts.pt', 'wb') as f:
torch.save(model.transformer.prompt_embeddings, f)
# Load trained prompts
with open('prompts.pt', 'rb') as f:
model.transformer.prompt_embeddings = torch.load(f)
Or this for tuning_mode="deep_ptune":
# Save trained prompts
with open('prompts.pt', 'wb') as f:
torch.save((model.transformer.prompt_embeddings, model.transformer.intermediate_prompt_embeddings), f)
# Load trained prompts
with open('prompts.pt', 'rb') as f:
model.transformer.prompt_embeddings, model.transformer.intermediate_prompt_embeddings = torch.load(f)
We will add convenience methods for this in a future release.
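Until such convenience methods exist, one pattern is to wrap the save/load logic above in small helper functions keyed by the tuning mode. The sketch below is illustrative only: the helper names `save_prompts`/`load_prompts` are hypothetical (not part of the Petals API), and a stand-in object with `pickle` replaces the real Petals model and `torch.save`/`torch.load`, which real code would use on the same attributes shown above.

```python
import pickle
from types import SimpleNamespace

# Hypothetical helpers; real code would call torch.save/torch.load on a
# Petals model's transformer attributes instead of pickle on a stand-in.
def save_prompts(model, path, deep=False):
    """Persist trained prompt embeddings (plus intermediate ones for deep_ptune)."""
    t = model.transformer
    state = (t.prompt_embeddings, t.intermediate_prompt_embeddings) if deep \
        else (t.prompt_embeddings,)
    with open(path, 'wb') as f:
        pickle.dump(state, f)

def load_prompts(model, path, deep=False):
    """Restore prompt embeddings saved by save_prompts."""
    with open(path, 'rb') as f:
        state = pickle.load(f)
    t = model.transformer
    t.prompt_embeddings = state[0]
    if deep:
        t.intermediate_prompt_embeddings = state[1]

# Stand-in model to demonstrate the round trip (a real model would be a
# Petals model created with tuning_mode="deep_ptune").
model = SimpleNamespace(transformer=SimpleNamespace(
    prompt_embeddings=[0.1, 0.2], intermediate_prompt_embeddings=[0.3]))
save_prompts(model, 'prompts.pkl', deep=True)
model.transformer.prompt_embeddings = None  # simulate a fresh process
load_prompts(model, 'prompts.pkl', deep=True)
```

Keeping the `deep` flag in one place avoids silently loading a plain-ptune checkpoint into a deep_ptune model, which would leave the intermediate embeddings untrained.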
May I ask whether this feature has already been implemented? Can I use something like
model = DistributedBloomForCausalLM.from_pretrained(
tuning_mode="ptune"
...
)
trainer = transformers.Trainer(
model=model,
save_steps=200,
...
)
to save local models, and then use trainer.train(resume_from_checkpoint=resume_from_checkpoint)
to load the saved models, just like in the non-distributed version? Thanks!
Hi @huijiawu0,
No, this has not been implemented yet; for now, you need to save and load the prompts manually. We'd be happy to accept a pull request implementing this.
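In the meantime, a workaround is to mimic the Trainer's save_steps behaviour by saving the prompt tensors yourself every N steps inside a manual training loop. The sketch below is a stand-in: the function name, the `save_every` value, and the dict-based state are all illustrative, and in real code the save step would be `torch.save(model.transformer.prompt_embeddings, path)` as in the snippets above.

```python
import pickle

# Illustrative periodic-checkpointing loop; real code would save the
# Petals model's prompt embeddings with torch.save instead of pickle.
def train_with_checkpoints(model_state, num_steps, save_every, path):
    saved_at = []
    for step in range(1, num_steps + 1):
        model_state['prompt_embeddings'] = [step]  # stand-in for an optimizer update
        if step % save_every == 0:  # mimic Trainer's save_steps
            with open(path, 'wb') as f:
                pickle.dump(model_state, f)
            saved_at.append(step)
    return saved_at

state = {'prompt_embeddings': [0]}
saved = train_with_checkpoints(state, num_steps=10, save_every=4, path='ckpt.pkl')
```

To resume, load the latest checkpoint back into the model before continuing the loop, which is the manual equivalent of `resume_from_checkpoint`.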