Support Huggingface models
Hi, thank you for such great work.
Could you please add Huggingface models, or any other models available on the web, to this package so that we can test it with open-source models?
Hi, are you referring to open-source LLMs or Huggingface transformer models such as BERT?
Hey :wave: Supporting backends other than OpenAI seems like a great idea for this library. Huggingface's Hub contains a large variety of open-source models that users could use (here is a list of all text-to-text generation models). On top of the Hub, HF provides a free Inference API as well as a product (Inference Endpoints) to deploy models on a dedicated server. This flexibility allows users to test models (including their own private ones) and use them in production once they are ready for it.
Those APIs could be leveraged by Promptify using the `huggingface_hub` library or HTTP requests directly:
```python
>>> from huggingface_hub.inference_api import InferenceApi
>>> inference = InferenceApi(repo_id="bigscience/bloom", token=HF_API_TOKEN)
>>> response = inference(inputs="The goal of life is")
>>> response[0]["generated_text"]
'The goal of life is to live it, to taste experience to the utmost, to reach out eagerly and without fear for'
```
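For completeness, the same call can also be made over plain HTTP; a minimal sketch using `requests` against the free Inference API (same `HF_API_TOKEN` as above):

```python
import requests

# The Inference API exposes every Hub model at this URL pattern.
API_URL = "https://api-inference.huggingface.co/models/bigscience/bloom"
headers = {"Authorization": f"Bearer {HF_API_TOKEN}"}

# Text-generation models take a JSON payload with an "inputs" field
# and return a list of {"generated_text": ...} objects.
response = requests.post(API_URL, headers=headers, json={"inputs": "The goal of life is"})
print(response.json()[0]["generated_text"])
```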
Disclaimer: I work at HF and I am a maintainer of the `huggingface_hub` library. I'd be happy to help with the integration by answering questions or working on a PR :)
> Hi, are you referring to open-source LLMs or Huggingface transformer models such as BERT?
Yes, I agree with @Wauplin that supporting backends other than OpenAI would be great for this library. Many open-source models are available on Huggingface's Hub for users to use.
@Wauplin Thank you for reaching out and contributing. We would be happy to support HF models; it's a great idea. Please feel free to open a PR about this.
Looking forward to seeing this implemented, really great idea. @Wauplin is on it, as far as I understand, is that correct?
I'll work on it this week and keep you updated :)
Drafted a PR https://github.com/promptslab/Promptify/pull/13 for it. Needs more refinements (docs, examples,...) but feels free to check the notebook example and give some feedback.
Hi @Wauplin, great work and you're super fast indeed :)
Where should you define the used model? In the `HubModel()` module?
Could you please provide a more comprehensive example with more parameters and documentation?
Wow, that's super fast, I'm looking forward to trying HF models with Promptify :)
> Where should you define the used model? In the `HubModel()` module?
At the moment I copied the OpenAI model, which has a `model_name` argument in the `run()` method. But I have to admit I would find it more consistent to do something like `HubModel("google/flan-t5-xl", api_key="xxx").run(prompt)` instead of `HubModel(api_key="xxx").run(prompt, model_id="google/flan-t5-xl")`, i.e. first provide all the information describing the model/endpoint, and then only arguments specific to the inference itself (prompt, temperature, ...). What do you think about it? I don't want to overstep on existing code here.
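To make the first option concrete, here is a hypothetical skeleton of what the construction-time variant could look like (just an illustration of the proposed interface, not the actual PR code):

```python
from huggingface_hub.inference_api import InferenceApi

class HubModel:
    def __init__(self, model_id: str, api_key: str):
        # Everything describing the model/endpoint is fixed at construction.
        self._client = InferenceApi(repo_id=model_id, token=api_key)

    def run(self, prompt: str, temperature: float = 1.0):
        # Only arguments specific to the inference itself are passed per call.
        return self._client(inputs=prompt, params={"temperature": temperature})
```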
> Could you please provide a more comprehensive example with more parameters and documentation?
Yes sure!
I think the second one (the consistent approach with the HF model) is much better.
> I think the second one (the consistent approach with the HF model) is much better.
@behroozazarkhalili (cc @monk1337) I have updated my PR (https://github.com/promptslab/Promptify/pull/13) accordingly and added more details/parameters in both the docstrings and the notebook guide. I've switched the PR to "ready for review"; comments are of course welcome :)
@Wauplin Thank you for your great contribution. I am working on changing the model function structure as you suggested; the `HubModel("google/flan-t5-xl", api_key="xxx").run(prompt)` format makes more sense.
Ok, please ping me if you need help from my side :)
Great work, thank you for the update.
@Wauplin, check the new changes:
https://github.com/promptslab/Promptify/blob/bc94ca6ad35a21f56de3a8e6baca15f8441cbc6c/promptify/models/nlp/openai_model.py#L18
Do you have other suggestions, or can we go with this format? If yes, please re-format the HF model code according to the new format.
Merged the PR. Try the new feature :)
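For anyone wanting to try it, a rough usage sketch based on the interface agreed on above (the import path here is an assumption; see the notebook example in the PR for the exact API):

```python
# Assumed import path -- check the merged PR / package docs for the real one.
from promptify.models.nlp.hub_model import HubModel

model = HubModel("google/flan-t5-xl", api_key="hf_xxx")  # model info at construction
print(model.run("Classify the sentiment of: 'I love this movie!'"))
```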
@Wauplin @monk1337 Thank you so much. Could you please add the HF example to the package's documentation?