Add HuggingFacePipeline LLM
https://github.com/hwchase17/langchain/issues/354
Add support for running your own HF pipeline locally. This gives you a lot more flexibility in which HF features and models you support, since you aren't beholden to what is hosted on the HF Hub. You could also use HF Optimum to quantize your models and get pretty fast inference even running on a laptop.
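For anyone following along, here is a rough sketch of what running a locally-built pipeline through a wrapper like this could look like (the HuggingFacePipeline name and its pipeline= argument are assumptions based on the PR title, not necessarily the final API):

```python
# Sketch only: the wrapper's class name and constructor arguments are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from langchain.llms import HuggingFacePipeline

model_id = "gpt2"  # any model you have locally or can pull from the Hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build an ordinary transformers pipeline on your own machine...
hf_pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=64)

# ...then hand it to the LangChain wrapper and use it like any other LLM.
llm = HuggingFacePipeline(pipeline=hf_pipe)
print(llm("Explain quantization in one sentence."))
```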
Let me know if you want this code and I will clean it up.
want it? i need it!
but in all seriousness, i love it, and i know others will as well (has been a common ask) - thank you!!!
@hwchase17 Last issue I am having is that the integration tests now require torch to be installed for the HF pipelines to run. I am trying to add it via poetry but I might be doing it incorrectly. Any examples of that I can look at?
So I added torch to the pyproject.toml and am still seeing errors about torch missing from the poetry environment. Not really sure if I'm missing something or not.
test_huggingface_pipeline.py::test_huggingface_pipeline_text_generation FAILED [100%]
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
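For reference, one way to wire torch into Poetry for the integration tests is to declare it as an optional dependency and expose it through an extra; the extra name and version bound below are illustrative, not necessarily what the project uses:

```toml
# Sketch only: extra name and version constraint are assumptions.
[tool.poetry.dependencies]
torch = { version = ">=1.13", optional = true }

[tool.poetry.extras]
llms = ["torch"]
```

After editing, re-lock and install with the extra enabled, e.g. `poetry lock` followed by `poetry install -E llms`, so torch actually lands in the poetry environment the tests run in.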
@hwchase17 integration tests now pass locally! Want to take one more look?
@hwchase17 One last time!
ah very close! just some lint. lmk if you want me to tackle
hoping to get this in the release tmrw. what's your twitter? want to give you a big shoutout
@hwchase17 got it. take a look.
I have no twitter but definitely shout out my company's socials https://twitter.com/YouSearchEngine