How to use a local model that is already downloaded from Hugging Face
🐛 Describe the bug
For example, LLMs such as ChatGLM and embedding models such as BGE.
Please refer to https://discuss.huggingface.co/t/run-models-on-a-desktop-computer/31125
@ssgupta905 this does not solve it. My question is how to use a local LLM inside an embedchain project, not just how to run an LLM for inference.
```python
import os

# replace this with your HF key
os.environ["HUGGINGFACE_ACCESS_TOKEN"] = "hf_xxxx"

from embedchain import App

app = App.from_config("mistral.yaml")
app.add("https://www.forbes.com/profile/elon-musk")
app.add("https://en.wikipedia.org/wiki/Elon_Musk")
app.query("What is the net worth of Elon Musk today?")
# Answer: The net worth of Elon Musk today is $258.7 billion.
```

where `mistral.yaml` is:

```yaml
llm:
  provider: huggingface
  config:
    model: 'mistralai/Mistral-7B-Instruct-v0.2'
    top_p: 0.5
embedder:
  provider: huggingface
  config:
    model: 'sentence-transformers/all-mpnet-base-v2'
```
How can I use an LLM that is already downloaded on my PC in the code above?
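For reference, the models were downloaded with the Hugging Face libraries, so they sit in the standard Hub cache. Below is a minimal, embedchain-independent sketch (assuming that standard cache layout) that lists what is already on disk and forces loading from the cache instead of re-downloading:

```python
import os

# Force offline mode *before* importing any Hugging Face libraries so the
# setting is picked up; model files are then resolved from the local cache only.
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"

from huggingface_hub import scan_cache_dir
from transformers import AutoTokenizer

# List the repos already present in the local Hub cache (~/.cache/huggingface by default).
for repo in scan_cache_dir().repos:
    print(repo.repo_id, f"{repo.size_on_disk / 1e9:.1f} GB")

# With offline mode on, this loads entirely from the cache (or fails fast if a file is missing).
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
```

The open question is how to make embedchain itself load the model from disk like this, rather than calling out to Hugging Face.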
I've encountered the same issue. I deployed my fine-tuned llama2 model using Xinference, but I haven't found a way to integrate it into Embedchain. Is there a solution for this?
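One possible workaround, sketched here as an assumption rather than a confirmed feature: Xinference exposes an OpenAI-compatible endpoint, so a deployed model could in principle be addressed through embedchain's openai provider if that provider accepts a custom base URL (the exact keys below are guesses; check the current embedchain config schema):

```yaml
# Hypothetical config: route embedchain's openai provider to an Xinference
# deployment via its OpenAI-compatible API. 'base_url' support and the key
# names are assumptions -- verify against embedchain's current config options.
llm:
  provider: openai
  config:
    model: 'my-finetuned-llama2'          # model name as registered in Xinference
    base_url: 'http://localhost:9997/v1'  # default Xinference endpoint
    api_key: 'not-needed'                 # Xinference does not check the key unless auth is enabled
```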
Hey @hushi19910329, we recently added support for running local models from HF. Please see the docs here on how to use it: https://docs.embedchain.ai/components/llms#hugging-face-local-pipelines
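Roughly, the config from the example above becomes something like the following (see the linked docs for the exact keys; the `local` flag below is taken from that page, so double-check it against the current documentation):

```yaml
llm:
  provider: huggingface
  config:
    model: 'mistralai/Mistral-7B-Instruct-v0.2'
    local: true   # run the model via a local pipeline instead of the hosted API
    top_p: 0.5
embedder:
  provider: huggingface
  config:
    model: 'sentence-transformers/all-mpnet-base-v2'
```

Since the local pipeline loads weights through the Hugging Face libraries, a model that is already in the local cache is reused rather than downloaded again.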