
How to use a local model already downloaded from Hugging Face?

Open sporterman opened this issue 1 year ago • 4 comments

🐛 Describe the bug

For example, the ChatGLM LLM and the BGE embedding models.

sporterman avatar Jan 25 '24 09:01 sporterman

please refer- https://discuss.huggingface.co/t/run-models-on-a-desktop-computer/31125

ssgupta905 avatar Jan 26 '24 11:01 ssgupta905

@ssgupta905 that doesn't solve it. My question is how to use a local LLM inside an Embedchain project, not just how to run an LLM for inference on its own:
```python
import os

# replace this with your HF key
os.environ["HUGGINGFACE_ACCESS_TOKEN"] = "hf_xxxx"

from embedchain import App

app = App.from_config("mistral.yaml")
app.add("https://www.forbes.com/profile/elon-musk")
app.add("https://en.wikipedia.org/wiki/Elon_Musk")
app.query("What is the net worth of Elon Musk today?")
# Answer: The net worth of Elon Musk today is $258.7 billion.
```

```yaml
llm:
  provider: huggingface
  config:
    model: 'mistralai/Mistral-7B-Instruct-v0.2'
    top_p: 0.5
embedder:
  provider: huggingface
  config:
    model: 'sentence-transformers/all-mpnet-base-v2'
```

How can I make the code above use an LLM that is already downloaded on my PC?

sporterman avatar Jan 29 '24 07:01 sporterman
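One common workaround, offered here as a sketch rather than a confirmed Embedchain feature: Hugging Face's `from_pretrained`-style loaders generally accept a local directory path anywhere a Hub repo id is expected. So *if* the provider loads models locally (rather than calling the hosted Inference API), the `model` fields in the YAML above could point at the downloaded weights. The paths below are placeholders:

```yaml
llm:
  provider: huggingface
  config:
    # assumed: filesystem path to already-downloaded weights; this only
    # works if the provider loads the model locally instead of calling
    # the hosted Inference API with the string as a repo id
    model: '/home/me/models/Mistral-7B-Instruct-v0.2'
    top_p: 0.5
embedder:
  provider: huggingface
  config:
    model: '/home/me/models/all-mpnet-base-v2'
```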

I've encountered the same issue. I deployed my fine-tuned llama2 model using Xinference, but I haven't found a way to integrate it into Embedchain. Is there a solution for this?

hushi19910329 avatar Mar 06 '24 16:03 hushi19910329
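Since Xinference exposes an OpenAI-compatible HTTP endpoint, one possible workaround (a sketch, not verified against Embedchain's provider code) is to point an OpenAI-style provider at the Xinference server, assuming the provider config accepts a `base_url` key the way the OpenAI client libraries do. The model name and port below are assumptions; `9997` is Xinference's default port:

```yaml
llm:
  provider: openai
  config:
    model: 'my-finetuned-llama2'          # name the model is registered under in Xinference
    base_url: 'http://localhost:9997/v1'  # Xinference's OpenAI-compatible endpoint
```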

Hey @hushi19910329, we recently added support for running local models from HF. Please see the docs here on how to use it: https://docs.embedchain.ai/components/llms#hugging-face-local-pipelines

deshraj avatar Mar 06 '24 18:03 deshraj
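For readers skimming the thread: based on the linked docs page, the local-pipeline route is enabled with a flag in the LLM config. The exact key name below (`local: true`) is recalled from the docs and should be verified against the page above before use:

```yaml
llm:
  provider: huggingface
  config:
    model: 'mistralai/Mistral-7B-Instruct-v0.2'
    local: true   # assumed flag: run via a local transformers pipeline instead of the hosted API
    top_p: 0.5
```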