
Local LLM could not connect

shamin10 opened this issue 2 years ago · 6 comments

Hi, thank you for this wonderful code. I have downloaded the model from Hugging Face, but when I try to load it via the prompt load, I can't get it to load. Can you please help me? I don't want to load the model through a Hugging Face API key.
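
For context, the "prompt load" being described is presumably something like the following sketch, assuming llmware's standard Prompt interface (the model name is the one given later in the thread):

from llmware.prompts import Prompt

# Sketch of the prompt-based load path in llmware; the model name
# below is the one mentioned later in this thread.
prompter = Prompt().load_model("llmware/bling-sheared-llama-1.3b-0.1")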

shamin10 avatar Dec 19 '23 06:12 shamin10

Thanks for the feedback. Sorry you have run into an issue. Which model are you trying to use?

doberst avatar Dec 19 '23 16:12 doberst

Thank you. I'm trying to use bling-sheared-llama-1.3b-0.1; I have downloaded this model to my PC. I want to use:

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("llmware/bling-sheared-llama-1.3b-0.1")
model = AutoModelForCausalLM.from_pretrained("llmware/bling-sheared-llama-1.3b-0.1")

I changed the path to my C drive, but I'm getting an error. It seems like I need to have a Hugging Face API token?

shamin10 avatar Dec 21 '23 17:12 shamin10

Hmm, can you provide the error? I ran those three lines of code, and it seems to download the model fine.
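
If the goal is to avoid the Hub (and any API token) entirely, you can point from_pretrained at the local folder instead. A minimal sketch, where the path is a placeholder for wherever the model files were downloaded:

from transformers import AutoTokenizer, AutoModelForCausalLM

# Placeholder path: the directory holding config.json, the tokenizer
# files, and the model weights.
local_path = r"C:\models\bling-sheared-llama-1.3b-0.1"

# local_files_only=True forces transformers to load strictly from disk,
# so no network call or Hugging Face token is involved.
tokenizer = AutoTokenizer.from_pretrained(local_path, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(local_path, local_files_only=True)

Note that this model is public, so no token should be required for the download itself either.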

philipkd avatar Jan 09 '24 19:01 philipkd

Were you able to get the model running? Were you able to run the model outside the framework with the ollama command?

chair300 avatar Feb 29 '24 19:02 chair300

If I understand the OP correctly, then I want to know this, too. How do I load a model from a non-standard location on my local drive? It's a GGUF, and it's not in the huggingface cache system at all. load_model() seems to expect a huggingface model path.

Reference issue: https://github.com/llmware-ai/llmware/issues/433
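
As a sanity check that the GGUF file itself works from an arbitrary path, independent of llmware's load_model(), here is a minimal sketch using llama-cpp-python; the path and filename are placeholders:

from llama_cpp import Llama

# Placeholder path to a GGUF sitting outside the huggingface cache.
gguf_path = r"C:\models\bling-sheared-llama-1.3b.gguf"

# llama-cpp-python loads the file straight from disk; no model-catalog
# registration or cache lookup is involved.
llm = Llama(model_path=gguf_path, n_ctx=2048)

out = llm("Question: What is 2 + 2?\nAnswer:", max_tokens=16)
print(out["choices"][0]["text"])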

JeremyBickel avatar Mar 14 '24 20:03 JeremyBickel


@shamin10, have you solved the problem?

joyce0105-ops avatar Jul 24 '24 08:07 joyce0105-ops