Local LLM could not connect
Hi, thank you for this wonderful code. I have downloaded the model from Hugging Face, but when I try to load it from prompt load, I am not able to. Can you please help me? I don't want to load the model through a Hugging Face API key.
Thanks for the feedback. Sorry you have run into an issue. Which model are you trying to use?
Thank you. I'm trying to use bling-sheared-llama-1.3b-0.1. I have downloaded this model to my PC. I want to use:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("llmware/bling-sheared-llama-1.3b-0.1")
model = AutoModelForCausalLM.from_pretrained("llmware/bling-sheared-llama-1.3b-0.1")
```
I changed the path to my C drive, but I am getting an error. It seems like I have to have a Hugging Face API token?
Hmm, can you provide the error? I ran those three lines of code, and it seems to download the model fine.
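In case it helps with debugging: the public llmware models should not require a token. If the files are already downloaded to disk, something like the sketch below should load them without contacting the Hub at all. The folder path is only a placeholder; point it at the directory that contains config.json, the tokenizer files, and the weights.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Placeholder path -- replace with the folder that actually holds
# config.json, the tokenizer files, and the model weights.
local_path = r"C:\models\bling-sheared-llama-1.3b-0.1"

# local_files_only=True keeps transformers offline, so no Hugging Face
# token is ever requested.
tokenizer = AutoTokenizer.from_pretrained(local_path, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(local_path, local_files_only=True)
```

Note that if the string passed to from_pretrained is not a valid local directory, transformers treats it as a Hub repo id and tries to download it, and that error message mentions authentication, which may be why it looks like a token is required.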
Were you able to get the model running? Were you able to run the model outside the framework with the ollama command?
If I understand the OP correctly, then I want to know this, too. How do I load a model from a non-standard location on my local drive? It's a GGUF, and it's not in the Hugging Face cache system at all. load_model() seems to expect a Hugging Face model path.
Reference issue: https://github.com/llmware-ai/llmware/issues/433
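Not an answer about llmware's load_model() itself, but as a possible stopgap for a GGUF sitting at an arbitrary path: llama-cpp-python can load the file directly. The path and file name below are made up for illustration.

```python
# Workaround sketch using llama-cpp-python directly, outside llmware.
from llama_cpp import Llama

# Placeholder path -- point model_path at the actual GGUF file on disk.
llm = Llama(model_path=r"C:\models\my-local-model.Q4_K_M.gguf", n_ctx=2048)

out = llm("What is the capital of France?", max_tokens=64)
print(out["choices"][0]["text"])
```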
Have you solved the problem?