generativeAgent_LLM
Implementation of the "Generative Agents: Interactive Simulacra of Human Behavior" paper with Guidance and LangChain. Fully featured and works with local LLMs.
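As a minimal illustration of the Guidance side of that stack (this is not code from this repository; the model path and prompt are placeholders, and it uses the current guidance >= 0.1 API, which may differ from the version this project pins), a local transformers checkpoint can be driven like this:

```python
# Minimal sketch: drive a local HuggingFace model through Guidance.
# Assumptions: guidance >= 0.1 is installed and a model is available locally.
from guidance import models, gen

lm = models.Transformers("path/to/local-model")  # placeholder path, any transformers checkpoint

# Append a prompt, then a constrained generation captured under the name "answer".
lm += "Q: In one sentence, what is a generative agent?\nA: "
lm += gen("answer", stop="\n", max_tokens=64)

print(lm["answer"])
```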
```
SafetensorError                           Traceback (most recent call last)
Cell In[2], line 11
      7 # MODEL_PATH = '/home/quang/working/LLMs/oobabooga_linux/text-generation-webui/models/TheBloke_Wizard-Vicuna-7B-Uncensored-GPTQ'
      8 # CHECKPOINT_PATH = '/home/quang/working/LLMs/oobabooga_linux/text-generation-webui/models/TheBloke_Wizard-Vicuna-7B-Uncensored-GPTQ/Wizard-Vicuna-7B-Uncensored-GPTQ-4bit-128g.no-act-order.safetensors'
     10 DEVICE = torch.device('cuda:0')
---> 11 model, tokenizer = load_model_main(MODEL_PATH,...
```
model=TheBloke_Llama-2-13B-GPTQ/model.safetensors; I also tried Wizard-Vicuna-7B-Uncensored-GPTQ-4bit-128g.no-act-order.safetensors, same problem:
```
Loading model ...
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
Cell In[36], line 5
      1 # MODEL_PATH = '/home/quang/working/LLMs/oobabooga_linux/text-generation-webui/models/TheBloke_Wizard-Vicuna-7B-Uncensored-GPTQ'
      2 # CHECKPOINT_PATH...
```
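Both reports fail inside `load_model_main` while reading a GPTQ `.safetensors` checkpoint. One common cause (an assumption here, not confirmed from the issue threads) is an incomplete download, e.g. a git-lfs pointer file sitting where the real weights should be. A quick, repo-independent check is a sketch like this; the path is hypothetical and should be replaced with the checkpoint that fails:

```python
# Sketch only: verify a .safetensors checkpoint is complete before loading it.
import os
from safetensors import safe_open

CHECKPOINT_PATH = "models/TheBloke_Llama-2-13B-GPTQ/model.safetensors"  # hypothetical path

# A quantized 13B checkpoint is several GB; a stray git-lfs pointer is ~130 bytes.
print("file size (bytes):", os.path.getsize(CHECKPOINT_PATH))

# If the header parses, the file itself is a valid safetensors archive, and the
# error more likely comes from how MODEL_PATH / CHECKPOINT_PATH are combined.
with safe_open(CHECKPOINT_PATH, framework="pt", device="cpu") as f:
    print("tensor count:", len(f.keys()))
```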
Hi, can someone share a requirements.txt, env.yaml/toml, or any other environment setup file for CUDA 12.1? I have access to a cloud server but am unable to change the...
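Until a pinned environment file is published, one quick check before installing the rest of the stack is to confirm that the server's existing PyTorch build actually targets CUDA 12.1 (the cu121 wheels). This is a generic sanity check, not part of this repository:

```python
# Sanity check for a CUDA 12.1 environment: confirm the installed PyTorch wheel
# was built against CUDA 12.1 and can see the GPU.
import torch

print("torch version :", torch.__version__)         # expect something like 2.x.x+cu121
print("built for CUDA:", torch.version.cuda)         # expect "12.1"
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device        :", torch.cuda.get_device_name(0))
```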