Chad Doebelin
You are correct, the negative prompt does work; however, the expectation would be that AND NOT would also function. Consider that the "read generation parameters from prompt" button parses things from...
I disagree with auto's assertion that it "seems" redundant because we have weights. Yes, let's use _weights_ instead of natural language, because it... ***checks notes*** _adds to the complexity_? Leaving...
If you believe that negative prompts are the solution, then convert the AND NOTs in the prompt to negatives when the "read generation parameters from prompt" button is...
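For what it's worth, a minimal sketch of what that conversion could look like, assuming an AND NOT clause runs until the next plain AND or the end of the prompt (the `split_and_nots` helper is hypothetical, not part of the webui):

```python
import re

def split_and_nots(prompt: str) -> tuple[str, str]:
    """Hypothetical helper: pull AND NOT clauses out into a negative prompt."""
    # Break the prompt at every "AND NOT"; the first piece starts the positive prompt.
    parts = re.split(r"\bAND\s+NOT\b", prompt)
    positive = parts[0].strip()
    negatives = []
    for chunk in parts[1:]:
        # A plain "AND" inside a chunk resumes the positive prompt.
        neg, sep, rest = chunk.partition(" AND ")
        negatives.append(neg.strip())
        if sep:
            positive += " AND " + rest.strip()
    return positive, ", ".join(negatives)

print(split_and_nots("a castle AND NOT blurry AND NOT lowres AND a moat"))
# -> ('a castle AND a moat', 'blurry, lowres')
```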

Got 7B models working on my Tesla M40 w/ 24 GB of RAM.
I was able to get the 3B-parameter model to work on CPU with 16 GB of RAM.
I had to disable torch.backends.cudnn and convert the model to float. Check out my repo: https://github.com/astrobleem/Simple-StableLM-Chat

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.float().to(device)              # float32: the M40 (Maxwell) has no fast fp16
torch.backends.cudnn.enabled = False  # cuDNN workaround needed on this older card
```
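Pieced together, a self-contained sketch of that setup might look like the following; the checkpoint name, device selection, and basic `generate()` call are my assumptions for illustration, not necessarily what the repo does:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Assumed checkpoint; swap in whichever StableLM variant fits your hardware.
MODEL_NAME = "stabilityai/stablelm-tuned-alpha-7b"
device = "cuda" if torch.cuda.is_available() else "cpu"

torch.backends.cudnn.enabled = False  # workaround for older cards like the Tesla M40

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.float().to(device)  # keep weights in float32; pre-Pascal GPUs lack fast fp16

prompt = "Hello, how are you?"
inputs = tokenizer(prompt, return_tensors="pt").to(device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```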
