linuxmagic-mp
Author should comment on whether this ticket can be reproduced on a more recent build.
When it comes down to a 'success' story, your request might need to be a little narrower. Which model are you looking for a success story on? When it comes...
If this project wants to remain active, it will have to address the problem(s) surrounding 'pip' install. Note: I am running Python 3.10, and pip should NOT be reporting `Collecting tensorflow-gpu>=2.3.1`...
Would it not be better to update this project? Should it not be able to support newer Python and TensorFlow?
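For background on why that pin fails (my understanding, not something stated in this thread): the standalone `tensorflow-gpu` package was deprecated in favor of the plain `tensorflow` wheel, which has bundled GPU support since TF 2.1, and Python 3.10 wheels only exist from roughly TF 2.8 onward, so a `tensorflow-gpu>=2.3.1` pin cannot resolve cleanly on 3.10. A minimal sanity check, assuming the project's dependency is repointed from `tensorflow-gpu` to `tensorflow`:

```python
# Quick check that the plain "tensorflow" wheel sees the GPU; this assumes
# the project's dependency has been changed from the deprecated
# "tensorflow-gpu" package to "tensorflow" (GPU support is bundled
# since TF 2.1).
import tensorflow as tf

print("TensorFlow:", tf.__version__)
# Non-empty list on a machine with a working CUDA install; [] on CPU-only.
print("GPUs:", tf.config.list_physical_devices("GPU"))
```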
Okay, the problem here appears to lie in the way AutoModelForCausalLM is called. When using a ctransformers method, we should be calling it with a model_file argument. See https://github.com/marella/ctransformers/issues/60. Otherwise,...
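Roughly what I mean, as a sketch (the directory and file names are the ones from my setup below; `model_type="falcon"` is my assumption for this model):

```python
import os

from ctransformers import AutoModelForCausalLM

# Point from_pretrained at the model directory and name the GGML file
# explicitly via model_file, per the ctransformers issue linked above.
model_dir = os.path.expanduser("~/models/WizardLM-Uncensored-Falcon-40b")
llm = AutoModelForCausalLM.from_pretrained(
    model_dir,
    model_file="ggml-model-falcon-40b-wizardlm-qt_k5.bin",
    model_type="falcon",  # assumption: GGML Falcon model
)
print(llm("Once upon a time"))
```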
And to continue, I have the actual filename in the reference file:
```
cat ~/lollms_data/models/llama_cpp_official/falcon-40b-ggml.reference
~/models/WizardLM-Uncensored-Falcon-40b/ggml-model-falcon-40b-wizardlm-qt_k5.bin
```
However, it doesn't seem that it properly sees that as a file. Instead,...
Well, I guess I am still talking to myself... ;) I went up and down this code, and there doesn't seem to be a 'recommended' way to handle setting the actual filename...
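For anyone following along, here is a rough sketch of what a 'recommended' handler could look like. `resolve_reference` is a made-up name, not something in this codebase; the point is simply that `~` needs expanding before the existence check, which I suspect is why the path above is not seen as a file:

```python
import os

def resolve_reference(reference_path: str) -> str:
    """Read a .reference file and return the model path it points to."""
    # Hypothetical helper, not part of the lollms codebase.
    with open(os.path.expanduser(reference_path)) as f:
        target = f.read().strip()
    # Expand ~ and $VARS; a leading ~ left unexpanded would make
    # os.path.isfile() fail even though the file exists.
    target = os.path.expandvars(os.path.expanduser(target))
    if not os.path.isfile(target):
        raise FileNotFoundError(f"Referenced model file not found: {target}")
    return target

model_file = resolve_reference(
    "~/lollms_data/models/llama_cpp_official/falcon-40b-ggml.reference"
)
```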
Can we get a clearer status update? Your README isn't clear on whether everything is good with the 176B quantize; I am still having a problem with it on bloom.cpp, and...
Getting lost in this thread. I just converted the 176B model into GGML, fp16, and am now looking at using bloom.cpp, but noticed that @barsuma's README appears to reflect that there are...