localGPT
Now you can choose a model by its number or by its full name.
python run_localGPT.py
For example:
(you input:) 3
(loading NousResearch/Nous-Hermes-13b)
or
(you input:) NousResearch/Nous-Hermes-13b
(loading NousResearch/Nous-Hermes-13b)
1 TheBloke/vicuna-7B-1.1-HF [Fastest]
2 TheBloke/Wizard-Vicuna-13B-Uncensored-HF
3 NousResearch/Nous-Hermes-13b [Recommended]
4 TheBloke/guanaco-65B-HF [Biggest]
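The selection logic described above could be sketched roughly like this. This is a minimal illustration, not the actual run_localGPT.py code; the names `MODELS` and `choose_model` are hypothetical:

```python
# Hypothetical sketch of choosing a model by number or by full name.
# MODELS and choose_model are illustrative names, not the real implementation.
MODELS = [
    "TheBloke/vicuna-7B-1.1-HF",
    "TheBloke/Wizard-Vicuna-13B-Uncensored-HF",
    "NousResearch/Nous-Hermes-13b",
    "TheBloke/guanaco-65B-HF",
]

def choose_model(user_input: str) -> str:
    """Accept either a 1-based menu number or a full model id."""
    text = user_input.strip()
    if text.isdigit():
        index = int(text) - 1  # menu is 1-based, list is 0-based
        if 0 <= index < len(MODELS):
            return MODELS[index]
        raise ValueError(f"number out of range: {text}")
    if text in MODELS:
        return text
    raise ValueError(f"unknown model: {text}")
```

So entering either `3` or `NousResearch/Nous-Hermes-13b` would resolve to the same model id.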
@PromtEngineer I don't know why the original PR #109 was closed when I wanted to sync with your main branch before modifying, so I had to create this new PR. This should work as you generously suggested in your comments.
@maxchiron Have you tested the 3 NousResearch/Nous-Hermes-13b [Recommended] model with localGPT? Were you able to run it?
@PromtEngineer Yes, it runs with localGPT. In fact, it's not just models whose names end with -HF: any pytorch-xxx-xxx.bin model works, even without the -HF suffix, although the prompt formats are not all the same. And surprisingly, Nous-Hermes-13b was the most efficient for me.
By efficiency I mean: faster than guanaco-65B and equal in speed to Wizard-Vicuna-13B, but with the best and most complete responses on clinical medicine knowledge (fed with related books via your ingest.py, of course). My setup: Linux, 4 x 3090 24 GB.
Thanks for sharing this, localGPT is really good. I hope it will work with text-generation-webui soon.