PromptEngineer

107 comments by PromptEngineer

> I couldn't make it work on my Linux system b/c of some Python dependency problems. Made this thing run in a Docker. Can you please update the Readme with...

@maxchiron this is a good idea. However, I think it will be better if, just like the Oobabooga text-generation-webui, the user has to input a number rather than the...
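The numbered-choice flow suggested above could be sketched roughly like this. This is only an illustration, not localGPT's actual code; the model names and the `parse_choice` helper are hypothetical.

```python
# Hypothetical sketch of a numbered model picker (not localGPT's real CLI).
MODELS = [
    "TheBloke/WizardLM-7B-uncensored-GPTQ",  # example entry
    "NousResearch/Nous-Hermes-13b",          # example entry
]

def parse_choice(raw: str, n: int):
    """Return a 1-based index if raw is a valid selection among n options, else None."""
    if raw.isdigit() and 1 <= int(raw) <= n:
        return int(raw)
    return None

def choose_model() -> str:
    """Print the numbered list and loop until the user enters a valid number."""
    for i, name in enumerate(MODELS, start=1):
        print(f"{i}) {name}")
    while True:
        choice = parse_choice(input("Enter the model number: "), len(MODELS))
        if choice is not None:
            return MODELS[choice - 1]
        print("Invalid choice, try again.")
```

Validating the raw input in a separate helper keeps the interactive loop trivial and makes the selection logic easy to test.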

@maxchiron have you tested model 3, NousResearch/Nous-Hermes-13b [Recommand], with localGPT? Were you able to run it?

@ashokrs look for the model names that end with -HF. I am trying to move from langchain to llamacpp. That will hopefully give us the ability to run quantized version...

@Devesh-N can you please explain the changes that were made? Thanks.

The current code doesn't support quantized models, but support is coming soon :)

Great idea, thanks. Will appreciate that.

@Allaye When we are adding the model choice to CLI, we have the default `model_basename` set to `WizardLM-7B-uncensored-GPTQ-4bit-128g.compat.no-act-order.safetensors`. The unquantized models do not have the `model_basename` and I think it...

@Allaye based on the `if` condition in the `load_model` function, since we will be providing a default `model_basename`, that branch will be taken irrespective of whether the model is quantized or not, so it will always...
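The branching issue described in the two comments above can be sketched as follows. This only mirrors the shape of the `if` condition being discussed; `load_model`'s real signature and return values in localGPT may differ.

```python
# Sketch of the default-basename problem (assumed signature, not localGPT's real one).
DEFAULT_BASENAME = "WizardLM-7B-uncensored-GPTQ-4bit-128g.compat.no-act-order.safetensors"

def pick_loader(model_basename=DEFAULT_BASENAME):
    """Illustrates which loading path the if-condition selects."""
    # Because a default basename is always supplied, this branch is taken
    # even for unquantized (-HF) models, which have no model_basename.
    if model_basename is not None:
        return "quantized"   # e.g. the GPTQ/safetensors path
    return "full-precision"  # e.g. the standard transformers path
```

As the sketch shows, an unquantized model would have to pass `model_basename=None` explicitly to reach the full-precision branch, which is why the default value needs to be handled when adding the model choice to the CLI.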

@Allaye thanks for the update. I will have a detailed look at it later today and will merge it if I don't see any further changes that need to be...