
How can I run it?

Open tliinh opened this issue 1 year ago • 15 comments

tliinh avatar May 13 '23 03:05 tliinh

What's the issue?

Abhinavcode13 avatar May 13 '23 03:05 Abhinavcode13

Traceback (most recent call last):
  File "G:\privateGPT-main\ingest.py", line 7, in <module>
    from constants import CHROMA_SETTINGS
  File "G:\privateGPT-main\constants.py", line 11, in <module>
    CHROMA_SETTINGS = Settings(
                      ^^^^^^^^^
  File "pydantic\env_settings.py", line 39, in pydantic.env_settings.BaseSettings.__init__
  File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for Settings
persist_directory
  none is not an allowed value (type=type_error.none.not_allowed)

rabotone avatar May 13 '23 04:05 rabotone

Rename example.env to .env and edit the variables appropriately.
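On Windows that is roughly (a rough sketch, run from the project folder; use mv instead of ren on Linux/macOS):

ren example.env .env

Then open .env in a text editor and point the model variables at your downloaded .bin files.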

zuomirror avatar May 13 '23 04:05 zuomirror

Traceback (most recent call last):
  File "C:\Program Files\Python311\Lib\site-packages\langchain\embeddings\llamacpp.py", line 78, in validate_environment
    values["client"] = Llama(
                       ^^^^^^
  File "C:\Program Files\Python311\Lib\site-packages\llama_cpp\llama.py", line 153, in __init__
    raise ValueError(f"Model path does not exist: {model_path}")
ValueError: Model path does not exist: models/ggml-model-q4_0.bin

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "G:\privateGPT-main\ingest.py", line 35, in <module>
    main()
  File "G:\privateGPT-main\ingest.py", line 28, in main
    llama = LlamaCppEmbeddings(model_path=llama_embeddings_model, n_ctx=model_n_ctx)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "pydantic\main.py", line 339, in pydantic.main.BaseModel.__init__
  File "pydantic\main.py", line 1102, in pydantic.main.validate_model
  File "C:\Program Files\Python311\Lib\site-packages\langchain\embeddings\llamacpp.py", line 98, in validate_environment
    raise NameError(f"Could not load Llama model from path: {model_path}")
NameError: Could not load Llama model from path: models/ggml-model-q4_0.bin

Exception ignored in: <function Llama.__del__ at 0x000002AE4688C040>
Traceback (most recent call last):
  File "C:\Program Files\Python311\Lib\site-packages\llama_cpp\llama.py", line 978, in __del__
    if self.ctx is not None:
       ^^^^^^^^
AttributeError: 'Llama' object has no attribute 'ctx'
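That ValueError just means llama-cpp cannot find models/ggml-model-q4_0.bin relative to the folder you run ingest.py from. A quick sanity check, assuming the repo root is your working directory and the default file names from example.env (a sketch, not part of the project):

import os

# print which of the expected model files are actually present on disk
for path in ("models/ggml-model-q4_0.bin", "models/ggml-gpt4all-j-v1.3-groovy.bin"):
    print(path, "->", "found" if os.path.isfile(path) else "MISSING")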

rabotone avatar May 13 '23 04:05 rabotone

How do I edit the variables appropriately?

rabotone avatar May 13 '23 04:05 rabotone

@zuomirror

rabotone avatar May 13 '23 04:05 rabotone

@rabotone
PERSIST_DIRECTORY=db
LLAMA_EMBEDDINGS_MODEL=models/ggml-model-q4_0.bin
MODEL_TYPE=GPT4All
MODEL_PATH=models/ggml-gpt4all-j-v1.3-groovy.bin
MODEL_N_CTX=1000
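Those are relative paths, so they only resolve if you run the scripts from the checkout root and the layout looks roughly like this (assuming both .bin files have already been downloaded into models/; db is created when ingest.py runs):

privateGPT-main/
  .env
  ingest.py
  db/
  models/
    ggml-model-q4_0.bin
    ggml-gpt4all-j-v1.3-groovy.bin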

zuomirror avatar May 13 '23 05:05 zuomirror

@rabotone
PERSIST_DIRECTORY=db
LLAMA_EMBEDDINGS_MODEL=models/ggml-model-q4_0.bin
MODEL_TYPE=GPT4All
MODEL_PATH=models/ggml-gpt4all-j-v1.3-groovy.bin
MODEL_N_CTX=1000

Does it only work on Linux, or can it run on Windows?

rabotone avatar May 13 '23 06:05 rabotone

@zuomirror

rabotone avatar May 13 '23 06:05 rabotone

You need to provide the absolute path to your LLAMA model in the .env file. See #68 for details.
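For example, with the checkout at G:\privateGPT-main as in the tracebacks above, the two model lines in .env would look something like this (illustrative paths; point them at wherever your .bin files actually are):

LLAMA_EMBEDDINGS_MODEL=G:\privateGPT-main\models\ggml-model-q4_0.bin
MODEL_PATH=G:\privateGPT-main\models\ggml-gpt4all-j-v1.3-groovy.bin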

andreakiro avatar May 13 '23 08:05 andreakiro

Isn't it already explained?

j4acks0n avatar May 13 '23 18:05 j4acks0n

I am getting this error, although I have set everything up exactly as zuomirror did.

[screenshot of the error]

IamHellToday avatar May 13 '23 18:05 IamHellToday

Rename example.env to .env and edit the variables appropriately.

I think Conda is better, right? 🤔

j4acks0n avatar May 13 '23 18:05 j4acks0n

I am getting this error, although I have set everything up exactly as zuomirror did.

[screenshot of the error]

Make sure your spelling is correct. Or reinstall it.

j4acks0n avatar May 13 '23 18:05 j4acks0n

All solved now - guys, keep your Python updated, lol.

IamHellToday avatar May 13 '23 20:05 IamHellToday