
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. weaviate-client 3.19.2 requires requests<2.29.0,>=2.28.0, but you have requests 2.31.0 which is incompatible.
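To see why pip flags this (a sketch, not part of the original report): weaviate-client 3.19.2 pins requests to `>=2.28.0,<2.29.0`, and the installed 2.31.0 falls outside that range. A minimal version comparison shows the mismatch:

```python
# Check whether an installed version satisfies weaviate-client's pin on requests.
def parse(version: str) -> tuple:
    """Turn '2.31.0' into (2, 31, 0) for tuple comparison."""
    return tuple(int(part) for part in version.split("."))

installed = parse("2.31.0")
lower, upper = parse("2.28.0"), parse("2.29.0")

# The constraint from the pip error: requests<2.29.0,>=2.28.0
satisfies = lower <= installed < upper
print(satisfies)  # False: 2.31.0 is outside the pinned range
```

One common workaround is to downgrade into the pinned range, e.g. `pip install "requests>=2.28.0,<2.29.0"`, though whether that is safe depends on what else in the environment needs the newer requests.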

Open torpac opened this issue 2 years ago • 3 comments

I'm using W11 Pro, and I can't seem to solve this.

torpac · May 31 '23

```
Using embedded DuckDB with persistence: data will be stored in: db
llama.cpp: loading model from models/ggml-vic13b-uncensored-q4_0.bin
error loading model: unknown (magic, version) combination: 67676a74, 00000003; is this really a GGML file?
llama_init_from_file: failed to load model
Traceback (most recent call last):
  File "C:\TCHT\privateGPT\privateGPT.py", line 76, in <module>
    main()
  File "C:\TCHT\privateGPT\privateGPT.py", line 34, in main
    llm = LlamaCpp(model_path=model_path, n_ctx=model_n_ctx, callbacks=callbacks, verbose=False)
  File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for LlamaCpp
__root__
  Could not load Llama model from path: models/ggml-vic13b-uncensored-q4_0.bin. Received error (type=value_error)
```

It is giving another error now.

torpac · Jun 01 '23

You are using an older model. You have to use a GGML v3 model.
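For anyone who wants to confirm what format a downloaded .bin actually is: the first eight bytes of the file encode the container magic and version that llama.cpp complains about (67676a74 is the "ggjt" magic). Here is a rough sketch; the constants are my reading of older llama.cpp sources and may not match every release, so double-check against the version you are running:

```python
import struct

# Known GGML container magics (little-endian uint32), per older llama.cpp
# releases. These are an assumption from my reading of the llama.cpp source.
MAGICS = {
    0x67676D6C: "ggml (unversioned, very old)",
    0x67676D66: "ggmf (versioned)",
    0x67676A74: "ggjt (versioned, v1-v3)",
}

def identify(path: str) -> str:
    """Read the magic (and version, if present) from a model file header."""
    with open(path, "rb") as f:
        magic, = struct.unpack("<I", f.read(4))
        name = MAGICS.get(magic)
        if name is None:
            return f"unknown magic {magic:08x} - probably not a GGML file"
        if "versioned" in name:
            version, = struct.unpack("<I", f.read(4))
            return f"{name}, version {version}"
        return name
```

Note that the traceback above reports magic 67676a74 with version 00000003, i.e. a ggjt v3 file, which suggests the loader bundled with the installer may be the out-of-date half of the pair; upgrading llama-cpp-python, or re-downloading a model in a format your loader supports, are the usual ways out.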

maozdemir · Jun 04 '23

I am having a similar (if not identical) issue to the one @torpac noted, with a brand-new install of the TroubleChute PrivateGPT One-Line launcher.

I chose the Vicuna 13B Uncensored model during my installation.

```
Launching privateGPT...
Using embedded DuckDB with persistence: data will be stored in: db
llama.cpp: loading model from models/ggml-vic13b-uncensored-q4_0.bin
error loading model: unknown (magic, version) combination: 67676a74, 00000003; is this really a GGML file?
llama_init_from_file: failed to load model
Traceback (most recent call last):
  File "C:\TCHT\privateGPT\privateGPT.py", line 82, in <module>
    main()
  File "C:\TCHT\privateGPT\privateGPT.py", line 36, in main
    llm = LlamaCpp(model_path=model_path, n_ctx=model_n_ctx, n_batch=model_n_batch, callbacks=callbacks, verbose=False)
  File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for LlamaCpp
__root__
  Could not load Llama model from path: models/ggml-vic13b-uncensored-q4_0.bin. Received error (type=value_error)
```


TroubleChute One-Line installer:

Python exited with an error code. Did you see something about 'ModuleNotFoundError' above? (y/n):

System info if needed:

  • HP Z820 Workstation
  • 2x Intel Xeon E5-2690 @ 3 GHz
  • 128 GB RAM
  • Windows 10 Pro 22H2
  • 2x Nvidia Quadro K5200 in SLI mode
  • Installed on Local Disk C (Micron M5550 SSD)

You will need to explain the fix to me as if my knowledge of programming were from the QuickBASIC and COBOL days (please). I am a Windows system admin, and I know just enough to know I need assistance. :)

Thanks in advance.

Matt-Miracle · Jun 15 '23