private-gpt
Optimizing the Dockerfile and/or the documentation on how to run with the container
I built the image from Dockerfile.local using docker-compose, but the container fails to start. I ran it in interactive mode to inspect the problem: it cannot initialize.
It would be better to download the model and dependencies automatically, and/or to document how to run the application with the container.
Run:
docker run -it privategpt-private-gpt:latest bash
Output:
16:03:51.306 [INFO ] private_gpt.settings.settings_loader - Starting application with profiles=['default']
There was a problem when trying to write in your cache folder (/nonexistent/.cache/huggingface/hub). You should set the environment variable TRANSFORMERS_CACHE to a writable directory.
16:04:02.044 [WARNING ] matplotlib - Matplotlib created a temporary cache directory at /tmp/matplotlib-vs3jk8yh because the default path (/nonexistent/.config/matplotlib) is not a writable directory; it is highly recommended to set the MPLCONFIGDIR environment variable to a writable directory, in particular to speed up the import of Matplotlib and to better support multiprocessing.
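One way to address the two cache warnings above (a sketch, assuming you run the image directly; the image tag is taken from the report and the cache paths are arbitrary writable locations) is to point both caches at writable directories via environment variables:

```shell
# Hypothetical invocation: redirect the Hugging Face and Matplotlib
# caches to writable paths inside the container.
docker run -it \
  -e TRANSFORMERS_CACHE=/tmp/hf_cache \
  -e MPLCONFIGDIR=/tmp/matplotlib \
  privategpt-private-gpt:latest bash
```

This only silences the warnings; it does not fix the missing-model error below.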
16:04:03.289 [INFO ] matplotlib.font_manager - generated new fontManager
tokenizer_config.json: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.46k/1.46k [00:00<00:00, 8.99MB/s]
tokenizer.model: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 493k/493k [00:00<00:00, 11.4MB/s]
tokenizer.json: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.80M/1.80M [00:00<00:00, 6.04MB/s]
special_tokens_map.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 72.0/72.0 [00:00<00:00, 267kB/s]
16:04:09.004 [INFO ] private_gpt.components.llm.llm_component - Initializing the LLM in mode=local
Traceback (most recent call last):
File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 798, in get
return self._context[key]
~~~~~~~~~~~~~^^^^^
KeyError: <class 'private_gpt.ui.ui.PrivateGptUi'>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 798, in get
return self._context[key]
~~~~~~~~~~~~~^^^^^
KeyError: <class 'private_gpt.server.ingest.ingest_service.IngestService'>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 798, in get
return self._context[key]
~~~~~~~~~~~~~^^^^^
KeyError: <class 'private_gpt.components.llm.llm_component.LLMComponent'>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/home/worker/app/private_gpt/__main__.py", line 5, in <module>
from private_gpt.main import app
File "/home/worker/app/private_gpt/main.py", line 11, in <module>
app = create_app(global_injector)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/worker/app/private_gpt/launcher.py", line 50, in create_app
ui = root_injector.get(PrivateGptUi)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 91, in wrapper
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 974, in get
provider_instance = scope_instance.get(interface, binding.provider)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 91, in wrapper
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 800, in get
instance = self._get_instance(key, provider, self.injector)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 811, in _get_instance
return provider.get(injector)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 264, in get
return injector.create_object(self._cls)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 998, in create_object
self.call_with_injection(init, self_=instance, kwargs=additional_kwargs)
File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 1031, in call_with_injection
dependencies = self.args_to_inject(
^^^^^^^^^^^^^^^^^^^^
File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 91, in wrapper
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 1079, in args_to_inject
instance: Any = self.get(interface)
^^^^^^^^^^^^^^^^^^^
File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 91, in wrapper
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 974, in get
provider_instance = scope_instance.get(interface, binding.provider)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 91, in wrapper
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 800, in get
instance = self._get_instance(key, provider, self.injector)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 811, in _get_instance
return provider.get(injector)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 264, in get
return injector.create_object(self._cls)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 998, in create_object
self.call_with_injection(init, self_=instance, kwargs=additional_kwargs)
File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 1031, in call_with_injection
dependencies = self.args_to_inject(
^^^^^^^^^^^^^^^^^^^^
File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 91, in wrapper
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 1079, in args_to_inject
instance: Any = self.get(interface)
^^^^^^^^^^^^^^^^^^^
File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 91, in wrapper
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 974, in get
provider_instance = scope_instance.get(interface, binding.provider)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 91, in wrapper
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 800, in get
instance = self._get_instance(key, provider, self.injector)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 811, in _get_instance
return provider.get(injector)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 264, in get
return injector.create_object(self._cls)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 998, in create_object
self.call_with_injection(init, self_=instance, kwargs=additional_kwargs)
File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 1040, in call_with_injection
return callable(*full_args, **dependencies)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/worker/app/private_gpt/components/llm/llm_component.py", line 38, in __init__
self.llm = LlamaCPP(
^^^^^^^^^
File "/home/worker/app/.venv/lib/python3.11/site-packages/llama_index/llms/llama_cpp.py", line 119, in __init__
raise ValueError(
ValueError: Provided model path does not exist. Please check the path or provide a model_url to download.
The same issue :/
Hi
Edit 12 Feb 2024: These steps are suboptimal, scroll down in this conversation for the ideal way.
I ran into the same issue at first. It now seems fixed for me after executing the following steps:
- Download a model from huggingface.co
- Place the model in the "models" folder and make sure to mount that folder as a volume:
volumes:
- ./local_data/:/home/worker/app/local_data
- ./models/:/home/worker/app/models
- Adjust "settings-docker.yaml" to reference the filename of your model:
local:
llm_hf_repo_id: ${PGPT_HF_REPO_ID:TheBloke/Mistral-7B-Instruct-v0.1-GGUF}
llm_hf_model_file: ${PGPT_HF_MODEL_FILE:mistral-7b-instruct-v0.2.Q5_K_M.gguf} # The actual model you downloaded
embedding_hf_model_name: ${PGPT_EMBEDDING_HF_MODEL_NAME:BAAI/bge-small-en-v1.5}
- Make sure settings-docker.yaml is actually used by setting the environment variable PGPT_PROFILES to "docker":
environment:
PORT: 8080
PGPT_PROFILES: docker
PGPT_MODE: local
Hope this helps, if it does, make sure to give a 👍
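Putting the steps above together, a minimal compose service could look like this (a sketch, not the project's official compose file; the build section and port are assumptions based on this thread):

```yaml
services:
  private-gpt:
    build:
      dockerfile: Dockerfile.local   # assumption: build from the local Dockerfile
    ports:
      - "8080:8080"
    volumes:
      - ./local_data/:/home/worker/app/local_data
      - ./models/:/home/worker/app/models   # the folder holding your downloaded .gguf model
    environment:
      PORT: 8080
      PGPT_PROFILES: docker
      PGPT_MODE: local
```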
Stale issue
You should run `poetry run python scripts/setup` before `make run`.
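For a non-Docker run, the sequence would be (assuming a checkout of the repository with its Makefile and a working poetry environment):

```shell
# Download the model and tokenizer defined in the active settings profile
poetry run python scripts/setup
# Then start the application
make run
```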
Hey @Robinsane, trying your suggestion didn't fix the image build for me. Now I'm trying to compose up, but get told:
invalid interpolation format for services.private-gpt.local.llm_hf_repo_id. You may need to escape any $ with another $.
Does your much larger brain hold any insights about this?
Brain not that big, no clue about your problem. I can however say that the steps I described above are suboptimal. The ideal way to do it is described by @imartinez at the top of the following PR: https://github.com/imartinez/privateGPT/pull/1445
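Regarding the interpolation error above: docker-compose performs its own `${...}` variable substitution and does not accept the `${VAR:default}` form that privateGPT's settings files use (compose expects `${VAR:-default}`). If such a value ends up in a file compose parses, each `$` can be escaped with `$$` so compose passes it through literally. A sketch (the key name is illustrative, not from the project):

```yaml
environment:
  # $$ is passed through to the container as a literal $
  EXAMPLE_VALUE: "$${PGPT_HF_REPO_ID:TheBloke/Mistral-7B-Instruct-v0.1-GGUF}"
```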
@Robinsane thanks a lot for that pointer! While I struggled to get it running as @imartinez described, I changed
docker compose run --rm --entrypoint="bash -c '[ -f scripts/setup ] && scripts/setup'" private-gpt
to
docker compose run --rm --entrypoint="/usr/bin/env python3 scripts/setup" private-gpt
because I got a permission error when trying to use the original. It seems to have worked, as it's downloading the models right now.
Update: Well, I'll be damned, it worked, and pretty well at that. Even changing the models works! Now I just need to figure out how to get it to use the GPU...