
[Bug]: Incorrect path_or_model_id: '/usr/src/app/valhalla/prompt_security/./prompt-guard-86m'.

Open namJeongwan opened this issue 10 months ago • 2 comments

What happened?

When deploying Helicone with docker-compose, the helicone-jawn container fails while loading the PromptGuardModel: the model path it is given does not exist inside the container, so transformers cannot resolve it as either a local folder or a Hub repo_id.

Error Message

helicone-jawn                    | OSError: Incorrect path_or_model_id: '/usr/src/app/valhalla/prompt_security/./prompt-guard-86m'. Please provide either the path to a local folder or the repo_id of a model on the Hub.
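For context, here is a rough sketch (not the actual transformers implementation) of how `from_pretrained` appears to interpret its first argument: an existing directory is loaded from disk, a plain `org/name` string is treated as a Hub repo_id, and anything else, such as an absolute path that does not exist inside the container, produces the `OSError` above.

```python
import os

def classify_path_or_model_id(path_or_id: str) -> str:
    """Rough approximation of how transformers interprets the argument
    to from_pretrained; not the real library code."""
    if os.path.isdir(path_or_id):
        return "local"  # existing folder -> load model files from disk
    # A Hub repo_id looks like "org/name": at most one slash and no
    # leading "." or "/". Anything else fails validation, which
    # transformers surfaces as "OSError: Incorrect path_or_model_id".
    if path_or_id.count("/") <= 1 and not path_or_id.startswith((".", "/")):
        return "hub"
    return "invalid"
```

Since `/usr/src/app/valhalla/prompt_security/./prompt-guard-86m` is not a directory inside the image (the weights were never copied in), it falls into the last case, which matches the fix below: bake the model files into the image so the path exists.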

How I Solved It

# Navigate to the prompt security directory
cd helicone/valhalla/prompt_security/

# Log in to HuggingFace CLI
huggingface-cli login --token=${HF_TOKEN}

# Download the Prompt-Guard model
# Note: you must first have been granted access to Meta's llama models on HuggingFace
huggingface-cli download meta-llama/Prompt-Guard-86M --local-dir prompt-guard-86m

# Navigate back to the parent directory
cd ../

# Edit the Dockerfile
vi dockerfile
# Add the following line after the existing COPY command:
# COPY ./valhalla/prompt_security/prompt-guard-86m /usr/src/app/valhalla/prompt_security/prompt-guard-86m

# Navigate to the docker directory
cd ../docker

# Build and run the Docker containers
docker compose build
docker compose up -d
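Before rebuilding, it can help to confirm the download actually produced a usable local model folder. A small sketch (the required file names are an assumption; a Hugging Face model directory normally contains at least `config.json` and `tokenizer_config.json`):

```shell
# check_model_dir DIR: verify DIR looks like a usable local HF model folder
check_model_dir() {
  for f in config.json tokenizer_config.json; do
    [ -f "$1/$f" ] || { echo "missing: $1/$f"; return 1; }
  done
  echo "ok: $1"
}

# e.g. run from helicone/valhalla/prompt_security after the download:
# check_model_dir prompt-guard-86m
```

If any file is reported missing, the download was incomplete (often a sign that access to the gated model was not yet granted), and the COPY in the Dockerfile would bake a broken folder into the image.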

Relevant log output

helicone-jawn                    | The above exception was the direct cause of the following exception:
helicone-jawn                    |
helicone-jawn                    | Traceback (most recent call last):
helicone-jawn                    |   File "/usr/src/app/valhalla/prompt_security/main.py", line 166, in <module>
helicone-jawn                    |     global_model = PromptGuardModel(num_workers=cpu_count // 2).load_model()
helicone-jawn                    |                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
helicone-jawn                    |   File "/usr/src/app/valhalla/prompt_security/main.py", line 108, in load_model
helicone-jawn                    |     self.tokenizer = AutoTokenizer.from_pretrained(
helicone-jawn                    |                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
helicone-jawn                    |   File "/usr/src/app/valhalla/prompt_security/venv/lib/python3.11/site-packages/transformers/models/auto/tokenization_auto.py", line 881, in from_pretrained
helicone-jawn                    |     tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)
helicone-jawn                    |                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
helicone-jawn                    |   File "/usr/src/app/valhalla/prompt_security/venv/lib/python3.11/site-packages/transformers/models/auto/tokenization_auto.py", line 713, in get_tokenizer_config
helicone-jawn                    |     resolved_config_file = cached_file(
helicone-jawn                    |                            ^^^^^^^^^^^^
helicone-jawn                    |   File "/usr/src/app/valhalla/prompt_security/venv/lib/python3.11/site-packages/transformers/utils/hub.py", line 408, in cached_file
helicone-jawn                    |     raise EnvironmentError(
helicone-jawn                    | OSError: Incorrect path_or_model_id: '/usr/src/app/valhalla/prompt_security/./prompt-guard-86m'. Please provide either the path to a local folder or the repo_id of a model on the Hub.


namJeongwan avatar Feb 21 '25 05:02 namJeongwan

same!

sheiy avatar Mar 11 '25 09:03 sheiy


Solution: access the app via localhost:3000, not 127.0.0.1:3000

sheiy avatar Mar 11 '25 09:03 sheiy