private-gpt
OSError: It looks like the config file at 'models/ggml-model-q4_0.bin' is not a valid JSON file.
Hello all,
When I run python3 ingest.py on my Mac, I see the error below. I tried one of the suggested solutions (https://github.com/imartinez/privateGPT/issues/564), which adds two lines of code, but I still see the same error afterwards. Can anyone help me with this?

Error:
File "/usr/local/lib/python3.11/site-packages/transformers/configuration_utils.py", line 662, in _get_config_dict
raise EnvironmentError
OSError: It looks like the config file at 'models/ggml-model-q4_0.bin' is not a valid JSON file.
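For what it's worth, the error happens because EMBEDDINGS_MODEL_NAME points at a GGML model binary, so transformers tries to read that binary file as a JSON config. A minimal sketch of why that fails (the header bytes below are illustrative, not taken from a real GGML file):

```python
import json

# GGML model files begin with binary magic bytes, not JSON text.
# These bytes are made up for illustration only.
fake_ggml_header = b"\x67\x67\x6a\x74\xe0\x01\x00\x00"

try:
    # transformers does essentially this: read the file as UTF-8, then json-parse it.
    json.loads(fake_ggml_header.decode("utf-8"))
except UnicodeDecodeError as err:
    # Decoding the binary blows up first, which is the same failure mode
    # reported in the tracebacks in this thread.
    print("not valid JSON:", err)
```

So the fix is to point EMBEDDINGS_MODEL_NAME at an actual embeddings model, not at the GGML .bin file.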
I have the same issue. I tried the two-line solution as well, but it did not help. Has anyone found another solution for this problem?
I have a similar problem on Ubuntu:
python3 ingest.py
No sentence-transformers model found with name models/ggml-gpt4all-j-v1.3-groovy.bin. Creating a new one with MEAN pooling.
Traceback (most recent call last):
  File "/home/dell/.local/lib/python3.10/site-packages/transformers/configuration_utils.py", line 659, in _get_config_dict
    config_dict = cls._dict_from_json_file(resolved_config_file)
  File "/home/dell/.local/lib/python3.10/site-packages/transformers/configuration_utils.py", line 750, in _dict_from_json_file
    text = reader.read()
  File "/usr/lib/python3.10/codecs.py", line 322, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe0 in position 4: invalid continuation byte

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/dell/dell/openAI/privateGPT/ingest.py", line 171, in
I partly solved the problem. I say partly because I had to change embeddings_model_name from ggml-model-q4_0.bin to all-MiniLM-L6-v2. If you can switch to that one too, it should work with the following .env file:

PERSIST_DIRECTORY=db
MODEL_TYPE=GPT4All
MODEL_PATH=models/ggml-gpt4all-j-v1.3-groovy.bin
EMBEDDINGS_MODEL_NAME=all-MiniLM-L6-v2
MODEL_N_CTX=1000
MODEL_N_BATCH=8
TARGET_SOURCE_CHUNKS=4
You do not have to download all-MiniLM-L6-v2 manually; it gets downloaded automatically the first time you run it.
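As a quick sanity check before running ingest.py, you can verify that EMBEDDINGS_MODEL_NAME in your .env is a model name rather than a path to a GGML binary. A minimal stdlib sketch (the hand-rolled parsing here is a simplification of what python-dotenv does, and the helper names are my own):

```python
from pathlib import Path

def read_env(path: str) -> dict:
    """Parse simple KEY=VALUE lines from a .env file, skipping blanks and comments."""
    env = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip()
    return env

def embeddings_name_ok(env: dict) -> bool:
    """A .bin path here means transformers will try (and fail) to read it as JSON."""
    name = env.get("EMBEDDINGS_MODEL_NAME", "")
    return bool(name) and not name.endswith(".bin")
```

For example, embeddings_name_ok({"EMBEDDINGS_MODEL_NAME": "all-MiniLM-L6-v2"}) is True, while pointing it at models/ggml-model-q4_0.bin is flagged.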
I've tried all the suggestions. I'm on macOS. This is my .env:
PERSIST_DIRECTORY=db
MODEL_TYPE=GPT4All
MODEL_PATH=/Users/myusername/devwrk/python/privateGPT/models/ggml-gpt4all-j-v1.3-groovy.bin
EMBEDDINGS_MODEL_NAME=all-MiniLM-L6-v2
MODEL_N_CTX=1000
MODEL_N_BATCH=8
TARGET_SOURCE_CHUNKS=4
It seemed to download all-MiniLM-L6-v2, but now it's saying:
python privateGPT.py
File "/Users/myusername/devwrk/python/privateGPT/privateGPT.py", line 34
match model_type:
^
SyntaxError: invalid syntax
PrivateGPT.py:
I'll try to get the debugger working, as I don't normally do Python.
Update: I'm not getting the error now. It's running, but very slowly - you need a decent GPU! I just commented out the case statement as per:
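For context, the SyntaxError above appears because the match statement was added in Python 3.10, so privateGPT.py fails to even parse on older interpreters. Rather than commenting the block out, you could rewrite it with if/elif. This is a hedged sketch only: the GPT4All and LlamaCpp branch names are assumptions based on the MODEL_TYPE values in the .env files above, not the actual contents of privateGPT.py:

```python
# `match model_type:` is structural pattern matching, new in Python 3.10.
# On Python <= 3.9 it is a SyntaxError at parse time, before anything runs.
# The branches below are an illustrative if/elif rewrite, not the real code.

def describe_model(model_type: str) -> str:
    if model_type == "GPT4All":
        return "load a GPT4All model"
    elif model_type == "LlamaCpp":
        return "load a LlamaCpp model"
    else:
        raise ValueError(f"Model type {model_type} is not supported")
```

Checking `python3 --version` first tells you whether you need this workaround at all, or can simply run the original file under 3.10+.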