Error when running python ingest.py
(localgpt) F:\localGPT>python ingest.py
2024-02-29 23:49:13,666 - INFO - ingest.py:147 - Loading documents from F:\localGPT/SOURCE_DOCUMENTS
Importing: METODIChKA_pechat_04_01.pdf
F:\localGPT/SOURCE_DOCUMENTS\METODIChKA_pechat_04_01.pdf loaded.
F:\localGPT/SOURCE_DOCUMENTS\METODIChKA_pechat_04_01.pdf loading error:
No module named 'torchvision'
2024-02-29 23:49:25,149 - INFO - ingest.py:156 - Loaded 1 documents from F:\localGPT/SOURCE_DOCUMENTS
2024-02-29 23:49:25,150 - INFO - ingest.py:157 - Split into 0 chunks of text
2024-02-29 23:49:26,491 - INFO - SentenceTransformer.py:66 - Load pretrained SentenceTransformer: cointegrated/rubert-tiny2
2024-02-29 23:49:28,755 - INFO - ingest.py:168 - Loaded embeddings from cointegrated/rubert-tiny2
Batches: 0it [00:00, ?it/s]
Traceback (most recent call last):
  File "F:\localGPT\ingest.py", line 182, in <module>
    main()
  File "C:\Users\Cash\AppData\Local\NVIDIA\MiniConda\envs\localgpt\Lib\site-packages\click\core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Cash\AppData\Local\NVIDIA\MiniConda\envs\localgpt\Lib\site-packages\click\core.py", line 1078, in main
    rv = self.invoke(ctx)
         ^^^^^^^^^^^^^^^^
  File "C:\Users\Cash\AppData\Local\NVIDIA\MiniConda\envs\localgpt\Lib\site-packages\click\core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Cash\AppData\Local\NVIDIA\MiniConda\envs\localgpt\Lib\site-packages\click\core.py", line 783, in invoke
    return __callback(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\localGPT\ingest.py", line 170, in main
    db = Chroma.from_documents(
         ^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Cash\AppData\Local\NVIDIA\MiniConda\envs\localgpt\Lib\site-packages\langchain\vectorstores\chroma.py", line 613, in from_documents
    return cls.from_texts(
           ^^^^^^^^^^^^^^^
  File "C:\Users\Cash\AppData\Local\NVIDIA\MiniConda\envs\localgpt\Lib\site-packages\langchain\vectorstores\chroma.py", line 577, in from_texts
    chroma_collection.add_texts(texts=texts, metadatas=metadatas, ids=ids)
  File "C:\Users\Cash\AppData\Local\NVIDIA\MiniConda\envs\localgpt\Lib\site-packages\langchain\vectorstores\chroma.py", line 236, in add_texts
    self._collection.upsert(
  File "C:\Users\Cash\AppData\Local\NVIDIA\MiniConda\envs\localgpt\Lib\site-packages\chromadb\api\models\Collection.py", line 294, in upsert
    ids, embeddings, metadatas, documents = self._validate_embedding_set(
                                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Cash\AppData\Local\NVIDIA\MiniConda\envs\localgpt\Lib\site-packages\chromadb\api\models\Collection.py", line 342, in _validate_embedding_set
    ids = validate_ids(maybe_cast_one_to_many(ids))
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Cash\AppData\Local\NVIDIA\MiniConda\envs\localgpt\Lib\site-packages\chromadb\api\types.py", line 99, in maybe_cast_one_to_many
    if isinstance(target[0], (int, float)):
                  ~~~~~~^^^
IndexError: list index out of range
I use the IlyaGusev/saiga_mistral_7b_gguf model with the cointegrated/rubert-tiny2 embedding model.
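Note that the IndexError at the bottom is only a downstream symptom: torchvision is missing, so the PDF loader fails, the splitter therefore produces 0 chunks (visible in the log line "Split into 0 chunks of text"), and Chroma.from_documents ends up calling maybe_cast_one_to_many on an empty ids list, where target[0] raises. Installing torchvision into the environment (pip install torchvision) should address the root cause. Below is a minimal sketch, not localGPT's actual code, of a guard that would make ingest.py fail with a clearer message; the variable name texts is an assumption standing in for the splitter output:

# Minimal sketch, not localGPT's actual code: fail fast when the splitter
# produced zero chunks, instead of letting Chroma raise IndexError later.
texts = []  # stands in for the list returned by the text splitter

if not texts:
    raise SystemExit(
        "Split produced 0 chunks - fix the document loader error above first "
        "(in this case, install torchvision into the environment)."
    )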
We managed to get rid of this error by deleting the virtual environment and reinstalling everything on Python 3.10. The embedding error is gone, but now a different error appears when launching the model itself, one that was not there before.
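After a reinstall like this, a quick way to verify that every optional backend resolves before launching is a find_spec loop; the module list below is my choice for this particular setup, not something localGPT ships:

# Hypothetical dependency check for this setup: print which backends resolve.
import importlib.util

for name in ("torch", "torchvision", "auto_gptq", "awq", "triton"):
    found = importlib.util.find_spec(name) is not None
    print(f"{name}: {'OK' if found else 'MISSING'}")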
(localgpt) F:\localGPT>python run_localGPT.py
Traceback (most recent call last):
  File "F:\localGPT\run_localGPT.py", line 24, in <module>
    from load_models import (
  File "F:\localGPT\load_models.py", line 6, in <module>
    from auto_gptq import AutoGPTQForCausalLM
  File "C:\Users\Cash\AppData\Local\NVIDIA\MiniConda\envs\localgpt\lib\site-packages\auto_gptq\__init__.py", line 4, in <module>
    from .utils.peft_utils import get_gptq_peft_model
  File "C:\Users\Cash\AppData\Local\NVIDIA\MiniConda\envs\localgpt\lib\site-packages\auto_gptq\utils\peft_utils.py", line 9, in <module>
    from peft import get_peft_model, PeftConfig, PeftModel, PeftType
  File "C:\Users\Cash\AppData\Local\NVIDIA\MiniConda\envs\localgpt\lib\site-packages\peft\__init__.py", line 22, in <module>
    from .auto import (
  File "C:\Users\Cash\AppData\Local\NVIDIA\MiniConda\envs\localgpt\lib\site-packages\peft\auto.py", line 32, in <module>
    from .mapping import MODEL_TYPE_TO_PEFT_MODEL_MAPPING
  File "C:\Users\Cash\AppData\Local\NVIDIA\MiniConda\envs\localgpt\lib\site-packages\peft\mapping.py", line 22, in <module>
    from .mixed_model import PeftMixedModel
  File "C:\Users\Cash\AppData\Local\NVIDIA\MiniConda\envs\localgpt\lib\site-packages\peft\mixed_model.py", line 26, in <module>
    from peft.tuners.mixed import COMPATIBLE_TUNER_TYPES
  File "C:\Users\Cash\AppData\Local\NVIDIA\MiniConda\envs\localgpt\lib\site-packages\peft\tuners\__init__.py", line 21, in <module>
    from .lora import LoraConfig, LoraModel, LoftQConfig
  File "C:\Users\Cash\AppData\Local\NVIDIA\MiniConda\envs\localgpt\lib\site-packages\peft\tuners\lora\__init__.py", line 20, in <module>
    from .model import LoraModel
  File "C:\Users\Cash\AppData\Local\NVIDIA\MiniConda\envs\localgpt\lib\site-packages\peft\tuners\lora\model.py", line 42, in <module>
    from .awq import dispatch_awq
  File "C:\Users\Cash\AppData\Local\NVIDIA\MiniConda\envs\localgpt\lib\site-packages\peft\tuners\lora\awq.py", line 26, in <module>
    from awq.modules.linear import WQLinear_GEMM
  File "C:\Users\Cash\AppData\Local\NVIDIA\MiniConda\envs\localgpt\lib\site-packages\awq\__init__.py", line 2, in <module>
    from awq.models.auto import AutoAWQForCausalLM
  File "C:\Users\Cash\AppData\Local\NVIDIA\MiniConda\envs\localgpt\lib\site-packages\awq\models\__init__.py", line 15, in <module>
    from .mixtral import MixtralAWQForCausalLM
  File "C:\Users\Cash\AppData\Local\NVIDIA\MiniConda\envs\localgpt\lib\site-packages\awq\models\mixtral.py", line 7, in <module>
    from awq.modules.fused.moe import FusedSparseMoeBlock
  File "C:\Users\Cash\AppData\Local\NVIDIA\MiniConda\envs\localgpt\lib\site-packages\awq\modules\fused\moe.py", line 2, in <module>
    import triton
ModuleNotFoundError: No module named 'triton'
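The import chain here is auto_gptq -> peft -> peft's LoRA AWQ shim -> awq -> triton: the installed autoawq package imports triton at module import time, and triton does not ship official Windows wheels, so the import of AutoGPTQForCausalLM fails before any model is loaded. Since IlyaGusev/saiga_mistral_7b_gguf is a GGUF model served through llama-cpp rather than AutoGPTQ, one workaround is pip uninstall autoawq (the peft/tuners/lora/awq.py frame suggests peft only imports awq when it is installed). Another, sketched below under the assumption that load_models.py only needs AutoGPTQForCausalLM for GPTQ checkpoints, is to make that import optional:

# Hedged sketch for load_models.py: make the GPTQ backend optional so a
# GGUF-only setup does not crash when triton/awq cannot be imported on Windows.
try:
    from auto_gptq import AutoGPTQForCausalLM
except ImportError:  # covers ModuleNotFoundError: No module named 'triton'
    AutoGPTQForCausalLM = None  # GPTQ (.safetensors) loading is disabled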