
could not broadcast input array from shape (768,) into shape (384,)

Open · lingfan opened this issue 11 months ago · 1 comment

```
(venv1) d:\ai\privateGPT>make run
poetry run python -m private_gpt
Warning: Found deprecated priority 'default' for source 'mirrors' in pyproject.toml. You can achieve the same effect by changing the priority to 'primary' and putting the source first.
23:52:07.515 [INFO ] private_gpt.settings.settings_loader - Starting application with profiles=['default', 'ollama']
23:52:19.360 [INFO ] private_gpt.components.llm.llm_component - Initializing the LLM in mode=ollama
23:52:20.761 [INFO ] private_gpt.components.embedding.embedding_component - Initializing the embedding model in mode=ollama
23:52:20.772 [INFO ] llama_index.core.indices.loading - Loading all indices.
23:52:21.056 [INFO ] private_gpt.ui.ui - Mounting the gradio UI, at path=/
23:52:21.148 [INFO ] uvicorn.error - Started server process [2440]
23:52:21.148 [INFO ] uvicorn.error - Waiting for application startup.
23:52:21.150 [INFO ] uvicorn.error - Application startup complete.
23:52:21.151 [INFO ] uvicorn.error - Uvicorn running on http://0.0.0.0:8001 (Press CTRL+C to quit)
23:52:34.594 [INFO ] uvicorn.access - 192.168.1.108:4044 - "POST /upload HTTP/1.0" 200
23:52:34.604 [INFO ] uvicorn.access - 192.168.1.108:4046 - "POST /queue/join HTTP/1.0" 200
23:52:34.617 [INFO ] uvicorn.access - 192.168.1.108:4048 - "GET /queue/data?session_hash=vl6m7smk2oo HTTP/1.0" 200
23:52:34.680 [INFO ] private_gpt.server.ingest.ingest_service - Ingesting file_names=['test.txt']
Parsing nodes: 100%|█████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 19.16it/s]
Generating embeddings: 100%|█████████████████████████████████████████████████████████████| 4/4 [00:19<00:00,  4.86s/it]
Generating embeddings: 0it [00:00, ?it/s]
Traceback (most recent call last):
  File "d:\miniconda3\envs\venv1\Lib\site-packages\gradio\queueing.py", line 495, in call_prediction
    output = await route_utils.call_process_api(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "d:\miniconda3\envs\venv1\Lib\site-packages\gradio\route_utils.py", line 235, in call_process_api
    output = await app.get_blocks().process_api(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "d:\miniconda3\envs\venv1\Lib\site-packages\gradio\blocks.py", line 1627, in process_api
    result = await self.call_function(
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "d:\miniconda3\envs\venv1\Lib\site-packages\gradio\blocks.py", line 1173, in call_function
    prediction = await anyio.to_thread.run_sync(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "d:\miniconda3\envs\venv1\Lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "d:\miniconda3\envs\venv1\Lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
           ^^^^^^^^^^^^
  File "d:\miniconda3\envs\venv1\Lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "d:\miniconda3\envs\venv1\Lib\site-packages\gradio\utils.py", line 690, in wrapper
    response = f(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^
  File "d:\ai\privateGPT\private_gpt\ui\ui.py", line 252, in _upload_file
    self._ingest_service.bulk_ingest([(str(path.name), path) for path in paths])
  File "d:\ai\privateGPT\private_gpt\server\ingest\ingest_service.py", line 84, in bulk_ingest
    documents = self.ingest_component.bulk_ingest(files)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "d:\ai\privateGPT\private_gpt\components\ingest\ingest_component.py", line 133, in bulk_ingest
    saved_documents.extend(self._save_docs(documents))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "d:\ai\privateGPT\private_gpt\components\ingest\ingest_component.py", line 140, in _save_docs
    self._index.insert(document, show_progress=True)
  File "d:\miniconda3\envs\venv1\Lib\site-packages\llama_index\core\indices\base.py", line 231, in insert
    self.insert_nodes(nodes, **insert_kwargs)
  File "d:\miniconda3\envs\venv1\Lib\site-packages\llama_index\core\indices\vector_store\base.py", line 320, in insert_nodes
    self._insert(nodes, **insert_kwargs)
  File "d:\miniconda3\envs\venv1\Lib\site-packages\llama_index\core\indices\vector_store\base.py", line 311, in _insert
    self._add_nodes_to_index(self._index_struct, nodes, **insert_kwargs)
  File "d:\miniconda3\envs\venv1\Lib\site-packages\llama_index\core\indices\vector_store\base.py", line 233, in _add_nodes_to_index
    new_ids = self._vector_store.add(nodes_batch, **insert_kwargs)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "d:\miniconda3\envs\venv1\Lib\site-packages\llama_index\vector_stores\qdrant\base.py", line 256, in add
    self._client.upload_points(
  File "d:\miniconda3\envs\venv1\Lib\site-packages\qdrant_client\qdrant_client.py", line 1872, in upload_points
    return self._client.upload_points(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "d:\miniconda3\envs\venv1\Lib\site-packages\qdrant_client\local\qdrant_local.py", line 698, in upload_points
    self._upload_points(collection_name, points)
  File "d:\miniconda3\envs\venv1\Lib\site-packages\qdrant_client\local\qdrant_local.py", line 712, in _upload_points
    collection.upsert(
  File "d:\miniconda3\envs\venv1\Lib\site-packages\qdrant_client\local\local_collection.py", line 1213, in upsert
    self._upsert_point(point)
  File "d:\miniconda3\envs\venv1\Lib\site-packages\qdrant_client\local\local_collection.py", line 1205, in _upsert_point
    self._add_point(point)
  File "d:\miniconda3\envs\venv1\Lib\site-packages\qdrant_client\local\local_collection.py", line 1147, in _add_point
    named_vectors[idx] = vector_np
    ~~~~~~~~~~~~~^^^^^
ValueError: could not broadcast input array from shape (768,) into shape (384,)
```
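The shapes in the error suggest a dimension mismatch between the stored index and the current embedding model: the local Qdrant collection appears to have been created for 384-dimensional vectors (the default HuggingFace embedding model in PrivateGPT, BAAI/bge-small-en-v1.5, produces 384-dim embeddings), while the Ollama embedding mode is now producing 768-dim vectors (nomic-embed-text, for example, is 768-dimensional). Qdrant's local backend keeps a collection's vectors in one fixed-width NumPy array, so inserting a wider vector fails exactly as the traceback shows. A minimal sketch of the same failure (the `insert_vector` helper is hypothetical, for illustration only):

```python
import numpy as np
from typing import Optional

def insert_vector(collection: np.ndarray, idx: int, vector: np.ndarray) -> Optional[str]:
    """Copy an embedding into a fixed-width vector slot, mimicking
    qdrant_local's `named_vectors[idx] = vector_np` assignment.
    Returns the error message on dimension mismatch, None on success."""
    try:
        collection[idx] = vector
        return None
    except ValueError as exc:
        return str(exc)

collection_384 = np.zeros((10, 384))  # collection created with a 384-dim model
new_embedding = np.ones(768)          # embedding from a 768-dim model
print(insert_vector(collection_384, 0, new_embedding))
# -> could not broadcast input array from shape (768,) into shape (384,)
```

Because the collection width is fixed at creation time, the usual way out of this state is to delete the stale index (PrivateGPT stores it under `local_data/` by default) and re-ingest all documents with the new embedding model, or to switch back to an embedding model whose dimension matches the existing collection.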

lingfan · Mar 18 '24 15:03