DB-GPT
[Bug] [model] Starting dbgpt, start failed, 'LlamaRotaryEmbedding' object has no attribute 'cos_cached'
Search before asking
- [X] I had searched in the issues and found no similar issues.
Operating system information
Linux
Python version information
=3.11
DB-GPT version
latest release
Related scenes
- [X] Chat Data
- [X] Chat Excel
- [X] Chat DB
- [X] Chat Knowledge
- [X] Model Management
- [X] Dashboard
- [X] Plugins
Installation Information
- [ ] AutoDL Image
- [ ] Other
Device information
Models information
LLM: vicuna-7b-v1.5
What happened
Model Unified Deployment Mode!
Warning: Loading base MPT model with `zero_shot` conversation configuration. If this is not desired, inspect model configurations and names.
Loading checkpoint shards: 0%| | 0/2 [00:00<?, ?it/s]/usr/local/lib/python3.10/dist-packages/torch/_utils.py:776: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
return self.fget.__get__(instance, owner)()
Loading checkpoint shards: 100%|██████████| 2/2 [00:05<00:00, 2.52s/it]
/usr/local/lib/python3.10/dist-packages/transformers/generation/configuration_utils.py:410: UserWarning: `do_sample` is set to `False`. However, `temperature` is set to `0.9` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `temperature`. This was detected when initializing the generation config instance, which means the corresponding file may hold incorrect parameterization and should be fixed.
warnings.warn(
/usr/local/lib/python3.10/dist-packages/transformers/generation/configuration_utils.py:415: UserWarning: `do_sample` is set to `False`. However, `top_p` is set to `0.6` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `top_p`. This was detected when initializing the generation config instance, which means the corresponding file may hold incorrect parameterization and should be fixed.
warnings.warn(
/usr/local/lib/python3.10/dist-packages/transformers/generation/configuration_utils.py:410: UserWarning: `do_sample` is set to `False`. However, `temperature` is set to `0.9` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `temperature`.
warnings.warn(
/usr/local/lib/python3.10/dist-packages/transformers/generation/configuration_utils.py:415: UserWarning: `do_sample` is set to `False`. However, `top_p` is set to `0.6` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `top_p`.
warnings.warn(
/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py:4193: FutureWarning: `_is_quantized_training_enabled` is going to be deprecated in transformers 4.39.0. Please use `model.hf_quantizer.is_trainable` instead
warnings.warn(
The cos_cached attribute will be removed in 4.40. Bear in mind that its contents changed in v4.38. Use the forward method of RoPE from now on instead.
2024-02-29 09:22:36 d0c761fd01bc dbgpt.model.cluster.worker.manager[1] ERROR Error starting worker manager: model vicuna-7b-v1.5@huggingface(172.17.0.7:5000) start failed, 'LlamaRotaryEmbedding' object has no attribute 'cos_cached'
2024-02-29 09:22:36 d0c761fd01bc asyncio[1] ERROR Task exception was never retrieved
future: <Task finished name='Task-4' coro=<_setup_fastapi.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "uvloop/loop.pyx", line 474, in uvloop.loop.Loop._on_idle
File "uvloop/cbhandles.pyx", line 83, in uvloop.loop.Handle._run
File "uvloop/cbhandles.pyx", line 63, in uvloop.loop.Handle._run
File "/app/dbgpt/model/cluster/worker/manager.py", line 841, in start_worker_manager
sys.exit(1)
SystemExit: 1
2024-02-29 09:22:36 d0c761fd01bc dbgpt.model.cluster.worker.manager[1] INFO Stop all workers
2024-02-29 09:22:36 d0c761fd01bc dbgpt.model.cluster.worker.manager[1] INFO Apply req: None, apply_func: <function LocalWorkerManager._stop_all_worker.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/usr/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "uvloop/loop.pyx", line 1511, in uvloop.loop.Loop.run_until_complete
  File "uvloop/loop.pyx", line 1504, in uvloop.loop.Loop.run_until_complete
  File "uvloop/loop.pyx", line 1377, in uvloop.loop.Loop.run_forever
  File "uvloop/loop.pyx", line 555, in uvloop.loop.Loop._run
  File "uvloop/loop.pyx", line 474, in uvloop.loop.Loop._on_idle
  File "uvloop/cbhandles.pyx", line 83, in uvloop.loop.Handle._run
  File "uvloop/cbhandles.pyx", line 63, in uvloop.loop.Handle._run
  File "/app/dbgpt/model/cluster/worker/manager.py", line 841, in start_worker_manager
    sys.exit(1)
SystemExit: 1
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 686, in lifespan
    await receive()
  File "/usr/local/lib/python3.10/dist-packages/uvicorn/lifespan/on.py", line 137, in receive
    return await self.receive_queue.get()
  File "/usr/lib/python3.10/asyncio/queues.py", line 159, in get
    await getter
asyncio.exceptions.CancelledError
What you expected to happen
The Docker container should start successfully. Instead, it failed to start with the logs above.
How to reproduce
Start DB-GPT using the Docker container.
Additional context
No response
Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
@fangyinc have you ever encountered this problem before?
I also encountered the same problem when I wanted to deploy a model named sqlcoder-7b-2. Is there any progress so far?
> I also encountered the same problem when I wanted to deploy a model named sqlcoder-7b-2. Is there any progress so far?

I manually installed a lower version of the transformers library, which resolved the error. You can do this by running: `pip install transformers==4.37`.
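For anyone applying that workaround, here is a quick sanity check (a minimal sketch, assuming the `transformers` and `packaging` packages are importable inside the container) to confirm the downgraded build is actually the one being loaded:

```python
import transformers
from packaging import version

# LlamaRotaryEmbedding still exposes cos_cached in transformers < 4.38,
# so any 4.37.x build should avoid the missing-attribute error.
current = version.parse(transformers.__version__)
assert current < version.parse("4.38.0"), (
    f"transformers {transformers.__version__} is too new for code that reads cos_cached"
)
print("OK: transformers", transformers.__version__)
```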
> I also encountered the same problem when I wanted to deploy a model named sqlcoder-7b-2. Is there any progress so far?
>
> I manually installed a lower version of the transformers library, which resolved the error. You can do this by running: `pip install transformers==4.37`.

May I ask whether DB-GPT 0.5.0 supports the sqlcoder-7b-2 model? How well does it perform on text-to-SQL? @zyan-repository
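If downgrading is not an option, the deprecation message in the log points at the underlying change: starting with transformers 4.38 the rotary embedding's cos/sin tensors are meant to be obtained by calling the module's forward method rather than reading `cos_cached`/`sin_cached`. A rough compatibility sketch follows; the helper name `get_rope_cos_sin` is made up here, and the version boundary is an assumption based on the warning above, not a confirmed DB-GPT fix:

```python
import transformers
from packaging import version

def get_rope_cos_sin(rotary_emb, hidden_states, position_ids):
    """Return (cos, sin) from a LlamaRotaryEmbedding across transformers versions.

    Hypothetical helper, not part of DB-GPT or transformers.
    """
    if version.parse(transformers.__version__) >= version.parse("4.38.0"):
        # Newer API: forward(x, position_ids) computes and returns (cos, sin).
        return rotary_emb(hidden_states, position_ids)
    # Older API: the tensors were cached as attributes; slice to the current length.
    seq_len = position_ids.shape[-1]
    return (
        rotary_emb.cos_cached[:seq_len].to(hidden_states.dtype),
        rotary_emb.sin_cached[:seq_len].to(hidden_states.dtype),
    )
```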
This issue has been marked as stale because it has been over 30 days without any activity.
This issue has been closed because it has been marked as stale and there has been no activity for over 7 days.