DB-GPT
[Feature] How to load multiple models at the same time on startup? Does the model now support persistence?
Search before asking
- [X] I had searched in the issues and found no similar feature requirement.
Description
No response
Documentation Links
No response
Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
- You can update the local manager mode in manager.py to support starting multiple LLMs (see the sketch after the snippet below).
if run_locally:
    # TODO start ModelController
    # Standalone mode: the worker registers with the controller embedded in this process.
    worker_params.standalone = True
    worker_params.register = True
    worker_params.port = local_port
    logger.info(f"Worker params: {worker_params}")
    _setup_fastapi(worker_params, app, ignore_exception=True)
    # Start the default LLM worker, then the embedding worker, in the same process.
    _start_local_worker(worker_manager, worker_params)
    worker_manager.after_start(start_listener)
    _start_local_embedding_worker(
        worker_manager, embedding_model_name, embedding_model_path
    )
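A minimal sketch of the idea above: inside the same run_locally branch of manager.py, build an extra ModelWorkerParameters and call _start_local_worker once more for it. The model name and path below are placeholders, not values taken from this thread.

# Sketch only: assumes the surrounding manager.py scope, so ModelWorkerParameters,
# _start_local_worker and worker_manager are already available here.
second_llm_params = ModelWorkerParameters(
    model_name="vicuna-13b-v1.5",               # placeholder model name
    model_path="/data/models/vicuna-13b-v1.5",  # placeholder model path
)
_start_local_worker(worker_manager, second_llm_params)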
- LLM persistence will be supported soon.
How should I configure different proxy LLMs? Their API keys and URLs differ, but it seems the PROXY_API_KEY and PROXY_SERVER_URL environment variables are used for all of them.
xx_model_worker_params = ModelWorkerParameters(
    model_name='test_llm',
    model_path='chatgpt_proxyllm'
)
_start_local_worker(worker_manager, xx_model_worker_params)
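To make the environment-variable coupling explicit, here is an illustrative sketch only. It assumes the same manager.py scope as the snippet above and that chatgpt_proxyllm reads its key and URL from PROXY_API_KEY and PROXY_SERVER_URL; it is not a confirmed way to give two proxy workers different keys.

import os

# Illustration only: these are process-wide variables, so every
# chatgpt_proxyllm worker started in this process sees the same key and URL.
os.environ["PROXY_API_KEY"] = "sk-..."  # placeholder key
os.environ["PROXY_SERVER_URL"] = "https://api.openai.com/v1/chat/completions"  # example URL

proxy_worker_params = ModelWorkerParameters(
    model_name='test_llm',
    model_path='chatgpt_proxyllm'
)
_start_local_worker(worker_manager, proxy_worker_params)

If the proxy worker only reads these global variables, per-model keys would need support in the worker parameters themselves rather than in the environment.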
Could you explain this implementation process in detail? :) I added _start_local_worker after worker_manager.after_start(start_listener). However, it starts two LLM workers with the same model path, both named chatglm3-6b-128k. ;<
Is there any date for model persistence?
This issue has been marked as stale because it has been over 30 days without any activity.
This issue has been closed because it has been marked as stale and there has been no activity for over 7 days.