
[Feature]How to load multiple models at the same time on startup? Does the model now support persistence?

Open LZC6244 opened this issue 10 months ago • 4 comments

Search before asking

  • [X] I had searched in the issues and found no similar feature requirement.

Description

No response

Documentation Links

No response

Are you willing to submit PR?

  • [X] Yes I am willing to submit a PR!

LZC6244 avatar Apr 11 '24 12:04 LZC6244

  1. You can update the local manager mode in manager.py to support starting multiple LLMs.
    if run_locally:
        # TODO start ModelController
        worker_params.standalone = True
        worker_params.register = True
        worker_params.port = local_port
        logger.info(f"Worker params: {worker_params}")
        _setup_fastapi(worker_params, app, ignore_exception=True)
        _start_local_worker(worker_manager, worker_params)
        worker_manager.after_start(start_listener)
        _start_local_embedding_worker(
            worker_manager, embedding_model_name, embedding_model_path
        )
  2. LLM persistence will be supported soon.
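As a minimal, standalone illustration of the pattern suggested above: start one worker per model by calling the start routine repeatedly with different parameters. Note that `ModelWorkerParameters` and `WorkerManager` here are simplified stand-ins for the DB-GPT internals, not the real API:

```python
from dataclasses import dataclass


@dataclass
class ModelWorkerParameters:
    # Simplified stand-in for DB-GPT's ModelWorkerParameters
    model_name: str
    model_path: str


class WorkerManager:
    # Simplified stand-in: tracks started workers keyed by model name
    def __init__(self):
        self.workers = {}

    def start_worker(self, params: ModelWorkerParameters):
        # In DB-GPT this would actually spawn a model worker;
        # here we only record the registration
        self.workers[params.model_name] = params.model_path


manager = WorkerManager()
# Mirror repeated _start_local_worker calls: one per model to load
for name, path in [
    ("chatglm3-6b", "/models/chatglm3-6b"),
    ("qwen-7b", "/models/qwen-7b"),
]:
    manager.start_worker(ModelWorkerParameters(model_name=name, model_path=path))

print(sorted(manager.workers))
```

The point is only the shape of the change: each additional LLM needs its own parameter object and its own start call inside the `run_locally` branch.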

Aries-ckt avatar Apr 12 '24 01:04 Aries-ckt

  > You can update the local manager mode in manager.py to support starting multiple LLMs. […] LLM persistence will be supported soon.

How should I configure different proxyllm workers? Each proxyllm has its own api-key and url, but it seems only the global PROXY_API_KEY and PROXY_SERVER_URL environment variables are used.

    xx_model_worker_params = ModelWorkerParameters(
        model_name='test_llm',
        model_path='chatgpt_proxyllm'
    )
    _start_local_worker(worker_manager, xx_model_worker_params)
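One way to sidestep the single-environment-variable limitation is to resolve the key and url per worker, falling back to the global env vars only when no per-worker value is given. This is a hypothetical sketch, not DB-GPT's actual parameter class; the field names `proxy_api_key` and `proxy_server_url` are illustrative:

```python
import os
from dataclasses import dataclass
from typing import Optional


@dataclass
class ProxyLLMParams:
    # Hypothetical per-worker proxy settings (not DB-GPT's real API)
    model_name: str
    proxy_api_key: Optional[str] = None
    proxy_server_url: Optional[str] = None

    def resolved_key(self) -> str:
        # Per-worker value wins; otherwise fall back to the global env var
        return self.proxy_api_key or os.environ.get("PROXY_API_KEY", "")

    def resolved_url(self) -> str:
        return self.proxy_server_url or os.environ.get("PROXY_SERVER_URL", "")


# Globals set once, as in the current environment-variable approach
os.environ["PROXY_API_KEY"] = "global-key"
os.environ["PROXY_SERVER_URL"] = "https://global.example/v1"

# Worker a overrides both values; worker b inherits the globals
a = ProxyLLMParams("llm_a", proxy_api_key="key-a",
                   proxy_server_url="https://a.example/v1")
b = ProxyLLMParams("llm_b")

print(a.resolved_key(), b.resolved_key())
```

Each `_start_local_worker`-style call could then receive its own `ProxyLLMParams`, so two proxyllm workers with different providers can coexist in one process.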

LZC6244 avatar Apr 12 '24 02:04 LZC6244

Could you explain this implementation process in detail? :) I added `_start_local_worker` after `worker_manager.after_start(start_listener)`. However, it sets up two workers with the same model path, both named chatglm3-6b-128k ;<
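The duplicate above is consistent with workers being keyed by model name: starting a second worker with an identical `model_name` either collides with or shadows the first. A small standalone sketch of that failure mode (illustrative only, not DB-GPT's real registry code):

```python
class WorkerRegistry:
    # Illustrative registry keyed by model name, like a worker manager would use
    def __init__(self):
        self.workers = {}

    def register(self, model_name: str, model_path: str):
        if model_name in self.workers:
            # Surface the collision instead of silently creating a duplicate
            raise ValueError(f"worker {model_name!r} already registered")
        self.workers[model_name] = model_path


reg = WorkerRegistry()
reg.register("chatglm3-6b-128k", "/models/chatglm3-6b-128k")

try:
    # Re-using the same model_name reproduces the duplicate seen above
    reg.register("chatglm3-6b-128k", "/models/chatglm3-6b-128k")
except ValueError as e:
    print("duplicate:", e)

# Giving the second worker a distinct name avoids the collision,
# even when both point at the same model path
reg.register("chatglm3-6b-128k-alt", "/models/chatglm3-6b-128k")
```

If this matches the DB-GPT behavior, passing a unique `model_name` in each `ModelWorkerParameters` should prevent the two identically named entries.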

paul-yangmy avatar Apr 16 '24 07:04 paul-yangmy

Is there any date for model persistence?

LZC6244 avatar Apr 28 '24 07:04 LZC6244

This issue has been marked as stale, because it has been over 30 days without any activity.

github-actions[bot] avatar May 28 '24 21:05 github-actions[bot]

Is there any date for model persistence?

paul-yangmy avatar May 30 '24 07:05 paul-yangmy

This issue has been marked as stale, because it has been over 30 days without any activity.

github-actions[bot] avatar Jul 15 '24 21:07 github-actions[bot]

This issue has been closed, because it has been marked as stale and there has been no activity for over 7 days.

github-actions[bot] avatar Jul 23 '24 21:07 github-actions[bot]