LLaVA-NeXT

sglang support?

Open figuernd opened this issue 1 year ago • 3 comments

The announcement blog post indicates that inference can be done with sglang, but attempting to load the 7B model with the sglang backend:

python -m sglang.launch_server --model-path ~/models/lmms-lab_LLaVA-NeXT-Video-7B-DPO --port 30000

results in this KeyError:

/home/user/sglang/venv/lib/python3.11/site-packages/transformers/models/llava/configuration_llava.py:103: FutureWarning: The `vocab_size` argument is deprecated and will be removed in v4.42, since it can be inferred from the `text_config`. Passing this argument has no effect
  warnings.warn(
You set `add_prefix_space`. The tokenizer needs to be converted from the slow tokenizers
/home/user/sglang/venv/lib/python3.11/site-packages/transformers/models/llava/configuration_llava.py:143: FutureWarning: The `vocab_size` attribute is deprecated and will be removed in v4.42, Please use `text_config.vocab_size` instead.
  warnings.warn(
Rank 0: load weight begin.
/home/user/sglang/venv/lib/python3.11/site-packages/transformers/models/llava/configuration_llava.py:143: FutureWarning: The `vocab_size` attribute is deprecated and will be removed in v4.42, Please use `text_config.vocab_size` instead.
  warnings.warn(
/home/user/sglang/venv/lib/python3.11/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
/home/user/sglang/venv/lib/python3.11/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.  To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
  return self.fget.__get__(instance, owner)()
Process Process-1:
router init state: Traceback (most recent call last):
  File "/home/user/sglang/venv/lib/python3.11/site-packages/sglang/srt/managers/router/manager.py", line 68, in start_router_process
    model_client = ModelRpcClient(server_args, port_args)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/sglang/venv/lib/python3.11/site-packages/sglang/srt/managers/router/model_rpc.py", line 619, in __init__
    self.model_server.exposed_init_model(0, server_args, port_args)
  File "/home/user/sglang/venv/lib/python3.11/site-packages/sglang/srt/managers/router/model_rpc.py", line 70, in exposed_init_model
    self.model_runner = ModelRunner(
                        ^^^^^^^^^^^^
  File "/home/user/sglang/venv/lib/python3.11/site-packages/sglang/srt/managers/router/model_runner.py", line 287, in __init__
    self.load_model()
  File "/home/user/sglang/venv/lib/python3.11/site-packages/sglang/srt/managers/router/model_runner.py", line 326, in load_model
    model.load_weights(
  File "/home/user/sglang/venv/lib/python3.11/site-packages/sglang/srt/models/llava.py", line 285, in load_weights
    param = params_dict[name]
            ~~~~~~~~~~~^^^^^^
KeyError: 'model.vision_resampler.mm_projector.0.bias'
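For anyone hitting the same KeyError, a minimal diagnostic sketch (the checkpoint path and safetensors shard format are assumptions; adjust for .bin shards) that lists the checkpoint tensor names containing the unexpected vision_resampler / mm_projector entries, i.e. exactly the names sglang's load_weights() has no parameter for:

# Minimal sketch: list checkpoint tensor names so the unexpected
# "vision_resampler" entries are visible. sglang's load_weights() raises
# KeyError for any checkpoint name it cannot find in params_dict.
import glob
import os

from safetensors import safe_open

ckpt_dir = os.path.expanduser("~/models/lmms-lab_LLaVA-NeXT-Video-7B-DPO")

for shard in sorted(glob.glob(os.path.join(ckpt_dir, "*.safetensors"))):
    with safe_open(shard, framework="pt", device="cpu") as f:
        for name in f.keys():
            if "vision_resampler" in name or "mm_projector" in name:
                print(os.path.basename(shard), name)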

figuernd (May 11, 2024)

It needs to run on a specific version of sglang. As you can see in the sglang repo, we have a WIP PR for it.

Luodian (May 11, 2024)

Their fork of sglang works, but the performance is pretty abysmal. I am getting about 1 caption/s on 2x 3090s with the LLaMA 3 8B model. We desperately need quantized versions of the models.


Hi, video support for sglang has been merged into the main branch!

see: https://github.com/sgl-project/sglang/blob/main/examples/usage/llava_video/srt_example_llava_v.sh
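For completeness, a hedged client-side sketch against a server launched as above on port 30000; sgl.video() and its num_frames argument are assumptions based on the linked example script, so check srt_example_llava_v.sh for the exact usage:

# Hedged sketch of a video-QA client; sgl.video() and its arguments are
# assumptions taken from the linked example, not a verified API.
import sglang as sgl


@sgl.function
def video_qa(s, video_path, question):
    # Attach the sampled video frames, then ask a question about them.
    s += sgl.user(sgl.video(video_path, num_frames=16) + question)
    s += sgl.assistant(sgl.gen("answer", max_tokens=256))


# Point the frontend at the server started by sglang.launch_server above.
sgl.set_default_backend(sgl.RuntimeEndpoint("http://localhost:30000"))

state = video_qa.run(
    video_path="videos/example.mp4",  # placeholder path
    question="Describe what happens in this clip.",
)
print(state["answer"])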

ZhangYuanhan-AI (May 14, 2024)