
[Feature]: Support cogvlm-chat

RunningLeon opened this issue 1 year ago

Motivation

Support cogvlm-chat-hf in the PyTorch engine.

Usage:

[!WARNING] CogVLM-Chat-hf uses 'lmsys/vicuna-7b-v1.5' as its tokenizer; you need to copy the tokenizer model and config files into the CogVLM model directory.
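A minimal sketch of that copy step, assuming the standard vicuna tokenizer file names and using `huggingface_hub` to fetch them; the local paths are illustrative:

```python
# Hypothetical preparation step: copy the vicuna-7b-v1.5 tokenizer files
# into the local CogVLM model directory. The file list assumes the standard
# vicuna layout; adjust paths to match your setup.
import shutil
from huggingface_hub import snapshot_download

cogvlm_dir = './models--THUDM--cogvlm-chat-hf'
tokenizer_files = ['tokenizer.model', 'tokenizer_config.json',
                   'special_tokens_map.json']

# Download only the tokenizer files from the vicuna repo.
vicuna_dir = snapshot_download(repo_id='lmsys/vicuna-7b-v1.5',
                               allow_patterns=tokenizer_files)

for name in tokenizer_files:
    shutil.copy(f'{vicuna_dir}/{name}', cogvlm_dir)
```

With the tokenizer files in place, the pipeline below can be used as-is.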

```python
from lmdeploy import pipeline
from lmdeploy.vl import load_image

# Local directory holding the CogVLM weights plus the copied vicuna tokenizer files.
model_path = './models--THUDM--cogvlm-chat-hf'
pipe = pipeline(model_path)

# Prompts for a VLM pipeline are (text, image) tuples.
image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')
response = pipe(('describe this image', image))
print(response)
```
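The call returns a `Response` object; its `text` field holds the generated description.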

Modification

TODOs

  • [x] Support tensor parallelism (tp); a sketch follows this list
  • [x] Support ModelInputs.split with vision embeddings
  • [x] Resolve conflicts with the main branch
  • [x] Support loading only the LLM part of the VLM in the PyTorch engine
  • [x] Documentation
  • [ ] Update after PR #1553 and #1591
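As a companion to the tp item above, a minimal sketch of enabling tensor parallelism through the PyTorch engine config; the `tp=2` value is an illustrative assumption for a two-GPU setup:

```python
from lmdeploy import pipeline, PytorchEngineConfig

# Shard the model across 2 GPUs (illustrative; set tp to your GPU count).
pipe = pipeline('./models--THUDM--cogvlm-chat-hf',
                backend_config=PytorchEngineConfig(tp=2))
```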

BC-breaking (Optional)

Does the modification introduce changes that break backward compatibility with downstream repositories? If so, please describe how it breaks compatibility and how downstream projects should modify their code to remain compatible with this PR.

Use cases (Optional)

If this PR introduces a new feature, please list some use cases here and update the documentation accordingly.

Checklist

  1. Pre-commit or other linting tools are used to fix the potential lint issues.
  2. The modification is covered by complete unit tests. If not, please add more unit tests to ensure correctness.
  3. If the modification has a dependency on downstream projects of a newer version, this PR should be tested with all supported versions of downstream projects.
  4. The documentation has been modified accordingly, like docstring or example tutorials.

RunningLeon · Apr 26, 2024