
[New Model]: CogAgent

Open · junleiz opened this issue 1 year ago · 2 comments

The model to consider.

https://huggingface.co/THUDM/CogAgent

The closest model vllm already supports.

No response

What's your difficulty of supporting the model you want?

Vision models
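To illustrate the kind of usage being requested, here is a rough sketch of how CogAgent might be served through vLLM's multimodal interface once support lands. This is modeled on how already-supported vision-language models are invoked; the checkpoint name, prompt template, and the assumption that CogAgent is available in vLLM are all hypothetical, and the multimodal API shape may differ between vLLM versions.

```python
from vllm import LLM, SamplingParams
from PIL import Image

# Hypothetical: CogAgent is NOT supported by vLLM yet. The model name and
# prompt template below are assumptions used only to show the shape of
# vLLM's multimodal interface for vision-language models.
llm = LLM(model="THUDM/cogagent-chat-hf", trust_remote_code=True)

image = Image.open("screenshot.png")  # e.g. a GUI screenshot for the agent

outputs = llm.generate(
    {
        "prompt": "Question: Where is the search button on this screen? Answer:",
        "multi_modal_data": {"image": image},
    },
    SamplingParams(temperature=0.0, max_tokens=128),
)
print(outputs[0].outputs[0].text)
```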

junleiz · May 03 '24 06:05

Need to wait for / help with #4888 and #4942 before this can be implemented; there may be additional prerequisites beyond those as well.

JBurtn · Jun 22 '24 04:06


Quick update:

#4888 has landed, enabling the xFormers backend to support encoder attention, decoder self-attention, and decoder cross-attention. #4837 and #4888 (both now merged) were prerequisites for #4942. #4942 completes end-to-end support for encoder/decoder models with the xFormers backend and also introduces the BART model into vLLM. #4942 is still a work in progress, but I hope to complete it soon.
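For anyone following along, once #4942 merges, invoking an encoder/decoder model such as BART should look roughly like the sketch below. This is a minimal sketch assuming BART is registered in vLLM after that PR; the exact handling of encoder vs. decoder prompts is an assumption and may change before merge.

```python
from vllm import LLM, SamplingParams

# Sketch only: assumes #4942 has landed and BART is available in vLLM.
llm = LLM(model="facebook/bart-large-cnn")

prompt = (
    "vLLM is a fast and easy-to-use library for LLM inference and serving. "
    "Encoder/decoder support is being added through the xFormers backend."
)

# For encoder/decoder models, a plain string prompt is assumed to be fed to
# the encoder; how decoder prompts are specified may differ in the final API.
outputs = llm.generate(prompt, SamplingParams(temperature=0.0, max_tokens=64))
print(outputs[0].outputs[0].text)
```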

@robertgshaw2-neuralmagic

afeldman-nm · Jul 08 '24 21:07