How to run LongVILA long-context, sequence parallel inference?

zadeismael opened this issue on Aug 27 '24 · 20 comments

There are multiple mentions of a multi-modal sequence parallel system for inference that can be seamlessly integrated with HF transformers. However, I am not able to trace this through the codebase or see it exhibited in any of the scripts / examples.

Can the team please:

  1. Point me to the code that enables long context, sequence parallel inference for generation?
  2. Provide an example script to run this inference (preferably the same script used for the eval metrics mentioned in the paper)?

Mentions of inference in the LongVILA paper:

Section 1: "For inference, the memory usage of KV cache will also be a bottleneck when the sequence length is very long; we thus implement the inference mode of our MM-SP to support long-context multi-modal language deployment."
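
For context on why the KV cache becomes the bottleneck, here is a rough back-of-envelope estimate (a sketch assuming Llama-2-7B-like dimensions; the actual LongVILA backbone may use different values):

```python
# Rough KV-cache size estimate for a decoder-only LLM.
# All dimensions below are assumptions (roughly Llama-2-7B-like),
# not the actual LongVILA backbone configuration.
num_layers = 32
num_kv_heads = 32
head_dim = 128
bytes_per_elem = 2  # fp16 / bf16

def kv_cache_bytes(seq_len: int, batch_size: int = 1) -> int:
    # Keys and values (the factor of 2) are cached at every layer.
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * batch_size * bytes_per_elem

for seq_len in (8_192, 262_144, 2_097_152):  # 8K, 256K, 2M tokens
    print(f"{seq_len:>9} tokens -> ~{kv_cache_bytes(seq_len) / 2**30:.0f} GiB of KV cache")
```

At roughly 0.5 MiB of cache per token under these assumptions, a 2M-token context needs on the order of 1 TiB of KV cache, far beyond any single GPU, which is why the cache has to be sharded across devices.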

Section 3.3: "Thus, we implement sequence parallelism for VLMs distributed inference. Compared to the training mode, the system needs to additionally maintain tensors (e.g. input tokens and position encodings) that are progressively changing during the decoding phase (Yu et al., 2022). In addition, the system needs to detect signals from the machine that holds the last token and accordingly terminate the distributed process."
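
To make the quoted design concrete, here is a minimal, hypothetical sketch of a sequence parallel decoding loop with torch.distributed. None of these names (`sp_generate`, `model`, `last_rank`, etc.) come from the VILA codebase; this is not the team's MM-SP implementation, only an illustration of the two points the paper raises (decode-phase tensors that keep growing, and a termination signal from the rank holding the last token):

```python
import torch
import torch.distributed as dist

def sp_generate(model, my_shard_ids, last_rank, max_new_tokens, eos_id):
    """Hypothetical sequence parallel greedy decoding (illustrative only)."""
    rank = dist.get_rank()
    generated = []
    for _ in range(max_new_tokens):
        # Every rank runs the forward pass over its sequence shard; the
        # cross-shard attention exchange is assumed to happen inside `model`.
        logits = model(my_shard_ids)

        # Only the rank holding the final sequence position has valid
        # next-token logits, so it samples and broadcasts the result.
        next_id = torch.empty(1, dtype=torch.long, device=my_shard_ids.device)
        if rank == last_rank:
            next_id.copy_(logits[:, -1, :].argmax(dim=-1))
        dist.broadcast(next_id, src=last_rank)

        # All ranks see the same token, so the distributed processes can
        # detect EOS together and terminate in lockstep.
        if next_id.item() == eos_id:
            break
        generated.append(next_id.item())

        # Decode-phase tensors (tokens, position encodings) keep growing;
        # in this sketch the new token is simply appended to the last shard.
        if rank == last_rank:
            my_shard_ids = torch.cat([my_shard_ids, next_id.unsqueeze(0)], dim=-1)
    return generated
```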

Section 5(.1)

zadeismael avatar Aug 27 '24 18:08 zadeismael

@DachengLi1 @yukang2017

Lyken17 avatar Aug 28 '24 04:08 Lyken17

Hi @zadeismael Thank you for the notice! This is an active PR that will be merged very soon (within days).

DachengLi1 avatar Aug 28 '24 04:08 DachengLi1

Hello, I am also very interested in sequence parallel inference. May I ask when you plan to open-source the code for sequence parallel inference?

hb-jw avatar Sep 02 '24 10:09 hb-jw

@hb-jw Thank you! We are undergoing the final merging check in our internal codebase for this PR, and it will be ready very soon (if everything goes well, by the middle of this week).

DachengLi1 avatar Sep 02 '24 17:09 DachengLi1

Hello, today is Friday. I want to ask if everything has gone well?

hb-jw avatar Sep 06 '24 02:09 hb-jw

@hb-jw Hi there, sorry for the delay. We have worked out the version update, and we are now integrating with the vision needle-in-a-haystack eval before open-sourcing this PR.

DachengLi1 avatar Sep 09 '24 06:09 DachengLi1

@DachengLi1 Thanks for the update - can you let us know a new expected date?

zade-twelvelabs avatar Sep 09 '24 08:09 zade-twelvelabs

@zade-twelvelabs I will allocate more bandwidth to the task and hopefully finish it by this Thursday. Thanks for your patience, and apologies for the delay!

DachengLi1 avatar Sep 10 '24 08:09 DachengLi1

OK! Thank you for your effort and for open-sourcing this. I like the sequence parallel project very much and check every day whether it has been open-sourced. Please reply to me when it is open-sourced! Thank you again!

hb-jw avatar Sep 10 '24 13:09 hb-jw

@DachengLi1 Echoing @hb-jw 's comment - thanks for the prioritization :)

zade-twelvelabs avatar Sep 12 '24 00:09 zade-twelvelabs

Thank you for your amazing work! It's already Thursday, and I've been looking forward to it for a long time. Could you please tell me when the sequence parallel code will be open-sourced?

hb-jw avatar Sep 12 '24 15:09 hb-jw

Hi @hb-jw Sorry, we have an internal regression that leads to a small accuracy mismatch. If you are looking for a quick solution, we have an implementation here: https://github.com/NVlabs/VILA/tree/main/llava/eval/vision_niah_vila.

DachengLi1 avatar Sep 13 '24 08:09 DachengLi1

This is a non-generative example though, right? Can it be used for next-token generation?

zade-twelvelabs avatar Sep 13 '24 08:09 zade-twelvelabs

Hi @DachengLi1 :)

zade-twelvelabs avatar Sep 20 '24 01:09 zade-twelvelabs

Hello, it has been a long time, though it feels short because I have been waiting for sequence parallel inference. Do you still have plans to open-source the sequence parallel inference code? If so, when exactly will it be? I am eagerly looking forward to it, and thank you in advance for your reply.

hb-jw avatar Sep 28 '24 12:09 hb-jw

Hi @hb-jw I sincerely apologize for this. We are undergoing a significant refactoring of the internal repo to support more models, and this PR can hardly be merged as-is. We are working toward a deadline on Oct 2, and I will rewrite the PR accordingly, hopefully within one week after Oct 2.

DachengLi1 avatar Sep 28 '24 17:09 DachengLi1

Hi - updates?

zade-twelvelabs avatar Oct 11 '24 19:10 zade-twelvelabs

+1

liuyijiang1994 avatar Oct 16 '24 07:10 liuyijiang1994

+1

hhaAndroid avatar Oct 30 '24 06:10 hhaAndroid

+1

hb-jw avatar Nov 09 '24 11:11 hb-jw