Jee Jee Li

206 comments by Jee Jee Li

> I'm able to repro the failure in `test_llama.py` locally (it passes on the `main` branch) but not `test_minicpmv.py`.

This PR should not affect the llama tests. I will also verify and...

@DarkLight1337 Which GPU do you use locally? I can successfully run the llama and minicpmv2.5 LoRA tests on a local A800 GPU (see the pytest sketch below).

I also encountered a similar problem with a failed Llama test about a month ago. See: https://buildkite.com/vllm/ci-aws/builds/6358#01912bcf-2a2f-4655-9f58-c3f5ae8ea68a
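For reference, running the two LoRA suites discussed above locally can be done through pytest's Python API; the test paths below assume vLLM's `tests/lora/` layout and are not taken from the thread itself.

```python
# Minimal sketch: run the llama and MiniCPM-V LoRA suites via pytest.
# The paths assume vLLM's tests/lora/ layout.
import pytest

if __name__ == "__main__":
    raise SystemExit(pytest.main([
        "-x",                           # stop at the first failure
        "tests/lora/test_llama.py",
        "tests/lora/test_minicpmv.py",
    ]))
```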

> @DarkLight1337 Which GPU do you use locally? I can successfully run the llama and minicpmv2.5 LoRA tests on a local A800 GPU.

@DarkLight1337 I had issues with my previous...

> Can you successfully run the TP tests locally?

I have tested `test_minicpmv_tp.py` locally, and it passed.
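As a rough illustration of what the TP variant spins up (this is not the actual test code; the model name and the 2-GPU setup are assumptions for the sketch):

```python
# Sketch of the engine setup a tensor-parallel LoRA test would build,
# assuming 2 GPUs and the MiniCPM-V 2.5 checkpoint.
from vllm import LLM

llm = LLM(
    model="openbmb/MiniCPM-Llama3-V-2_5",
    enable_lora=True,            # LoRA adapters can then be passed per request
    tensor_parallel_size=2,      # shard the model (and LoRA weights) across 2 GPUs
    trust_remote_code=True,
)
```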

> I was able to work around this bug by building the latest Triton code from source.

It seems that the Triton main branch has updated the related code; I don't...

> So I've been digging into this a bit more and here is a summary of my findings:
>
> * Triton recently released v3.0.0, but it does **not** seem...
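Given the two Triton comments above, a quick local check of the installed version helps decide whether a source build is actually needed. This is a generic sketch, assuming the fix is only on Triton's `main` branch and not in the v3.0.0 release:

```python
# Check the installed Triton version; wheels up to and including 3.0.0 are
# assumed here to still carry the bug, while a source build of main reports
# a newer (or local/dev) version.
from importlib.metadata import version as installed_version

from packaging.version import Version

triton_version = Version(installed_version("triton"))
if triton_version <= Version("3.0.0"):
    print(f"triton {triton_version}: released wheel, a source build of main may be needed")
else:
    print(f"triton {triton_version}: newer than 3.0.0, likely built from main")
```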

> Is there an HF repo that can be used to test this?

Not yet. If needed, I can train one and add tests.
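Once such a LoRA adapter repo exists, a test would exercise it roughly like the sketch below, using vLLM's offline LoRA API; the base model name and adapter path are placeholders, not a published repo.

```python
# Sketch of loading and querying a LoRA adapter with vLLM.
# The model name and adapter path are placeholders.
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

llm = LLM(model="meta-llama/Llama-2-7b-hf", enable_lora=True)
params = SamplingParams(temperature=0, max_tokens=64)

outputs = llm.generate(
    ["Give a one-sentence summary of LoRA."],
    params,
    # LoRARequest takes a display name, an integer id, and the adapter path
    # (a local directory or a downloaded HF snapshot).
    lora_request=LoRARequest("example-adapter", 1, "/path/to/lora_adapter"),
)
print(outputs[0].outputs[0].text)
```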

> > > @imkero @wulipc do you have any LoRA-tuned models that can be used? cc @ywang96
> >
> > Sorry I don't have one currently
>
> ...