Lumen Yang
> Hi jdanielmourao, sorry that I missed this message; I have now tested it. It seems that the space before and after the vertical bar is not the problem. I...
Hi, everyone. I've been digging through the LLaVA code base for a while and find it hard to locate the exact `generate()` implementation for the LLaMA-based LLaVA. I'm...
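For anyone else hunting for it, here is a minimal sketch (assuming the LLaVA model class ultimately subclasses HuggingFace's `LlamaForCausalLM`, which is how the repo appears to be structured) that prints where `generate()` is actually defined, rather than searching the repo by hand:

```python
import inspect
from transformers import LlamaForCausalLM

# Which file actually contains the generate() that this class uses?
print(inspect.getsourcefile(LlamaForCausalLM.generate))

# Which class in the MRO defines generate() in its own namespace?
print([c.__name__ for c in LlamaForCausalLM.__mro__ if "generate" in vars(c)])
```

The same two lines can be pointed at the LLaVA wrapper class itself; if it doesn't override `generate()`, the answer will be the inherited transformers implementation.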
> > > These are my experiments with prompt tuning. Not perfect but pretty amazing > > Seems like img1,img2,text performs better. Hi, SeungyounShin. Would you mind sharing how...
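To make the two orderings being compared concrete, here is a hypothetical sketch; the `<image>` placeholder is an assumption based on the LLaVA-style prompt convention, and `build_prompt` is just an illustrative helper, not code from the repo:

```python
IMAGE_TOKEN = "<image>"  # assumption: one placeholder per image, LLaVA-style

def build_prompt(question: str, n_images: int, images_first: bool = True) -> str:
    """Build either the img1,img2,...,text ordering or the text,img1,img2,... ordering."""
    image_part = "\n".join([IMAGE_TOKEN] * n_images)
    if images_first:
        # img1, img2, text -- the ordering reported above to perform better
        return f"{image_part}\n{question}"
    return f"{question}\n{image_part}"

print(build_prompt("What changed between the two photos?", n_images=2))
```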
Thanks for the great work. It is already a really great contribution to the Neovim ecosystem! Hope you are having enough fun with your new editor as well!
Sorry to reopen this issue, but I have the same problem again, and I'm sure my device is not OOMing, since I have used my machine to smoothly run the 13b...
> Thanks for reporting. Does [b42a13d](https://github.com/haotian-liu/LLaVA/commit/b42a13d14b6118381a667430a5b8c50f9790dee3) fix it? Thanks a lot, Haotian. Yes, this fixed my problem. Interestingly though, my previous patch was only a bit different from...
> ``` > tensor_on_device = tensor.to(device) > ``` > > `.to` is not an in-place operator Thanks for your prompt reply. Okay, I should have remembered that :).
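A minimal illustration of the point above, for anyone hitting the same thing: `Tensor.to()` returns a new tensor and never modifies the original in place, so the result has to be assigned back (dtype is used here only so the example runs without a GPU):

```python
import torch

t = torch.zeros(2)        # float32 on CPU
t.to(torch.float64)       # returns a NEW tensor; the result is discarded here
print(t.dtype)            # still torch.float32

t = t.to(torch.float64)   # rebind the name to keep the converted tensor
print(t.dtype)            # torch.float64
# The same applies to devices: tensor_on_device = tensor.to(device)
```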
That is the problem I observed several days ago, but I didn't make a PR since it wasn't clear how we should distinguish the different behaviors across models. I hope...
> Hi, I think you should download this project: https://github.com/antoyang/captioning-metrics, rename it as metrics, and put it into the vid2seq folder. But I got stuck on the t5 project. I only have...
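A small sketch of that setup step, in case it helps; the `vid2seq` path is an assumption about where your checkout lives, so adjust it to your layout:

```python
import subprocess
from pathlib import Path

vid2seq_dir = Path("vid2seq")          # assumed location of the vid2seq folder
target = vid2seq_dir / "metrics"       # expose captioning-metrics under the name "metrics"

if not target.exists():
    subprocess.run(
        ["git", "clone", "https://github.com/antoyang/captioning-metrics", str(target)],
        check=True,
    )
```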
> Thanks flxzt for the quick reply. Yes, I can provide you with the document. I did realize that it is not from the AUR but directly from the Arch community...