Yonatan Bitton
Sorry - I meant from (1, 64, 48, 26) to (1, 26, 64, 48). I updated the post.
I added build and make steps:
```
cd /opt/caffe/build
make runtest
make -j8
make pycaffe
```
It worked:
```
[100%] Built target pycaffe
root@e9234b09a03f:/opt/caffe/build# ls
```
But still when activating the...
+1 - me as well
Thanks for the clarification @sachit-menon . It's also related to the issue I opened: https://github.com/salesforce/LAVIS/issues/97 I think it will be very helpful to clarify which hyperparameters should be called for...
@Rijgersberg hello, it's been some time since this issue, but did you succeed in replicating these experiments? Thanks :)
Thanks for the response. I've managed to run your app with my model, this is very helpful, thanks! Now I want to run it on a list of png images....
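For what it's worth, a minimal sketch (my own, not from the app) of how one might collect the list of PNGs before feeding each one to the model; the captioning call itself is model-specific, so it is only indicated in comments:

```python
from pathlib import Path


def collect_pngs(folder: str) -> list[Path]:
    """Gather all .png files under `folder`, sorted for a stable order."""
    return sorted(Path(folder).glob("*.png"))


# Hypothetical usage; the actual image loading and generate() call
# depend on the app/model being run:
# for png in collect_pngs("images/"):
#     image = Image.open(png).convert("RGB")  # PIL
#     caption = model.generate(...)           # model-specific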
Thanks for the response! @dxli94 I'm loading this model, which I understand is the one you mentioned: `load_model_and_preprocess(name="blip2_t5", model_type="pretrain_flant5xxl"..)` @LiJunnan1992 I've changed to nucleus_sampling and increased max_length from...
Thanks. You are correct 🙂 I took the following parameters and managed to reproduce 3 of the examples in the demo:
```python
max_length = 30
length_penalty = 1
repetition_penalty =...
```
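As a sketch, the parameters quoted above could be grouped into a single dict to pass to the generation call. The keyword names here are my assumption based on common `generate()` signatures; the `repetition_penalty` value is truncated in the comment, so it is deliberately left unfilled:

```python
# Hypothetical grouping of the hyperparameters quoted in the comment.
# Keyword names are assumptions; repetition_penalty's value is
# truncated in the original thread, so it is not filled in here.
gen_kwargs = {
    "max_length": 30,
    "length_penalty": 1,
    # "repetition_penalty": ...,  # value truncated in the original comment
}
```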
@LiJunnan1992 additionally, I want to reproduce the results on VQA, Image Captioning and Cross-Modal retrieval. Regarding cross-modal retrieval, I saw a [related issue](https://github.com/salesforce/LAVIS/issues/98), I understand that I can wait for...
Thanks! Actually, these are different datasets, and the parameters to use aren't clear from the [yaml script](https://github.com/salesforce/LAVIS/blob/main/lavis/projects/blip2/eval/caption_coco_flant5xl_eval.yaml) or from the [evaluation script](https://github.com/salesforce/LAVIS/blob/main/evaluate.py). In other words, if someone wants to evaluate...