Mennatullah Siam

Results: 8 issues by Mennatullah Siam

Hi, I was training the code and I ran into an issue I wasn't sure about. In davis2017_youtubevos_ehem.py, line 74 sets the foreground to 255,...

Hello, thanks a lot for sharing the code. I am a bit confused about the setup you used for the meta-training stage. So during validation you are using the...

Hello, I am using TensorFlow 1.4. I was able to build your code successfully with no problem and it's running, but the output optical flow is completely corrupted on the...

Hello, I am wondering about the fine-tuning baseline provided in the paper: how many iterations are used in the fine-tuning stage? You only mention that it takes 5.56 seconds for...

Hello, I am trying to submit to the CodaLab competition https://codalab.lisn.upsaclay.fr/competitions/15094 (MeViS Referring Video Segmentation). I have tried Chrome and Firefox and it still doesn't work. I have tried to...


Hello, I have a problem submitting any results to the evaluation server for the validation set. I get exactly the same problem as here: https://github.com/codalab/codalab-competitions/issues/3503, except for me it doesn't...

Hello, I am submitting a request to add the [PixMMVP](https://huggingface.co/datasets/IVUlab/pixmmvp) and [PixCVBench](https://huggingface.co/datasets/IVUlab/pixcvbench) visual grounding benchmarks, which were released in 2025. The code release is [here](https://github.com/MSiam/PixFoundation?tab=readme-ov-file) as well.

I have tried visual grounding for InternVL2.5-8B vs. Qwen2.5-VL-7B, not using RefCOCO but another referring detection dataset, and I always found Qwen2.5-VL's performance to be almost 2x better. I am wondering...