groundingLMM
The demo caption is very simple; cannot reproduce the result in the paper
The demo caption is very simple, not like the detailed one in the paper. Did you limit the output max length?
Hi @trouble-maker007,
Thank you for your interest in our work. Could you please share the image and corresponding input prompt that you tried to better assist you?
Thanks
@mmaaz60 Sorry for the late response. The caption result is quite simple.
@hanoonaR @mmaaz60 Why was this closed? You did not give a response.
Hi @trouble-maker007,
Please share the original images, not screenshots. Our model's responses vary with different images. The paper's examples come from various models, including the full-scope GLaMM and the GCG model, which is fine-tuned for grounding interleaved captioning. We haven't set a limit on response length.
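For completeness, the response length would be governed by the maximum-new-tokens setting passed at generation time. Below is a minimal sketch of where such a cap would live, assuming a HuggingFace-style `generate` interface; the checkpoint path is a placeholder, not our released weights:

```python
# Minimal sketch, assuming a HuggingFace-style causal LM interface.
# The checkpoint path is a placeholder, not an actual released model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("path/to/checkpoint")
model = AutoModelForCausalLM.from_pretrained("path/to/checkpoint")

inputs = tokenizer(
    "Could you please give me a detailed description of the image?",
    return_tensors="pt",
)
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=512,  # the length cap: raise this if captions look truncated
        do_sample=False,     # greedy decoding, so reruns are reproducible
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```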
@hanoonaR I have tried many images and uploaded one as an example, including an image from your paper. Maybe you should try to reproduce the figure from your paper with the online demo.
Hi @trouble-maker007 ,
We are not facing any issues in reproducing the results in the paper with the live demo.
@hanoonaR The demo result with ballon.jpg is quite simple, while the one in the paper is very detailed. I think there is a big gap.
In the second example, the segmentation output is missing the river.
@hanoonaR I just changed a few words in the prompt. I don't think the meaning changed much, but the result is quite different. It looks like overfitting.
Let me clarify a few points:
- The GLaMM paper covers a broad range of contributions, including detailed analyses of tasks such as image-level captioning and segmentation. Our open-source release includes both a full-scope model and models fine-tuned for specific applications. The image captioning results showcased in the particular figure you are showing are from a fine-tuned model.
- You are trying phrase grounding in the second example, but the demo model does not support this feature. We have not released this specific model, and accordingly, the demo's instructions do not cover its use.
- The generative model is trained on a diverse set of prompts for each task, chosen randomly (see the sketch after this list). This approach can lead to variations in the output. We encourage you to review the quantitative results presented in both the paper and the codebase to better understand the model's capabilities.
Before raising concerns about reproducibility, we urge you to work through the codebase and the documentation carefully. We can confidently reproduce the results and are here to assist with any genuine issues encountered.
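To illustrate the prompt-sampling point above, here is a minimal sketch; the templates are made-up stand-ins, not the actual training prompts:

```python
import random

# Hypothetical caption-instruction templates; the real training set uses
# its own pool of phrasings. Because one template is drawn at random per
# training example, the model learns several surface forms of the same
# instruction, and small wording changes at inference time can still move
# the output distribution.
CAPTION_TEMPLATES = [
    "Could you please give me a detailed description of the image?",
    "Describe the image in detail.",
    "Can you provide a thorough description of this image?",
]

def sample_training_prompt(rng: random.Random) -> str:
    # One template chosen uniformly at random per example.
    return rng.choice(CAPTION_TEMPLATES)

rng = random.Random(0)
print(sample_training_prompt(rng))
```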
@hanoonaR I still don't fully agree with your explanation for the second case. I only changed one word in your demo example from "this" to "the", and the result changed significantly. This seems to indicate that your model has simply memorized the results for this specific prompt and image, rather than having generalization capabilities.
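A single word swap is a narrow test; a rough sketch of a broader sensitivity check follows, assuming a hypothetical `caption(image_path, prompt)` helper wired to the demo (not a real function in the repo):

```python
from difflib import SequenceMatcher
from itertools import combinations

def caption(image_path: str, prompt: str) -> str:
    # Hypothetical stand-in: replace with a call to the demo or a local checkpoint.
    raise NotImplementedError

PARAPHRASES = [
    "Could you please give me a detailed description of this image?",
    "Could you please give me a detailed description of the image?",
    "Please describe the image in detail.",
]

def probe(image_path: str) -> None:
    outputs = [caption(image_path, p) for p in PARAPHRASES]
    # Pairwise string similarity: consistently low ratios across paraphrases
    # would suggest sensitivity to surface wording rather than meaning.
    for (i, a), (j, b) in combinations(enumerate(outputs), 2):
        ratio = SequenceMatcher(None, a, b).ratio()
        print(f"prompt {i} vs prompt {j}: similarity {ratio:.2f}")
```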