
The demo caption is very simple; cannot reproduce the result in the paper

Open trouble-maker007 opened this issue 1 year ago • 10 comments

The demo caption is very simple, not like the detailed one in the paper. Did you limit the maximum output length?

trouble-maker007 avatar Feb 06 '24 08:02 trouble-maker007

Hi @trouble-maker007,

Thank you for your interest in our work. Could you please share the image and the corresponding input prompt you tried, so we can better assist you?

Thanks

mmaaz60 avatar Feb 06 '24 13:02 mmaaz60

@mmaaz60 Sorry for the late response. The caption result is quite simple (screenshots attached).

trouble-maker007 avatar Feb 18 '24 07:02 trouble-maker007

@hanoonaR @mmaaz60 Why was this closed? You did not give a response.

trouble-maker007 avatar Mar 26 '24 07:03 trouble-maker007

Hi @trouble-maker007,

Please share the original images, not screenshots. Our model's responses vary with different images. The paper's examples come from various models, including the full-scope GLaMM and the GCG model, which is fine-tuned for grounding interleaved captioning. We haven't set a limit on response length.
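To illustrate what a response-length cap would look like if one existed (purely a sketch; as stated above, the demo does not set one), a decoding loop typically stops either at an end-of-sequence token or, when configured, after `max_new_tokens`. All names below are hypothetical, not GLaMM code:

```python
def generate_tokens(step_fn, max_new_tokens=None, eos="<eos>"):
    """Toy decoding loop: stop at EOS, or at max_new_tokens if a cap is set."""
    out = []
    while True:
        tok = step_fn(out)
        if tok == eos:
            break
        out.append(tok)
        if max_new_tokens is not None and len(out) >= max_new_tokens:
            break  # a cap like this is what would truncate long captions
    return out

# A fake "model" that emits a six-word caption and then EOS.
caption = "a hot air balloon floats overhead".split()
step = lambda out: caption[len(out)] if len(out) < len(caption) else "<eos>"

full = generate_tokens(step)                     # no cap: full caption
capped = generate_tokens(step, max_new_tokens=3)  # truncated to 3 tokens
```

Since no such cap is configured in the demo, short captions come from the model's own stopping decision, not from truncation.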

hanoonaR avatar Mar 26 '24 08:03 hanoonaR

@hanoonaR I have tried many images and uploaded one as an example, including the image from your paper. Maybe you should try reproducing your paper's figure with the online demo.

trouble-maker007 avatar Mar 26 '24 13:03 trouble-maker007

(attached: demo_result)

Hi @trouble-maker007 ,

We are not facing any issues in reproducing the results in the paper with the live demo.

hanoonaR avatar Mar 31 '24 11:03 hanoonaR

@hanoonaR The demo result with the ballon.jpg image is simple, while the one in the paper is very detailed; I think there is a big gap. In the segmentation example, the river is missing from the result (screenshots attached).

trouble-maker007 avatar Apr 01 '24 08:04 trouble-maker007

@hanoonaR I just changed a few words in the prompt. I don't think the meaning changed much, but the result is quite different. It looks like overfitting (screenshot attached).

trouble-maker007 avatar Apr 01 '24 08:04 trouble-maker007

Let me clarify a few points:

  1. The GLaMM paper covers a broad range of contributions, including detailed analyses of tasks such as image-level captioning and segmentation. Our open-source release includes both a full-scope model and models fine-tuned for specific applications. The image captioning results showcased in the figure you are referring to are from a fine-tuned model.

  2. You are attempting phrase grounding in the second example, but the demo model does not support this feature. We have not released this specific model, and accordingly, the demo's instructions do not cover its use.

  3. The generative model is trained on a diverse set of prompts for each task, chosen randomly. This approach can lead to variations in the output. We encourage you to review the quantitative results presented in both the paper and the codebase to better understand the model's capabilities.
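The random prompt selection described in point 3 can be sketched as follows (the template strings here are hypothetical; the actual training prompts are defined in the GLaMM codebase):

```python
import random

# Hypothetical pool of instruction templates for the captioning task;
# the real templates live in the GLaMM training code.
CAPTION_TEMPLATES = [
    "Could you describe this image in detail?",
    "Give a detailed description of the image.",
    "Describe the contents of the image thoroughly.",
]

def build_training_prompt(rng: random.Random) -> str:
    """Sample one template at random, once per training example."""
    return rng.choice(CAPTION_TEMPLATES)

prompt = build_training_prompt(random.Random(0))
```

Because the model sees many phrasings of the same instruction during training rather than one fixed string, small wording changes at inference time can still land closer to some seen phrasings than others, which produces output variation.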

Before raising concerns about reproducing results, we urge you to diligently utilize the codebase and the documentation. We can confidently reproduce the results and are here to assist with any genuine issues encountered.

hanoonaR avatar Apr 01 '24 09:04 hanoonaR

@hanoonaR I still don't fully agree with your explanation for the second case. I only changed one word in your demo example from "this" to "the", and the result changed significantly. This seems to indicate that your model has simply memorized the results for this specific prompt and image, rather than having generalization capabilities. image

trouble-maker007 avatar Apr 02 '24 13:04 trouble-maker007