chameleon-llm
Codes for "Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models".
../results/scienceqa/chameleon_chatgpt_minitest.json Result file exists: ../results/scienceqa/chameleon_chatgpt_minitest.json Count: 100, Correct: 44, Wrong: 56
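The log line above reports Count/Correct/Wrong totals from a result file. A minimal sketch of such a tally, assuming a hypothetical per-example `true_false` flag marking correct predictions (the actual result-file schema is not shown here):

```python
def tally(results: dict) -> tuple[int, int, int]:
    """Count (total, correct, wrong) entries in a loaded result dict.

    Assumption: each value carries a boolean "true_false" field that is
    True when the prediction matched the ground truth.
    """
    correct = sum(1 for v in results.values() if v.get("true_false"))
    total = len(results)
    return total, correct, total - correct
```

For example, `tally({"1": {"true_false": True}, "2": {"true_false": False}})` yields `(2, 1, 1)`, matching the Count/Correct/Wrong breakdown in the log line.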
The example on the [main page](https://github.com/lupantech/chameleon-llm) does not seem to work. Am I missing something?
```
pip install -r requirements.txt
cd run_scienceqa
python run.py \
  --model chameleon \
  ...
```
Thank you for your work. When running the TabMWP dataset, I found that some examples execute very slowly; is there any way to speed them up?...
Hi authors, why is use_caption disabled by default? And did you use the captions for the results reported in the paper? Thanks a lot!
Hi @lupantech, thank you for your excellent work. I observed inconsistent accuracies on the minitest set. Specifically, I got acc_average values of 49.29 for gpt-3.5-turbo and 46.93 for Llama-2-7b, while...
Hi, thanks for your great work. I want to ask whether this method could be applied to open-ended VQA tasks, where a free-form answer is needed instead of choosing from a given...
Hello, I would like to ask whether the planner here refers to the natural language planner, or does it go by another name?
Hi, I am writing to kindly request an update regarding the release of the code for the Image Captioner and Text Detector modules as promised in the README file of...
I noticed that you generated Bing search results during the Chameleon run; could you open-source this part of the file? `"bing_file": "./data/scienceqa/bing_responses.json"`