
Set sampling temperature in MultiModalLLM calls

Open nysp78 opened this issue 8 months ago • 3 comments

Hi, how can I randomize the response of a multimodal API call by setting parameters such as temperature, top_k, or top_p?

Thanks!

nysp78 avatar Apr 15 '25 09:04 nysp78

Can you share the snippet you are trying to use?

Be sure that caching is not active. We have two engines: an older, more stable version that supports these parameters, and a new one that is still under evaluation.

For the latter, take a look at this PR: https://github.com/zou-group/textgrad/pull/159
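For the older engine, one option is to construct the engine yourself and pass sampling parameters to `generate` directly. A rough sketch, assuming the OpenAI-backed engine's `generate` forwards `temperature`/`top_p` (double-check the signature in your installed version):

```python
from textgrad.engine import get_engine

# Build the stable engine and call generate() with sampling parameters.
# Caveat: this engine caches completions keyed on the prompt text, so repeated
# identical prompts can return the same cached response even at temperature > 0.
engine = get_engine("gpt-4o")
print(engine.generate("Write one sentence about the ocean.", temperature=0.9, top_p=0.95))
```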

vinid avatar Apr 15 '25 14:04 vinid

I'm using the standard code from the multimodal ipynb example.

```python
tg.set_backward_engine("gpt-4o", override=True)
question_variable = tg.Variable(prompt_3, role_description="instruction to the VLM", requires_grad=False)
response = MultimodalLLMCall("gpt-4-turbo")([image_variable, question_variable])
```

nysp78 avatar Apr 15 '25 16:04 nysp78

I think the right way of adding this would probably be by editing the "engine" class so that it can accept some default parameters during initialization.

This means that we should add those [arguments to the engine class](https://github.com/zou-group/textgrad/blob/main/textgrad/engine/openai.py).

In this way, we could construct the engine outside the function and then pass it to the various ops, as sketched below.
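Something like this pattern is what I have in mind. This is only an illustrative sketch: the class name and wiring are hypothetical, and a real implementation would live in the existing ChatOpenAI engine and also handle multimodal inputs.

```python
from openai import OpenAI

class ConfigurableEngine:
    """Illustrative engine that stores default sampling parameters at init."""

    def __init__(self, model_string: str = "gpt-4-turbo", **generation_kwargs):
        # e.g. temperature=0.9, top_p=0.95 -- stored once, reused on every call
        self.model_string = model_string
        self.generation_kwargs = generation_kwargs
        self.client = OpenAI()

    def generate(self, prompt: str, system_prompt: str = "You are a helpful assistant.", **kwargs):
        # Per-call kwargs take precedence over the defaults from __init__.
        params = {**self.generation_kwargs, **kwargs}
        response = self.client.chat.completions.create(
            model=self.model_string,
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": prompt},
            ],
            **params,
        )
        return response.choices[0].message.content

# Usage idea: build the engine once with the desired sampling behavior,
# then hand the engine object (rather than a model-name string) to the ops:
# engine = ConfigurableEngine("gpt-4-turbo", temperature=0.9, top_p=0.95)
# response = MultimodalLLMCall(engine)([image_variable, question_variable])
```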

If you have time to implement this we would love to get this contribution in the repo!

vinid avatar Apr 15 '25 17:04 vinid