Federico Bianchi
I think the right way of adding this would probably be by editing the "engine" class so that it can accept some default parameters during initialization. This means that we...
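Something along these lines (a sketch of the proposed change, not the current code — class and attribute names here are illustrative):

```python
class Engine:
    """Sketch: an engine that accepts default generation parameters at init."""

    def __init__(self, model_string: str, **default_generation_kwargs):
        self.model_string = model_string
        # e.g. temperature=0.0, max_tokens=512 — set once, reused on every call
        self.default_generation_kwargs = default_generation_kwargs

    def generate(self, prompt: str, **kwargs):
        # Per-call kwargs take precedence over the defaults from __init__.
        merged = {**self.default_generation_kwargs, **kwargs}
        return self._call_backend(prompt, **merged)

    def _call_backend(self, prompt: str, **kwargs):
        raise NotImplementedError  # backend-specific API call goes here
```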
Hi! Yes, it is possible! Are you running LLMs locally or through API services? We support OpenAI-compatible endpoints by default, so that should be easy to do. If you...
Yup! Just define your own engine class: https://github.com/zou-group/textgrad/blob/main/textgrad/engine_experimental/litellm.py Basically implement something like that, but instead of the litellm generate call you will need your own forward passes. It's not going to...
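A rough sketch of what that could look like with a local HuggingFace model — I'm assuming the base-class import path, constructor signature, and abstract method names from the litellm engine linked above, so double-check them against the current repo:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

from textgrad.engine_experimental.base import EngineLM


class LocalHFEngine(EngineLM):
    def __init__(self, model_name: str = "Qwen/Qwen2.5-7B-Instruct"):
        super().__init__(model_string=model_name)  # signature may differ
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.model = AutoModelForCausalLM.from_pretrained(
            model_name, device_map="auto"
        )

    def _generate_from_single_prompt(self, prompt, system_prompt=None, **kwargs):
        # Your local forward pass replaces the litellm call here.
        messages = []
        if system_prompt:
            messages.append({"role": "system", "content": system_prompt})
        messages.append({"role": "user", "content": prompt})
        inputs = self.tokenizer.apply_chat_template(
            messages, add_generation_prompt=True, return_tensors="pt"
        ).to(self.model.device)
        out = self.model.generate(
            inputs, max_new_tokens=kwargs.get("max_tokens", 512)
        )
        # Decode only the newly generated tokens, not the prompt.
        return self.tokenizer.decode(
            out[0][inputs.shape[1]:], skip_special_tokens=True
        )

    def _generate_from_multiple_input(self, content, system_prompt=None, **kwargs):
        raise NotImplementedError("only needed for multimodal inputs")
```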
You just need to customize the two generate calls, in particular `_generate_from_multiple_input`. We basically format things the way OpenAI wants there, but you might need to get the image bytes and...
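For reference, the formatting looks roughly like this (the helper name is illustrative — inside `_generate_from_multiple_input` you would build messages like these and then run your own forward pass instead of calling OpenAI):

```python
import base64


def format_openai_content(content, system_prompt=None):
    """Build OpenAI-style chat messages from mixed text / image-bytes input."""
    parts = []
    for item in content:
        if isinstance(item, bytes):
            # Image bytes become a base64 data URL, which is the format
            # the OpenAI chat API expects for inline images.
            b64 = base64.b64encode(item).decode("utf-8")
            parts.append({
                "type": "image_url",
                "image_url": {"url": f"data:image/jpeg;base64,{b64}"},
            })
        else:
            parts.append({"type": "text", "text": str(item)})
    messages = [{"role": "user", "content": parts}]
    if system_prompt:
        messages.insert(0, {"role": "system", "content": system_prompt})
    return messages
```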
It mostly depends on what you want to do. You can load that model and expose an OpenAI-compatible API (with something like vLLM). Or you can write your own...
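If you go the vLLM route, a sketch of the setup — this assumes textgrad's OpenAI engine uses the standard `openai` Python client (>=1.0), which reads the base URL and API key from env vars:

```python
# First, serve the model behind an OpenAI-compatible API, e.g.:
#   vllm serve Qwen/Qwen2.5-7B-Instruct --port 8000
import os

# Route the openai client to the local server (assumption: the engine
# does not override the base URL itself).
os.environ["OPENAI_BASE_URL"] = "http://localhost:8000/v1"
os.environ["OPENAI_API_KEY"] = "dummy-key"  # vLLM ignores it by default

import textgrad as tg
from textgrad.engine.openai import ChatOpenAI

# model_string must match the model name vLLM is serving
engine = ChatOpenAI(model_string="Qwen/Qwen2.5-7B-Instruct")
tg.set_backward_engine(engine, override=True)
```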
Yea! Definitely

> Ok, that's cool! I suppose I can define the same custom engine as backward_engine as well. ...
Yea, we unfortunately never fixed issue #96. Would you have time to send a fix in a PR?
This PR should hopefully fix that (#159), but I need to do some more testing.
Hello! Yes, you should remove the `raise Exception` in the `run_function_in_interpreter` function. We do this to ensure you are aware you are running model-generated code. It is always best to...
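For clarity, the guard is just an early `raise` at the top of the function — a paraphrased sketch, not the verbatim textgrad source:

```python
def run_function_in_interpreter(code_string: str):
    # Safety opt-in: delete this raise once you accept the risk of
    # executing model-generated code on your machine.
    raise Exception(
        "This function runs model-generated code; remove this line to opt in."
    )
    exec(code_string, {})  # the actual execution happens after the guard
```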
The `short` variable is used to reduce the entire context. @mertyg, should we add an option to switch this on/off for people who are OK with a longer context?