Why some json_mode is set to False when calling LLM?
❓ Questions and Help
We sincerely suggest that you carefully read the documentation first. If you still feel puzzled after that, please describe your question clearly in this issue.
Hi, I thought almost all LLM responses should be in JSON format, and that setting json_mode to True would improve reliability. However, I am confused that json_mode defaults to, or is explicitly set to, False in some LLM calls. Could you kindly tell me the reason, or when I should set json_mode to False?
Thank you very much for your response!
I think if you want the response to be in JSON format, you can always set `json_mode` to `True`. As you can see in `rdagent/oai/backend/base.py` (lines 548-549), `json_mode` is actually a custom parameter we defined ourselves:

```python
if response_format is None and json_mode:
    response_format = {"type": "json_object"}
```
If you want a more comprehensive understanding, you can refer to the different approaches shown in test/oai/test_completion.py—as we support all of them.
Thanks for your answer!