`extract_json_between_markers` doesn't handle responses missing JSON markers
When using this with Azure OpenAI, I found that the LLM responses were coming back as valid JSON, but `extract_json_between_markers` was rejecting them because it enforces a check for the JSON marker, which was not present in the response.
I can submit a PR that adds a fallback to try parsing the output as JSON without those markers, if you want.
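Roughly, the fallback I have in mind looks like this (a minimal sketch, not the actual PR; the function name matches the repo, but the regex and the exact marker format are my assumptions):

```python
import json
import re

def extract_json_between_markers(llm_output):
    """Extract JSON from an LLM response.

    First look for a ```json ... ``` fenced block; if no markers are
    present, fall back to parsing the whole output as raw JSON.
    """
    match = re.search(r"```json\s*(.*?)\s*```", llm_output, re.DOTALL)
    # Fallback: treat the entire (stripped) output as the JSON candidate.
    candidate = match.group(1) if match else llm_output.strip()
    try:
        return json.loads(candidate)
    except json.JSONDecodeError:
        # Neither marked nor raw output parsed as JSON.
        return None
```

With this, a response that is valid JSON but lacks the fenced markers still parses instead of returning `None`.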
Hi! Sure thing, if you think there would be a measurable improvement to the success rate. From our observations this can happen, but it's unclear whether it happens often, or more often than other types of JSON formatting issues!
I encountered raw JSON running `perform_review` with GPT-4o on Azure. The change in the PR worked there, but I have not run it against other models.
Interesting - this is quite a pathological case, since we ask for chain-of-thought steps as well as the JSON. We didn't observe formatting failures more than perhaps 1% of the time; are you observing a higher rate?
I only ran `perform_review` on a paper using GPT-4o on Azure. The response was always well-formatted JSON, but the method was returning `None` because the response did not include the JSON markers. With the fallback mechanism, it is parsed as valid JSON.
I used strictjson to reformat the responses; since I'm running locally, token usage is no longer an issue.