Calvin Smith
This is great, I've been wanting to test this idea for a while and your description of the problem/solution is spot-on. I'll spend some more time digging into this in...
> Hold off on your evaluation @csmith49. The current condensation is buggy. The wrong events are being removed, because I assumed the list of events matched the list of messages,...
@happyherp I finished taking a look at the data. To test this condenser, I ran three runs over a subset of 50 SWE-bench instances with a max of 150 iterations...
I think it might have been #7781 -- there's still a big token consumption spike when the summaries are produced because we can't use the cache (if you look at...
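For context on the cache point above, here is a minimal, hypothetical sketch (not OpenHands code) of why producing a summary causes a token spike: providers reuse only the longest unchanged prefix of the prompt, and a condensation rewrites that prefix, so the next call gets almost no cache hit.

```python
# Hypothetical sketch: prefix caching vs. condensation.
# A prompt cache reuses the longest shared prefix between consecutive
# prompts; replacing early events with a summary changes the prefix.

def cached_prefix_len(prev_prompt, next_prompt):
    """Number of leading items the two prompts share, i.e. what a
    prefix cache could reuse on the second call."""
    n = 0
    for a, b in zip(prev_prompt, next_prompt):
        if a != b:
            break
        n += 1
    return n

# Normal turn: the new prompt extends the old one, so the entire old
# prompt is a cache hit.
turn_1 = ["sys", "e1", "e2", "e3"]
turn_2 = turn_1 + ["e4"]
assert cached_prefix_len(turn_1, turn_2) == 4

# Condensation turn: early events are replaced by a summary, so only
# the system message still matches; the call pays full price again.
turn_3 = ["sys", "summary(e1..e3)", "e4"]
assert cached_prefix_len(turn_2, turn_3) == 1
```

The sketch is deliberately list-based; real prompts are tokenized, but the prefix-matching behavior is the same.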
> I looked into that. Turns out there are two LLM calls happening during a condensation by LLMSummarizingCondenser, and their token metrics are not included in the trajectory.json. I made an...
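To illustrate the accounting gap described in the quote above, a hedged sketch (names are illustrative, not OpenHands APIs): a trajectory-level token total that only sums the agent's LLM calls will undercount whenever the condenser makes its own calls.

```python
# Hypothetical sketch: the condenser's extra LLM calls must be counted,
# or the trajectory-level token total undercounts real usage.

def total_tokens(agent_calls, condenser_calls):
    """Sum prompt + completion tokens over every LLM call, including
    the extra calls a summarizing condenser makes."""
    all_calls = agent_calls + condenser_calls
    return sum(c["prompt_tokens"] + c["completion_tokens"] for c in all_calls)

agent_calls = [{"prompt_tokens": 1200, "completion_tokens": 300}]
condenser_calls = [  # e.g. two calls made during one condensation
    {"prompt_tokens": 800, "completion_tokens": 150},
    {"prompt_tokens": 900, "completion_tokens": 160},
]

# Counting only agent calls misses the condensation cost entirely.
assert total_tokens(agent_calls, []) == 1500
assert total_tokens(agent_calls, condenser_calls) == 3510
```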
I'm still investigating this issue but have yet to successfully recreate it on my end. Running with the provided Docker command or in [CLI](https://docs.all-hands.dev/modules/usage/how-to/cli-mode)/[headless](https://docs.all-hands.dev/modules/usage/how-to/headless-mode) mode all seems to work without...
> @csmith49 Root cause: [Settings](https://github.com/All-Hands-AI/OpenHands/blob/92b8d55c2db6884e7883065a9003fc093903f6c2/openhands/server/settings.py#L5) class is not converted. I've seen OpenHands and Copilot suggest that as the problem but I can't seem to square that with the control flow...
> Did you send any message in that chat? I believe I did send chat messages, and everything worked as expected. > @csmith49, here the `api_key` data type is changed....
@xingyaoww Any thoughts on moving the SWE-gym control flow to the mode mechanism used here?
> @csmith49 just so I understand, will you be finishing up this PR once you are done with your current work? Or does your current work also include what this...