Calvin Smith

Results 29 comments of Calvin Smith

Hey @amirshawn, thanks for sharing your experience and the data. We've taken steps to ensure memory condensation is opt-in -- we're still running some experiments to figure out what impact...

And some responses to your other questions:

> t think 32k is not usable for larger more complex projects with lots of instructions. The 32k needs to be adjustable for...

Something is definitely strange. I can't identify any issues with the condenser code ([this test](https://github.com/All-Hands-AI/OpenHands/blob/ac680e76887d2ba850f1b8eeb35e8941be26cd7f/tests/unit/test_condenser.py#L307) is not the most robust, but I can't square it with dropping the new half)....

Hmm, it's possible one of those `Trimming prompt to meet context window limitations` messages came from the condenser itself. More precisely, we call out to the condenser during `CodeActAgent.step`, and...
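To illustrate why a trimming message could originate from the condenser itself: the condenser is invoked from inside the agent's step, before the prompt is built, so anything it logs appears mid-step. A minimal sketch of that call pattern (the class and method names here are assumptions for illustration, not the actual OpenHands API):

```python
# Hedged sketch: a condenser invoked during an agent step, before the
# LLM prompt is assembled. Names and the keep-policy are illustrative.

class Condenser:
    def condense(self, events: list[str]) -> list[str]:
        # Placeholder policy: keep short histories whole, otherwise
        # keep the first event plus the newest three.
        if len(events) <= 5:
            return events
        return events[:1] + events[-3:]


class Agent:
    def __init__(self, condenser: Condenser):
        self.condenser = condenser

    def step(self, history: list[str]) -> str:
        # The condenser runs here, so any trimming/condensing log
        # output is emitted from within step().
        condensed = self.condenser.condense(history)
        # Build the prompt from the condensed view of the history.
        return "\n".join(condensed)
```

Under this sketch, a long history is reduced before prompting, which is why trimming-style messages can surface even when no provider-side truncation happened.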

I cannot seem to recreate this behavior using our benchmarking infrastructure (subset of SWE-bench Verified, 250 max iterations, Claude 3.7 and the default condenser settings, OpenHands v0.28.1). The benchmarks also...

> @csmith49 I think you're not seeing it because these experiments above are on the initial session. Isn't that right? When running evals, there is no restore of an older...

I've added a quick PR that removes the messages. The truncation behavior should be unchanged: when we hit a max token limit, we drop the oldest half of the events....
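The truncation behavior described above can be sketched as follows. This is a hedged illustration of "drop the oldest half of the events when the token limit is hit", not the actual OpenHands implementation; the function name and signature are assumptions:

```python
# Hedged sketch of the described truncation: when the token budget is
# exceeded, drop the oldest half of the events and keep the newer half.

def truncate_events(events: list[str], token_count: int, max_tokens: int) -> list[str]:
    """Return the event list, halved from the front if over budget."""
    if token_count <= max_tokens:
        return events
    half = len(events) // 2
    return events[half:]  # keep the newer half


events = [f"event-{i}" for i in range(10)]
truncated = truncate_events(events, token_count=40_000, max_tokens=32_000)
# truncated now holds the newest five events
```

Note that under this policy a single over-budget step discards half the history at once, which matches the coarse-grained behavior being discussed.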

> I think it's definitely something we should be able to opt out of. Has it always worked this way?

The context truncation has worked this way for a long...

> I apologize for my rambling and most likely misunderstanding how this all works. I hope my feedback is at least valuable in some way!

The feedback is definitely useful!...