Curt Siffert

Results 23 comments of Curt Siffert
I poked at this for a while with my gpt-oss-20b model. I used ollama to build a 32k-context variant of it, but that still didn't help. It's unclear what makes it...

I did something similar with gpt-oss:20b, creating a 32k-context model, and my request still gets forgotten, just as it does with the vanilla `codex --oss` with `ollama serve` running. This...
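For reference, the usual way to build a larger-context variant in ollama is a Modelfile that raises `num_ctx` and an `ollama create` from it; a minimal sketch (the `-32k` tag name is just an illustrative choice):

```shell
# Modelfile: derive a 32k-context variant from the stock gpt-oss:20b
cat > Modelfile <<'EOF'
FROM gpt-oss:20b
PARAMETER num_ctx 32768
EOF

# Build the variant under a new tag and serve it
ollama create gpt-oss:20b-32k -f Modelfile
ollama serve
```

Note that raising `num_ctx` only changes what the server will accept; it doesn't fix anything on the client side if codex itself is dropping or truncating the request.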

When I use the LMStudio server instead of ollama and load gpt-oss-20b there with expanded context, initial indications from the codex CLI look good; I've had some complete exchanges with changes...