feat: Support for logging session request-response pairs as OpenAI messages
What does this PR do?
Fixes #7535
This pull request introduces a new trace logging feature for LLM request-response pairs, allowing users to save detailed traces of model interactions for debugging or auditing purposes. It adds a `--trace-dir` CLI option (also configurable via the `OPENCODE_TRACE_DIR` environment variable), implements a `TraceLogger` utility, and integrates trace logging into the LLM streaming pipeline.
Additionally, it has some workarounds for project directory handling for the run command due to bun dev issues. I can revert those.
The most important changes are:
Trace Logging Infrastructure:
- Added a new `TraceLogger` utility (`trace-logger.ts`) to create, update, and persist detailed trace logs of LLM requests and responses, including errors and system info. Traces are saved as JSON files in a configurable directory.
- Introduced a `--trace-dir` CLI option and `OPENCODE_TRACE_DIR` env variable to enable trace logging, and initialized the logger during CLI startup.
- Integrated trace logging into the LLM streaming pipeline: traces are created for each request, updated with streamed response data or errors, and written to disk upon completion.
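For concreteness, here is a simplified sketch of the shape of the logger. The class, method, and field names below are illustrative only and do not match `trace-logger.ts` exactly; see the diff for the real implementation.

```ts
import { mkdir, writeFile } from "node:fs/promises"
import { join } from "node:path"
import os from "node:os"

// Illustrative trace shape: request/response payloads, errors, and system info.
interface Trace {
  id: string
  sessionID: string
  createdAt: string
  system: { platform: string; arch: string; node: string }
  request?: unknown
  response?: unknown
  error?: unknown
}

export class TraceLogger {
  private traces = new Map<string, Trace>()

  constructor(private dir: string) {}

  // Create a trace at the start of an LLM request.
  create(sessionID: string): Trace {
    const trace: Trace = {
      id: Math.random().toString(36).slice(2, 10),
      sessionID,
      createdAt: new Date().toISOString(),
      system: { platform: os.platform(), arch: os.arch(), node: process.version },
    }
    this.traces.set(trace.id, trace)
    return trace
  }

  // Merge streamed response data or an error into an in-flight trace.
  update(id: string, patch: Partial<Trace>) {
    const trace = this.traces.get(id)
    if (trace) Object.assign(trace, patch)
  }

  // Persist one trace as a JSON file named after its timestamp, session, and trace id.
  async flush(id: string) {
    const trace = this.traces.get(id)
    if (!trace) return
    await mkdir(this.dir, { recursive: true })
    const name = `${trace.createdAt.replace(/[:.]/g, "-")}_${trace.sessionID}_trace_${trace.id}.json`
    await writeFile(join(this.dir, name), JSON.stringify(trace, null, 2))
    this.traces.delete(id)
  }
}
```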
- Added an env flag to disable the automatic plugin installation into `node_modules` that is triggered simply because a `.opencode` directory exists.
  - This is necessary when doing large-scale data generation. Each execution of `opencode run` happens in a separate directory with its own `.opencode` configuration and permissions. Without the flag, every run repeats the network I/O to download and install the 6.4 MB opencode package into `node_modules`, which blows up disk space as well as network bandwidth.
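Conceptually, the check is a guard along these lines. The env variable name below is a placeholder for illustration, not necessarily the flag used in this PR:

```ts
import { existsSync } from "node:fs"
import { join } from "node:path"

// Placeholder name; the actual flag in the diff may differ.
const SKIP_FLAG = "OPENCODE_DISABLE_PLUGIN_AUTOINSTALL"

export function shouldInstallPlugins(projectDir: string): boolean {
  // Previously: the presence of a .opencode directory alone triggered installation.
  const hasConfig = existsSync(join(projectDir, ".opencode"))
  // With the flag set, skip the download/install even if .opencode exists.
  const disabled = process.env[SKIP_FLAG] === "1" || process.env[SKIP_FLAG] === "true"
  return hasConfig && !disabled
}
```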
CLI and Project Directory Handling:
- Added a `--project-dir` option to the `run` command, allowing users to specify the working directory for project execution. All relevant paths and server initialization now respect this directory.
  - I needed this because running `opencode run` from the directory I wanted wasn't working. Maybe this isn't strictly necessary and I just didn't use it properly with the bun dev setup.
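Roughly, the option just resolves to an absolute path that everything downstream is rooted in. This is a simplified sketch, not the exact wiring of the `run` command:

```ts
import { resolve } from "node:path"

interface RunArgs {
  projectDir?: string
}

// Fall back to the current working directory when --project-dir is not given.
export function resolveProjectDir(args: RunArgs): string {
  return args.projectDir ? resolve(args.projectDir) : process.cwd()
}

// Downstream path handling and server startup would then use this root,
// e.g. resolve(resolveProjectDir(args), ".opencode").
```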
Model Data Loading Fix:
- Replaced a macro import for model metadata with a runtime function that fetches model data from disk or the network, ensuring compatibility with browser conditions and `bun run`.
  - Again, this seems to be an issue with the bun dev / bun run setup. It may not be necessary, and I can revert it in that case.
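In spirit, the replacement is a disk-first, network-fallback loader like the sketch below. The cache path and URL are placeholders, not the exact values used in the PR:

```ts
import { readFile, writeFile, mkdir } from "node:fs/promises"
import { join, dirname } from "node:path"

const CACHE_PATH = join(process.env.HOME ?? ".", ".cache", "opencode", "models.json") // assumed location
const MODELS_URL = "https://example.com/models.json" // placeholder URL

export async function loadModels(): Promise<unknown> {
  try {
    // Prefer the on-disk copy so `bun run` works without network access.
    return JSON.parse(await readFile(CACHE_PATH, "utf8"))
  } catch {
    // Otherwise fetch at runtime instead of relying on a bundler macro.
    const res = await fetch(MODELS_URL)
    if (!res.ok) throw new Error(`failed to fetch model data: ${res.status}`)
    const text = await res.text()
    await mkdir(dirname(CACHE_PATH), { recursive: true })
    await writeFile(CACHE_PATH, text)
    return JSON.parse(text)
  }
}
```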
How did you verify your code works?
I ran the following command:
`opencode run {problem_text} --project-dir {task_dir} --trace-dir {trace_dir}`
And here are some traces:
- 2026-01-09T22-43-08-873Z_ses_45b128da9ffevGAq3ugKoAeozK_trace_mk7gphvn_1smwbk5h.json
- 2026-01-09T22-43-40-672Z_ses_45b128da9ffevGAq3ugKoAeozK_trace_mk7gpm2h_8cwrj23j.json
- 2026-01-09T22-43-45-843Z_ses_45b128da9ffevGAq3ugKoAeozK_trace_mk7gqam5_fubms5zt.json