
Logs that are too large are dropped

Open · bobcatfish opened this issue 2 months ago · 1 comment

What happened?

When a log entry is too large for the logging endpoint to accept (GCP in my case), an error occurs and the entry is dropped. For example:

│ ✖ {"stack":"Error: 3 INVALID_ARGUMENT: Log entry with size 438.0K exceeds maximum size of 256.0K\n    at           │
│   callErrorFromStatus (file:///home/user/gemini-cli/bundle/gemini.js:96853:21)\n    at Object.onReceiveStatus       │
│   (file:///home/user/gemini-cli/bundle/gemini.js:97533:70)\n    at Object.onReceiveStatus                           │
│   (file:///home/user/gemini-cli/bundle/gemini.js:97336:140)\n    at Object.onReceiveStatus                          │
│   (file:///home/user/gemini-cli/bundle/gemini.js:97302:178)\n    at                                                 │
│   file:///home/user/gemini-cli/bundle/gemini.js:110487:77\n    at process.processTicksAndRejections                 │
│   (node:internal/process/task_queues:77:11)\nfor call at\n    at ServiceClientImpl.makeUnaryRequest                 │
│   (file:///home/user/gemini-cli/bundle/gemini.js:97503:32)\n    at ServiceClientImpl.<anonymous>                    │
│   (file:///home/user/gemini-cli/bundle/gemini.js:97811:19)\n    at                                                  │
│   file:///home/user/gemini-cli/bundle/gemini.js:185003:25\n    at                                                   │
│   file:///home/user/gemini-cli/bundle/gemini.js:174149:16\n    at repeat                                            │
│   (file:///home/user/gemini-cli/bundle/gemini.js:174198:23)\n    at Task._apiCall                                   │
│   (file:///home/user/gemini-cli/bundle/gemini.js:174234:11)\n    at Task.run                                        │
│   (file:///home/user/gemini-cli/bundle/gemini.js:172548:35)\n    at BundleExecutor._runNow                          │
│   (file:///home/user/gemini-cli/bundle/gemini.js:172788:14)\n    at Timeout._onTimeout                              │
│   (file:///home/user/gemini-cli/bundle/gemini.js:172734:18)\n    at listOnTimeout                                   │
│   (node:internal/timers:581:17)","message":"3 INVALID_ARGUMENT: Log entry with size 438.0K exceeds maximum size of  │
│   256.0K","code":"3","details":"Log entry with size 438.0K exceeds maximum size of 256.0K","metadata":"[object      │
│   Object]","note":"Exception occurred in retry method that was not classified as transient","name":"Error"}         │

What did you expect to happen?

The log should probably be truncated in this case. Silently dropping it would at least be less messy than the current error, but I believe we still want the log emitted. I'm not quite sure how we want to truncate it (it would be great if https://github.com/open-telemetry/semantic-conventions/blob/main/docs/gen-ai/gen-ai-events.md provided some guidance), especially since these are JSON objects that we can't simply cut off without producing invalid JSON. One possible direction is sketched below.
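
Just a sketch, not an actual implementation in gemini-cli or the OTel SDK: rather than cutting the serialized blob, walk the structured event body and shrink the largest string values until the serialized size fits under the budget, so the result stays valid JSON. The helper names, marker, and budget below are made up for illustration.

```ts
// Hypothetical helper: shrink long string values in a JSON-serializable body
// until the serialized size fits a byte budget, keeping the structure intact.
const TRUNCATION_MARKER = '...[truncated]';

function truncateJsonStrings(body: unknown, maxBytes: number): unknown {
  let current = body;
  let maxStringLength = 64 * 1024; // start by capping individual strings at 64 KiB

  // Tighten the per-string cap until the whole body fits (or the cap bottoms out).
  while (Buffer.byteLength(JSON.stringify(current)) > maxBytes && maxStringLength >= 256) {
    current = capStrings(current, maxStringLength);
    maxStringLength = Math.floor(maxStringLength / 2);
  }
  return current;
}

function capStrings(value: unknown, maxLen: number): unknown {
  if (typeof value === 'string') {
    return value.length > maxLen ? value.slice(0, maxLen) + TRUNCATION_MARKER : value;
  }
  if (Array.isArray(value)) {
    return value.map((v) => capStrings(v, maxLen));
  }
  if (value !== null && typeof value === 'object') {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>).map(([k, v]) => [k, capStrings(v, maxLen)]),
    );
  }
  return value;
}
```

With something like this, the exporter would only have to drop an entry when even an aggressively truncated body is still over the limit.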

Client information

n/a

Login information

n/a

Anything else we need to know?

When telemetry is turned on, the user's prompt is logged by default, and with the changes I've been making in #9215 the entire request Parts, response Parts, and system instructions are also logged, so it's very easy to end up with entries larger than the 256k limit; a rough size estimate is sketched below.
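
To give a feel for how quickly this adds up, here is a back-of-the-envelope check. The event shape below is illustrative, not the exact gemini-cli telemetry schema: a single event carrying a long prompt plus full request/response parts crosses 256 KiB as soon as it is serialized.

```ts
// Rough size estimate for an illustrative event body (not the real schema):
// prompt + request parts + response parts + system instruction, serialized as JSON.
const LIMIT_BYTES = 256 * 1024;

const exampleEvent = {
  prompt: 'p'.repeat(120 * 1024),                  // a long pasted prompt
  requestParts: [{ text: 'r'.repeat(100 * 1024) }], // full request Parts
  responseParts: [{ text: 's'.repeat(80 * 1024) }], // full response Parts
  systemInstruction: 'i'.repeat(20 * 1024),
};

const sizeBytes = Buffer.byteLength(JSON.stringify(exampleEvent));
console.log(`event size: ${(sizeBytes / 1024).toFixed(1)}K, limit: ${LIMIT_BYTES / 1024}K`);
// Prints an event size around 320K, already well over the 256K limit.
```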

bobcatfish · Oct 23 '25 21:10

Thanks for the detailed report!

I'm triaging this for investigation.

silviojr · Oct 23 '25 22:10