Interactive debugging support
Is your feature request related to a problem? Please describe.
FEEDBACK WANTED
1. Problem Statement
AI agents (like those in Gemini CLI, Cursor, etc.) are becoming very effective at static analysis—reading code, understanding logic, and proposing changes.
Currently, an AI's only method for runtime debugging is to:
- Read the source code and form a hypothesis.
- Add `console.log` statements.
- Run the app and read the console output.
- Remove the `console.log` statements.
This "printf debugging" workflow is slow, intrusive (can create "heisenbugs"), and token-inefficient. Most importantly, it fails to solve entire classes of complex bugs related to application state, silent errors, or race conditions.
2. Problems We Want to Solve
We believe there are three main categories of problems that AI agents currently cannot solve. We would like to know which of these are the most painful.
**Problem 1: Non-Intrusive Inspection (The "Logpoint" Problem)**

An agent needs to verify a hypothesis by inspecting a variable's value at a specific line of code without modifying the source files.
- Goal: "I need to know the value of `movie.id` on `MovieCard.vue:129` when the user clicks the button."
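This pattern can be emulated over the Chrome DevTools Protocol (CDP) without touching source files: a breakpoint whose condition logs a value and then evaluates to `false` never pauses execution, which is how DevTools' own Logpoints work. A minimal sketch (the URL, line, and expression are illustrative, and the transport to the DevTools target is omitted):

```javascript
// Build the params for a Debugger.setBreakpointByUrl CDP command that acts
// as a logpoint: log an expression, never pause.
function buildLogpoint(url, line, expression) {
  return {
    method: 'Debugger.setBreakpointByUrl',
    params: {
      url,                  // authored URL (source maps permitting)
      lineNumber: line - 1, // CDP line numbers are 0-based
      // Log the expression, then evaluate to false so execution continues.
      condition: `console.log(${expression}), false`,
    },
  };
}

const msg = buildLogpoint('webpack://src/components/MovieCard.vue', 129, 'movie.id');
// msg.params.condition === 'console.log(movie.id), false'
```

Because the logpoint lives in the debugger, not the source, removing it is just `Debugger.removeBreakpoint` and leaves no diff behind.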
**Problem 2: Discovering "Unknown Unknowns" (The "Event-Breakpoint" Problem)**

An agent needs to find the unknown line of code that is responsible for a known runtime event.
- Goal (Exceptions): "A silent `try...catch` is hiding an error. I need to find the exact line that threw the error and inspect the local variables at that moment, before the call stack is lost."
- Goal (DOM): "Something is adding a `disabled` attribute to my button, but I don't know what. I need to find the exact line of JavaScript that modified the DOM, even if it's hidden inside a framework's rendering logic."
- Goal (Events): "A generic `click` listener is firing by mistake. I need to find which listener is the culprit."
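All three goals map onto existing CDP commands; a sketch of the payloads an agent would send (the method names are real CDP methods, but the wiring to a DevTools target is omitted and the `nodeId` is a placeholder):

```javascript
// CDP commands corresponding to the three "unknown unknowns" goals above.
const eventBreakpoints = [
  // Exceptions: 'all' pauses on caught throws too, so locals can be
  // inspected inside a silent try...catch before the stack unwinds.
  { method: 'Debugger.setPauseOnExceptions', params: { state: 'all' } },

  // DOM: pause when any attribute of a given node changes. The nodeId
  // would come from an earlier DOM.querySelector call; 42 is a placeholder.
  { method: 'DOMDebugger.setDOMBreakpoint',
    params: { nodeId: 42, type: 'attribute-modified' } },

  // Events: pause whenever any 'click' listener fires, exposing the culprit.
  { method: 'DOMDebugger.setEventListenerBreakpoint',
    params: { eventName: 'click' } },
];
```

In each case the resulting `Debugger.paused` event carries the call stack, so the "exact line" falls out of the top frame rather than requiring any guesswork.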
**Problem 3: Deep Forensic Analysis (The "Breakpoint" Problem)**

When a hypothesis is wrong, an agent needs to do a full "forensic" analysis.
- Goal: "My hypothesis was wrong. I need to pause execution on a specific line, inspect the entire call stack and all local variables (scope), and then step forward to understand the application's flow."
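This pause-inspect-step loop also decomposes into a short sequence of CDP commands. A sketch of the sequence (real CDP method names; the transport, event handling, and the URL/line/objectId are illustrative):

```javascript
// The forensic loop from Problem 3, as the CDP commands an agent would issue.
const forensicSession = [
  // 1. Pause on a specific authored line.
  { method: 'Debugger.setBreakpointByUrl',
    params: { url: 'webpack://src/store/movies.ts', lineNumber: 57 } },

  // 2. When the Debugger.paused event arrives, its callFrames array already
  //    contains the full call stack; each frame's scopeChain holds object
  //    references that Runtime.getProperties expands into local variables.
  { method: 'Runtime.getProperties',
    params: { objectId: '<scope objectId from a paused callFrame>' } },

  // 3. Step forward to follow the application's flow.
  { method: 'Debugger.stepOver', params: {} },

  // 4. Resume once the hypothesis is confirmed or rejected.
  { method: 'Debugger.resume', params: {} },
];
```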
**Foundational Requirement:** All these problems must be solved in the context of the authored source code (e.g., `.ts`, `.vue`), not the bundled/minified output. Any solution must handle source maps transparently.
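For context on what "handle source maps transparently" entails: the translation between authored and generated positions bottoms out in decoding the `mappings` field of a Source Map v3 file, which is Base64 VLQ-encoded. A minimal decoder sketch (in practice one would use a library such as `source-map` rather than hand-rolling this):

```javascript
const B64 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';

// Decode one Base64 VLQ segment into its integer values.
// Each character carries 5 data bits plus a continuation bit; the low bit
// of each assembled value is its sign.
function decodeVlq(segment) {
  const result = [];
  let value = 0, shift = 0;
  for (const ch of segment) {
    const digit = B64.indexOf(ch);
    value += (digit & 31) << shift; // low 5 bits are data
    if (digit & 32) {               // bit 6 set: more digits follow
      shift += 5;
    } else {
      result.push(value & 1 ? -(value >>> 1) : value >>> 1);
      value = 0;
      shift = 0;
    }
  }
  return result;
}

// Each decoded 4- or 5-tuple is [generatedColumn, sourceIndex, originalLine,
// originalColumn, (nameIndex)], all stored as deltas from the prior segment.
// decodeVlq('AAAA') -> [0, 0, 0, 0]
// decodeVlq('C') -> [1]    decodeVlq('D') -> [-1]
```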
3. Key Use Cases (As Problems)
- As an... AI agent debugging a silent `try...catch` error, I want to know what the local variables were at the moment the error was thrown, before the stack is unwound.
- As an... AI agent debugging a UI bug, I want to find the exact line of JavaScript that is incorrectly adding the `disabled` attribute to my button, even if it's hidden inside a framework's rendering logic.
- As an... AI agent debugging a complex bug, I want to pause execution on a specific line, inspect the entire call stack and all local variables, and then step forward to understand the application's flow.
- As an... AI agent, I want to confirm my hypothesis that `movie.id` is `undefined` by logging its value on a specific line, without having to modify the file.
4. Feedback Requested
We are posting this to validate our assumptions before designing a solution. We would love to know:
- Does this capability (runtime debugging for AI) seem useful to you?
- (Please add a 👍 reaction to this issue if you would find this useful!)
- Which of the problems described above is the most painful or high-value for you?
- (Please help us rank them! e.g., "1. Problem 2 (Exceptions), 2. Problem 1 (Logpoints), 3. Problem 3 (Stepping)")
- What is your "80/20"?
- (Our research suggests "non-intrusive logging" (Problem 1) would be the 80% use case. Do you agree? Or is full interactive pausing (Problem 3) more critical?)
- How do you currently work around these limitations?
- (Are you manually adding `console.log` statements? Or do you take over from the AI and use DevTools yourself?)
- What did we miss?
- (Are there other runtime debugging scenarios or problems that your AI agent faces?)
Describe the solution you'd like
s/o
Describe alternatives you've considered
s/o
Additional context
No response
For me, in this order: Problem 1, 3, then 2.
Sometimes I think perhaps 3 (The "Breakpoint" Problem) is the most important. Adding the capability for the AI to set breakpoints and step through code (without easily hogging the context window) would be a game changer for sure. Very excited to see this implemented.
Relevant: https://github.com/ChromeDevTools/chrome-devtools-mcp/issues/567
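On the "without hogging the context window" concern: one plausible mitigation is to compress each `Debugger.paused` event into a one-line-per-frame summary before it reaches the model, rather than forwarding the raw CDP payload. A sketch (the sample event below mirrors CDP's `Debugger.paused` shape, trimmed to the fields used; names and line numbers are made up):

```javascript
// Summarize a paused state compactly: one line per call frame, capped.
function summarizePause(pausedEvent, maxFrames = 5) {
  return pausedEvent.callFrames.slice(0, maxFrames)
    .map((f, i) =>
      `#${i} ${f.functionName || '<anonymous>'} ` +
      `(${f.url}:${f.location.lineNumber + 1})`) // back to 1-based lines
    .join('\n');
}

const sample = {
  callFrames: [
    { functionName: 'addToCart', url: 'MovieCard.vue',
      location: { lineNumber: 128 } },
    { functionName: '', url: 'runtime-core.js',
      location: { lineNumber: 7039 } },
  ],
};
// summarizePause(sample) ->
// '#0 addToCart (MovieCard.vue:129)\n#1 <anonymous> (runtime-core.js:7040)'
```

The agent could then request `Runtime.getProperties` for a single frame only when it actually needs the locals, instead of carrying every scope in context.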
I was looking to see if there were logpoint-related tools in the mcp toolset. Would absolutely put that at the top of this list and it would be a useful increment on its own. Right now it's necessary to tell the agent to put in log statements and then redeploy (don't ask). If it could dynamically add and remove logpoints then that whole loop would be autonomous and it could explore true state to get better reasoning and problem solving.
Which of the problems described above is the most painful or high-value for you? Problem 3,2,1
Problem 1 is lightweight and useful - but I can also do that already locally ;). I found that as long as the log messages you add as an end user help the agent understand how to proceed, it's quite flexible.
Problems 2/3 are the really useful ones for me. I would love it if an MCP were able to pause / bring up the debugger when a condition happens, and inspect a snapshot in time of the DOM / CSSOM + call stack to analyze slightly more complicated timing-dependent issues (e.g. "element 'foo' is changing attribute 'bar'; please pause when that happens, track down why it is happening, attempt a fix, and report back").