Optionally execute code in LLM output
Obsidian auto-executes code blocks for certain languages, such as Mermaid. If you have a plugin like Dataview installed, `dataview` and `dataviewjs` code blocks are executed automatically too. Since LLM output is rendered with Obsidian's markdown renderer, code blocks in that output get auto-executed as well.
This is both good and bad:
- Good: LLM output can show a Mermaid diagram or Dataview result directly
- Bad: if the code is incorrect, you can't inspect the source inside the LLM output without copying the entire output into a note
A better approach is to not execute code blocks in the LLM output by default, but to leave a button that lets the user trigger the run manually.
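One way to sketch this "render inert, run on demand" idea is to rewrite fence languages in the LLM output before handing it to the renderer. Everything below is an assumption for illustration, not Copilot's actual API: the function name, and which languages count as auto-executing, depend on the user's installed plugins.

```javascript
// Assumption: these languages get auto-executed on render. The real set
// depends on which plugins (Dataview, etc.) the user has installed.
const AUTO_EXEC_LANGS = new Set(["mermaid", "dataview", "dataviewjs"]);

// Rewrite ```lang fences to inert ```markdown fences, remembering the
// original languages so a "Run" button could restore them later.
function neutralizeFences(markdown) {
  const originals = [];
  const rewritten = markdown.replace(/^```(\w+)[ \t]*$/gm, (match, lang) => {
    if (!AUTO_EXEC_LANGS.has(lang.toLowerCase())) return match;
    originals.push(lang);
    return "```markdown";
  });
  return { rewritten, originals };
}
```

The renderer then shows the block as plain code instead of executing it, and the recorded languages are enough to undo the swap when the user opts in.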
Discussion
Two sides of the argument:
- Auto code execution is enabled by user choice via other plugins like Dataview or Code Emitter, so it is not Obsidian's or Copilot's concern (at least at a conceptual level). The user explicitly asks those plugins to execute code blocks in Obsidian; Copilot Chat is simply not excluded from that.
- From the user's point of view, Copilot Chat is mainly a place for chat, so code shouldn't be executed there by default.
There's a case where a user has a JS auto-execution plugin, and the JS in LLM output gets auto-executed. This can be dangerous! There should instead be a "Run" option in the LLM output.
https://x.com/4confusedemoji/status/1833709872057057607
Ah, yeah. Hi, that's me. This is what I settled on for the moment, but Obsidian makes it extremely annoying to remove the outer block if there's more than one.
The most straightforward way to do this as a plugin that I can think of is to force-replace the code block processor's language string with 'markdown' and inject a button that swaps them back and reruns the render.
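The "swap them back and rerun" step described in the comment above could be sketched as the inverse pass: when the user clicks a (hypothetical) Run button, restore the original fence languages and let Obsidian re-render, which triggers execution. The function name and the `originals` bookkeeping are assumptions for illustration.

```javascript
// Sketch: undo the force-replace to 'markdown' for a manual "Run" action.
// `originals` holds the real fence languages in the order they were replaced.
function restoreFences(markdown, originals) {
  let i = 0;
  return markdown.replace(/^```markdown[ \t]*$/gm, (match) =>
    i < originals.length ? "```" + originals[i++] : match
  );
}
```

Note this naive pass would also hit any genuine ```markdown fences in the output; a real plugin would key off block positions (e.g. via its post-processor context) rather than the language string alone.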
Something like this appears to do everything you would need: https://github.com/mugiwara85/CodeblockCustomizer
In addition to the manual execution request for code blocks, is there any capability for the model to leverage the result of executed code as part of the ongoing context or response? This would enhance the interaction, especially in dynamic workflows where the output of the code could influence the subsequent steps or decisions.
Thank you for this wonderful plugin and for all your hard work in developing it. It’s truly appreciated!
Sounds like another agentic use case for Copilot Plus. Can you add this as a feature request here? Thanks! https://github.com/logancyang/obsidian-copilot/discussions/726