Render streamed input incrementally
I wrote a simple CLI for talking to LLM APIs, and I pipe its markdown-formatted output into `glow` for rendering:
```shell
function ai() {
  ~/repos/llm-cli/main.ts "$@" | glow
}
```
I'd like to use streaming responses from LLM APIs, write the result to stdout chunk by chunk, and have `glow` render it incrementally. But it seems `glow` always reads the full input before rendering:
https://github.com/charmbracelet/glow/blob/2430b0af3fd83bf776875da464a885bdb5f88d38/main.go#L246-L247
I understand why it is that way; I'm sure it's dramatically simpler, and it's certainly what I would have done.
## Alternatives
- I could simply not pipe to `glow` when I have a streaming response (or stream raw markdown to stdout and then re-render with `glow` once it's done)
- Use your lovely `mods` CLI instead, which does support streaming but does not support Claude
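For what it's worth, the first alternative can be sketched as a small wrapper. This is only a hedged sketch using the `ai` function and `main.ts` path from my setup above, and it assumes `glow` is on `PATH`:

```shell
# Sketch of the first alternative: show the raw markdown while the
# response streams, then re-render the finished text with glow.
# Assumes main.ts writes chunks to stdout as they arrive.
ai() {
  local out
  out=$(~/repos/llm-cli/main.ts "$@" | tee /dev/stderr)  # raw, live output
  printf '%s\n' "$out" | glow --style auto               # final styled render
}
```

You lose styling while the response is in flight, but the stream stays readable and you still get a proper render at the end.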
Update for anyone interested in this: I was able to solve this outside of `glow` by accumulating the input and clearing and re-rendering the whole thing on each chunk.
```typescript
import $ from "jsr:@david/dax"

let inputBuffer = ""
const decoder = new TextDecoder()
for await (const chunk of Deno.stdin.readable) {
  inputBuffer += decoder.decode(chunk)
  // --style auto is there to force it to output styled
  // https://github.com/charmbracelet/glow/blob/2430b0a/main.go#L158
  const output = await $`glow --style auto`.stdinText(inputBuffer).text()
  console.clear()
  console.log(output)
}
```
https://github.com/charmbracelet/glow/assets/3612203/67b888e8-2224-484b-b71b-516690629c4e