
documentation: add how to use it with HTML streaming

aralroca opened this issue 2 years ago • 0 comments

Now, with "server actions", many RPCs can stream HTML directly and use DOM diffing to apply only the real changes to the live DOM. I think this is feasible with morphdom because it traverses with DFS, which matches the order in which stream chunks arrive, but I have tried to implement it and haven't managed to get it working.

It would be nice to add documentation on how to use morphdom with streaming.

If it helps, this is how I'm extracting the nodes from the chunks:

const START_CHUNK_SELECTOR = "S-C";
const START_CHUNK_COMMENT = `<!--${START_CHUNK_SELECTOR}-->`;
const decoder = new TextDecoder();
const parser = new DOMParser();

/**
 * Create a generator that extracts nodes from a stream of HTML.
 *
 * This is useful to work with the RPC response stream and
 * transform the HTML into a stream of nodes to use in the
 * diffing algorithm.
 */
export default async function* parseHTMLStream(
  streamReader: ReadableStreamDefaultReader<Uint8Array>,
  ignoreNodeTypes: Set<number> = new Set(),
  text = "",
): AsyncGenerator<Node> {
  const { done, value } = await streamReader.read();

  if (done) return;

  // Append the new chunk to the text with a marker.
  // This marker is necessary because without it, we
  // can't know where the new chunk starts and ends.
  text = `${text.replace(START_CHUNK_COMMENT, "")}${START_CHUNK_COMMENT}${decoder.decode(value)}`;

  // Find the start chunk node
  function startChunk() {
    return document
      .createTreeWalker(
        parser.parseFromString(text, "text/html"),
        128 /* NodeFilter.SHOW_COMMENT */,
        {
          acceptNode: (node) =>
            node.textContent === START_CHUNK_SELECTOR
              ? 1 /* NodeFilter.FILTER_ACCEPT */
              : 2 /* NodeFilter.FILTER_REJECT */,
        },
      )
      .nextNode();
  }

  // Iterate over the chunk nodes
  for (
    let node = getNextNode(startChunk());
    node;
    node = getNextNode(node)
  ) {
    if (!ignoreNodeTypes.has(node.nodeType)) yield node;
  }

  // Continue reading the stream
  yield* parseHTMLStream(streamReader, ignoreNodeTypes, text);
}
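
The subtle part above is the marker bookkeeping on `text`: the `<!--S-C-->` comment is removed from its old position and re-inserted immediately before the newly decoded chunk, so each re-parse only yields nodes that arrived after it. Here is a minimal, DOM-free sketch of just that string manipulation (`appendChunk` is an illustrative name, not part of the code above):

```typescript
const START_CHUNK_SELECTOR = "S-C";
const START_CHUNK_COMMENT = `<!--${START_CHUNK_SELECTOR}-->`;

// Same bookkeeping as in parseHTMLStream: remove the previous marker,
// then re-insert it immediately before the newly decoded chunk.
function appendChunk(text: string, chunk: string): string {
  return `${text.replace(START_CHUNK_COMMENT, "")}${START_CHUNK_COMMENT}${chunk}`;
}

let text = appendChunk("", "<ul><li>one</li>");
// → "<!--S-C--><ul><li>one</li>"
text = appendChunk(text, "<li>two</li></ul>");
// → "<ul><li>one</li><!--S-C--><li>two</li></ul>"
```

After every chunk, everything before the marker has already been yielded on a previous read, and everything after it is new.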

/**
 * Get the next node in the tree.
 * It uses depth-first search in order to work with the streamed HTML.
 */
export function getNextNode(
  node: Node | null,
  deeperDone?: boolean,
): Node | null {
  if (!node) return null;
  if (node.childNodes.length && !deeperDone) return node.firstChild;
  return node.nextSibling ?? getNextNode(node.parentNode, true);
}
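
To make the traversal order concrete, here is a self-contained sketch of the same depth-first logic typed against a minimal stand-in for DOM nodes (`MiniNode` and `makeNode` are illustrative helpers so this runs without a browser; they are not part of morphdom):

```typescript
// Minimal stand-in exposing only the DOM Node fields getNextNode touches.
interface MiniNode {
  name: string;
  childNodes: MiniNode[];
  firstChild: MiniNode | null;
  nextSibling: MiniNode | null;
  parentNode: MiniNode | null;
}

function makeNode(name: string, children: MiniNode[] = []): MiniNode {
  const node: MiniNode = {
    name,
    childNodes: children,
    firstChild: children[0] ?? null,
    nextSibling: null,
    parentNode: null,
  };
  children.forEach((child, i) => {
    child.parentNode = node;
    child.nextSibling = children[i + 1] ?? null;
  });
  return node;
}

// Same logic as getNextNode above, typed against MiniNode.
function getNextNode(node: MiniNode | null, deeperDone?: boolean): MiniNode | null {
  if (!node) return null;
  if (node.childNodes.length && !deeperDone) return node.firstChild;
  return node.nextSibling ?? getNextNode(node.parentNode, true);
}

// <div><p>a</p><p><span>b</span></p></div>
const tree = makeNode("div", [
  makeNode("p", [makeNode("a")]),
  makeNode("p", [makeNode("span", [makeNode("b")])]),
]);

const order: string[] = [];
for (let n = getNextNode(tree); n; n = getNextNode(n)) order.push(n.name);
// Depth-first, document order: p, a, p, span, b
```

This is the same order in which an HTML parser emits nodes as the markup streams in, which is why the generator above can yield nodes chunk by chunk.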

And this is how to use it:

const reader = res.body.getReader();

for await (const node of parseHTMLStream(reader)) {
  console.log(node);
}
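
For the morphdom integration itself, one workable starting point (a browser-only sketch, assuming the standard `morphdom(fromNode, toNode, options)` API; `morphFromStream` is a hypothetical helper name) is to sidestep per-node diffing entirely: accumulate the streamed HTML, re-parse it, and re-run morphdom against the partial document on every chunk. It doesn't use the node generator above, but it gives progressive rendering without having to synchronize two DFS traversals by hand:

```typescript
import morphdom from "morphdom";

// Sketch: diff the live DOM against the partially received document
// after each chunk. morphdom only mutates where the trees differ, so
// already-rendered content is left alone as later chunks arrive.
async function morphFromStream(res: Response, target: Element): Promise<void> {
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  const parser = new DOMParser();
  let html = "";

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    html += decoder.decode(value, { stream: true });
    const doc = parser.parseFromString(html, "text/html");
    // childrenOnly: keep `target` itself, morph only its subtree.
    morphdom(target, doc.body, { childrenOnly: true });
  }
}
```

Re-parsing the accumulated HTML each time is O(n) per chunk, so a true incremental version would morph only the nodes after the `<!--S-C-->` marker; this sketch is just the simplest correct baseline to document.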

aralroca · Feb 19 '24