Support for chunked transfer encoding

Open carlos-verdes opened this issue 2 years ago • 38 comments

Hi, I have a server that responds with an HTTP chunked response to deliver data as it becomes available (this also reduces memory consumption on the server): https://en.wikipedia.org/wiki/Chunked_transfer_encoding

When I make a direct call with my browser, I can see the results pop up as they are flushed from the server (as expected). However, if I use hx-get, I don't see anything on the screen until the full response has been sent (not the expected behavior).

I see there is support for SSE and WebSockets; is there a plan to support this feature as well?
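
To illustrate, here's a minimal sketch (not my actual server) of this kind of handler in Node; the http module switches to Transfer-Encoding: chunked automatically when you write the body without setting a Content-Length:

// Minimal sketch: with no Content-Length header, Node sends each
// res.write() to the client as a chunk of a chunked response.
const http = require('http');

http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.write('<li>first result</li>');    // flushed to the browser right away
  setTimeout(() => {
    res.write('<li>second result</li>'); // arrives about a second later
    res.end();                           // terminates the chunked stream
  }, 1000);
}).listen(8080);

With a direct browser request the two lines appear a second apart; with hx-get nothing shows until res.end().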

carlos-verdes avatar Oct 20 '23 08:10 carlos-verdes

This doesn't answer your question, but have you considered using the newer Streams API instead of chunked transfer encoding? https://developer.mozilla.org/en-US/docs/Web/API/Streams_API
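
For reference, consuming a streamed response with it looks roughly like this (a sketch; the URL and target element are placeholders):

// Sketch: render a streamed fetch() response incrementally via the Streams API.
async function streamInto(url, target) {
  const res = await fetch(url);
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let html = '';
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    html += decoder.decode(value, { stream: true });
    target.innerHTML = html; // re-render with whatever has arrived so far
  }
}

streamInto('/results', document.querySelector('#results'));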

You might also consider using a service worker to fix this problem more immediately than htmx might get around to it - it'll intercept all network requests and you could deal with it as needed (as well as do LOTS more)
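
A skeleton of that approach (sw.js; this pass-through does nothing by itself, but response.body is a ReadableStream you could transform chunk by chunk):

// Sketch of a service worker intercepting fetches; a real handler could
// rewrite or re-stream the chunks instead of passing them through.
self.addEventListener('fetch', (event) => {
  event.respondWith(
    fetch(event.request).then((response) =>
      new Response(response.body, {
        status: response.status,
        headers: response.headers,
      })
    )
  );
});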

nickchomey avatar Oct 20 '23 15:10 nickchomey

I use that API to read from the server when doing SPA development; that's why I'm asking whether htmx will support this feature or not.

The good thing about chunked transfer encoding is that you don't need to change the protocol: most of HTTP, and in fact htmx itself, just works. The only problem is that the behavior is not as expected (htmx waits for the full response to be received before rendering anything back to the user).

carlos-verdes avatar Oct 23 '23 05:10 carlos-verdes

I really need this feature (HTTP streaming).

joetifa2003 avatar Nov 06 '23 19:11 joetifa2003

It would be lovely to see this supported

sjc5 avatar Nov 14 '23 15:11 sjc5

+1

Example use case: streaming search results. Instead of complex "infinite scroll" or other convoluted result-delivery schemes, we could just start pushing results to the client as soon as we get the first hits from the database; the client could start loading and rendering images etc. for the first entries in the listing right away. Ohh, it would be so straightforward and beautiful, so old school in the bestest of ways.
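
On the server it could be as simple as this sketch (searchDatabase is a stand-in for whatever yields hits incrementally):

// Sketch: write each hit as soon as the (hypothetical) searchDatabase()
// async iterator yields it; the response goes out to the client as chunks.
const http = require('http');

http.createServer(async (req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/html' });
  for await (const hit of searchDatabase(req.url)) {
    res.write(`<li>${hit.title}</li>`); // the client can render this hit immediately
  }
  res.end();
}).listen(8080);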

jtoppine avatar Nov 14 '23 21:11 jtoppine

I send all collections from my backend using streams, partly to avoid memory pressure on big collections, so for me it's the natural thing to do.

carlos-verdes avatar Nov 16 '23 06:11 carlos-verdes

Had to close the above PR but I think the existing extension mechanism should be more than sufficient to implement this. Would love if someone wanted to take that on.

alexpetros avatar Dec 20 '23 19:12 alexpetros

@alexpetros I'm on it ;)

douglasduteil avatar Dec 26 '23 10:12 douglasduteil

Are you doing extension for this @douglasduteil ? Can you share link to the PR when ready?

carlos-verdes avatar Dec 26 '23 18:12 carlos-verdes

🎉 https://github.com/douglasduteil/htmx.ext...chunked-transfer 🎉

to @carlos-verdes it's christmas time again 🎄

Install

$ npm install htmx.ext...chunked-transfer
<script src="https://unpkg.com/htmx.ext...chunked-transfer/dist/index.js"></script>

Usage

<body hx-ext="chunked-transfer">
  ...
</body>
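
For example (hypothetical endpoint and ids), chunks of the response should be swapped into the target as they arrive:

<div hx-ext="chunked-transfer">
  <button hx-get="/search" hx-target="#results">Search</button>
  <div id="results"><!-- partial content appears here as chunks arrive --></div>
</div>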

:warning: It's a very early version that I'm not using myself

douglasduteil avatar Dec 26 '23 20:12 douglasduteil

I don't know if there is a plan to add this to htmx core or to rely on extensions, but I thought this was a good example of the functionality folks are looking for: https://livewire.laravel.com/docs/wire-stream. The ability to append vs. replace is a nice addition as well.

I would bet a good deal of this is around streaming back AI-generated content. While using SSE or WebSockets is an option, it adds complexity depending on the surrounding infrastructure. Chunked transfer encoding feels clean because the connection is simply closed once all the data is sent, whereas an SSE connection has no real "polite close" and has to be torn down explicitly, and WebSockets bring their own baggage.

The other simplicity comes from not having to deal with channels for SSE or WebSockets on the server when there are multiple clients; when you only want to send back to the sender, the WebSocket/SSE solutions feel heavyweight.

mattbrandman avatar Feb 04 '24 05:02 mattbrandman

I have an ES5 version of an extension that supports the chunked encoding that I've been using internally at the company I work for.

https://github.com/JEBailey/htmx/blob/master/src/ext/chunked.js

JEBailey avatar Apr 25 '24 13:04 JEBailey

So I originally closed #2101 because 2.0 was coming up and we weren't going to make that happen in time. I'm seeing some compelling use-cases and @douglasduteil's extension looks like it's been working. Are people using it? What's the case for including this in core?

alexpetros avatar Aug 09 '24 15:08 alexpetros

one use case would be a fairly simple chatbot app that supports streaming. no need for websockets, no need for server-sent events, no need for keeping a connection to the server. you could simply make a request, the server sends a Transfer-Encoding: chunked response, and the reply is incrementally swapped/appended into the respective chat bubble.
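
a sketch of the server side (express-style handler; generateReply is a stand-in for the model call):

// sketch: stream a chatbot reply token by token; with no Content-Length,
// node sends the body as Transfer-Encoding: chunked.
app.get('/chat', async (req, res) => {
  res.setHeader('Content-Type', 'text/html');
  for await (const token of generateReply(req.query.q)) { // hypothetical generator
    res.write(token); // each token reaches the chat bubble as it is produced
  }
  res.end(); // the end of the response marks the end of the reply
});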

fabge avatar Aug 09 '24 19:08 fabge

> So I originally closed #2101 because 2.0 was coming up and we weren't going to make that happen in time. I'm seeing some compelling use-cases and @douglasduteil's extension looks like it's been working. Are people using it? What's the case for including this in core?

Hi @alexpetros,

Yes, I'm using @douglasduteil's extension in https://github.com/runeksvendsen/haskell-function-graph and it's working for me. Thank you @douglasduteil!

The case for including it in core is that it solves a very generic problem: you don't want the user to wait until the very last part of your page has been received before the first part is shown. The larger the time difference between the backend having the first and the last result available, the worse this problem is.
In my case, I have a page that includes, in this order: (1) a list of results, where the first result is usually available to the backend very quickly (within a few milliseconds) while the last result can take an additional second or so to become available; followed by (2) an SVG graph that's slow to generate because it calls out to a CLI executable. Without this extension, the user has to wait around two seconds to see the first results, even though they're available to the backend (and sent to the client) almost immediately.

runeksvendsen avatar Aug 10 '24 08:08 runeksvendsen