Support chunked responses
Hey!
A very old but still functional feature of HTTP is responding to the browser with chunked HTML. Instead of sending your page back once it is fully built on the backend, you can send "chunks" of text/html that the browser will incrementally parse and render into the DOM.
Recently, someone used that feature for SEO-friendly, fast-loading pages. You send the browser an HTML page containing "loading placeholders" with id attributes, and when your server is done rendering a more complicated component, one it had to call two databases and an API for, it sends the new HTML embedded in a web component that automatically finds the identified HTML node it has to replace and then unhooks itself from the "framework". That leaves you with an async HTML page, sent to the browser in chunks, allowing very fast first loads and then rendering the rest of the page as the server builds it. The author calls this htms. Bundled with htmx, it transforms HTML, imo, into what it was always supposed to be: the engine in HATEOAS.
Yielding a chunked body to Wisp would let us enable this HTTP feature. In my proof of concept, I start three concurrent processes that each have to build an async HTML component. Then, using the standard Gleam library, I create a yielding pipeline that first renders the simple HTML synchronously, followed by three yielders that each call an actor to receive the processed HTML from it, and finally a yielder that "cleans" my DOM of the web components.
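Roughly, the pipeline looks like this. This is a simplified sketch rather than the actual proof of concept: the function and parameter names, the five second timeout, and the placeholder/cleanup markup are only illustrative, and the resulting yielder would still need something on Wisp's side to write each element out as an HTTP chunk.

import gleam/erlang/process.{type Subject}
import gleam/yielder.{type Yielder}

pub fn html_chunks(
  shell: String,
  workers: List(Subject(String)),
) -> Yielder(String) {
  // 1. First chunk: the synchronous page shell, containing placeholder
  //    nodes such as <div id="slow-component">loading...</div>.
  let head = yielder.single(shell)

  // 2. One chunk per component: block until the worker process sends its
  //    rendered HTML, already wrapped in the web component that swaps
  //    itself into the matching placeholder and then unhooks.
  let components =
    yielder.from_list(workers)
    |> yielder.map(fn(subject) {
      case process.receive(subject, 5000) {
        Ok(html) -> html
        Error(Nil) -> "<!-- component timed out -->"
      }
    })

  // 3. Last chunk: clean the helper web components out of the DOM.
  let cleanup =
    yielder.single("<script>/* remove the swap web components */</script>")

  head
  |> yielder.append(components)
  |> yielder.append(cleanup)
}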
Maybe there is a better way to implement that! But chunked responses are part of the standard HTTP specification, and I thought Wisp could handle them, making it capable of more async work. Wisp currently lacks SSE and WebSockets because of the complexity behind typing their messages; those events are more of a backend concern, need thorough typing, and therefore more design work. For streaming responses, though, we can use yielders, and those can be typed with what Wisp already handles, since we can constrain them to strings/bytes.
Would that be a thing the Wisp community would like?
Could be a good feature to support, but a yielder is unsuited to the task in my opinion; we would need something else. If you look in the Ewe and Mist repos you'll see I've opened discussions about moving from yielders to some other design there.
What use cases do you have for chunked responses? Having those will inform the design in Wisp.
I need to be able to stream the response to the browser as text/html so that it can incrementally render it into the DOM. So the body needs to be "written to" again whenever some part of the HTML is updated.
Could you expand on that please 🙏 so we can understand the use case and the needs? You're describing a solution rather than the problem.
I'm not 100% sure it's the same use case as the one I'm looking for, but I think it is.
My problem is:
I am using a JS library called datastar (kind of htmx on steroids). It uses standard SSE requests (https://data-star.dev/reference/sse_events): the browser makes a request (with Accept: text/event-stream), and the server responds with Content-Type: text/event-stream, sending 1 to N bits of data AND closing the connection once it has finished its job.
In Go, you can have a handler that counts from 1 to 10 and then returns, thus closing the request.
In Gleam, you can do that using mist.server_sent_events, but there's no wrapper in Wisp, and the mist docs say it's not possible to end the connection server-side. So I think that's another kind of SSE connection that isn't the same as this simple "SSE HTTP request"?
Having a way to "stream" some bits in a simple request might be a simpler approach, and it could cover my use case as well as the HTML streaming use case?
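For reference, each of those "bits of data" is just a plain SSE event on the wire. With the datastar patch-elements event used in the snippet below, one of them looks roughly like this (field values taken from that snippet, each event terminated by a blank line, per the SSE format):

event: datastar-patch-elements
id: count_3
data: elements <p id="count">count is 3</p>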
-- EDIT
I did succeed in closing the request properly, I think, by just calling actor.stop(), so I think my use case is doable in mist as-is, although verbose:
import gleam/bool
import gleam/erlang/process
import gleam/http/request.{type Request}
import gleam/http/response.{type Response}
import gleam/int
import gleam/otp/actor
import gleam/string
import gleam/string_tree
import logging
import mist.{type Connection, type ResponseData}
import repeatedly

// The SSE actor's message type (not shown in the original snippet).
type Msg {
  Count(Int)
}

fn count_updates_page(req: Request(Connection)) -> Response(ResponseData) {
  mist.server_sent_events(
    req,
    initial_response: response.new(200),
    init: fn(subject) {
      // Send ourselves a Count message every second.
      let repeater =
        repeatedly.call(1000, 0, fn(state, _count) {
          process.send(subject, Count(state))
          state + 1
        })
      Ok(actor.initialised(repeater))
    },
    loop: fn(repeater, message, connection) {
      case message {
        Count(count) -> {
          // After ten events, stop the repeater and the actor, which
          // closes the SSE response.
          use <- bool.lazy_guard(count > 10, return: fn() {
            logging.log(logging.Info, "Stopping count")
            repeatedly.stop(repeater)
            actor.stop()
          })
          let event =
            mist.event(string_tree.from_string(
              "elements <p id=\"count\">count is "
              <> int.to_string(count)
              <> "</p>",
            ))
            |> mist.event_name("datastar-patch-elements")
            |> mist.event_id("count_" <> int.to_string(count))

          case mist.send_event(connection, event) {
            Ok(_) -> {
              logging.log(logging.Info, "sent event: " <> string.inspect(event))
              actor.continue(repeater)
            }
            Error(_) -> {
              repeatedly.stop(repeater)
              actor.stop()
            }
          }
        }
      }
    },
  )
}
SSE is different from chunked responses, so that use case wouldn't motivate adding chunked responses, I'm afraid.