ts-liveview
demo: http/2+ may make first page load faster (and more)
demo: using http/2 for all resources may make the first page load even faster (maybe also for the protocol switch to ws) (yes, of course this is not the focus of the demo, but it helps the overall impression ;-)
current state:


This is a good suggestion. It seems trivial to make express work with http/2, but I'll study how to make websocket (currently using ws on the server side) work with http/2.
Also, the demo server is running behind http-proxy; I will upgrade it to http2-proxy to make the whole thing work.
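For reference, a minimal sketch of what serving over Node's built-in http2 module could look like (ports and response content here are illustrative). It uses h2c (no TLS) for brevity only; a browser-facing server would need `http2.createSecureServer` with a certificate, plus `allowHTTP1: true` so the HTTP/1.1 Upgrade handshake used by ws should keep working:

```typescript
import http2 from 'http2'
import { AddressInfo } from 'net'

// h2c server (no TLS), for brevity only -- browsers require
// http2.createSecureServer({ key, cert, allowHTTP1: true })
const server = http2.createServer()

server.on('stream', stream => {
  // every request arrives as an HTTP/2 stream
  stream.respond({ ':status': 200, 'content-type': 'text/plain' })
  stream.end('served over http/2')
})

// start listening and resolve with the chosen port
function start(): Promise<number> {
  return new Promise(resolve =>
    server.listen(0, () => resolve((server.address() as AddressInfo).port)),
  )
}

// make one request over an HTTP/2 client session
function get(port: number): Promise<string> {
  return new Promise((resolve, reject) => {
    const client = http2.connect(`http://localhost:${port}`)
    const req = client.request({ ':path': '/' })
    let body = ''
    req.setEncoding('utf8')
    req.on('data', chunk => (body += chunk))
    req.on('end', () => {
      client.close()
      resolve(body)
    })
    req.on('error', reject)
    req.end()
  })
}
```

The `start`/`get` helpers are just for demonstrating a round trip without a browser.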
I'm reading about http/2. As I understand it, the benefit of http/2 is that it allows the server to actively push resources (js/css) into the browser's cache upon the first GET request, allowing the whole download process to finish earlier.
However, if we inline the script and style in the initial http response, wouldn't the benefit be minimal?
Also, we need websocket to pass client events to the server, and to pass DOM update commands to the client, which doesn't seem to be supported by http/2. If we need to fire another http request to upgrade to websocket, it may not be beneficial overall?
I prefer to use websocket to pass client events instead of ajax, to save the overhead of an http request. I've learned that the http headers are encoded in a binary format with better compression in http/2, but that is still more overhead than sending a websocket message?
For the first visit, inlining styles and scripts may be beneficial, but for subsequent visits, having the styles and scripts in separate files may be more cache-friendly (and able to leverage the benefit of http/2?)
thanks for looking into details!
yes, you are right, one main advantage of http/2 is server push. But as far as I can tell, using websocket should still deliver superior speed.
imho there are 2 other advantages where ts-liveview would benefit from http/2, but only on first page load:
- one may save a roundtrip at the very beginning when using http/2 and TLS 1.3
- it should be possible to start another transfer before the first has ended (in the screenshot above: start the js download before the html has finished)
when looking into the future: http/3 should be the winner...
> For the first visit, inlining styles and scripts may be beneficial, but for subsequent visits, having the styles and scripts in separate files may be more cache-friendly (and able to leverage the benefit of http/2?)

I would prefer separate files, since in the real world there is also css and maybe some other js file, for animations or at least for swapping the very light "preview image" for the full image once it has finished loading in the background...
just stumbled upon this detailed answer to the question: does http/2.0 make websocket obsolete? (the answer was constantly updated and improved from 2015 to 2021) https://stackoverflow.com/questions/28582935/does-http-2-make-websockets-obsolete/42465368#42465368
The stackoverflow post also mentions server-sent events (SSE). SSE looks good, but it has a per-domain limit (at most 6 connections), so it won't work well with multiple tabs open. However, it seems this limit is not imposed on http/2 connections?
If SSE is usable, it may be preferable because it is more reliable: with websocket, we need to manually detect and resend dropped messages.
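To illustrate the reliability point: the resend logic is essentially built into the SSE wire format. A sketch of a hypothetical server-side helper that frames one event with an id; after a reconnect the browser's EventSource sends the last seen id back in the `Last-Event-ID` request header, so the server can replay whatever was dropped:

```typescript
// Build one SSE frame: an `id:` line, a `data:` line, and a blank line
// as terminator. The id is what enables resuming: on reconnect the
// browser sends it back via the Last-Event-ID request header.
function sseFrame(id: number, data: unknown): string {
  return `id: ${id}\ndata: ${JSON.stringify(data)}\n\n`
}

console.log(sseFrame(7, { type: 'update' }))
```

With websocket, by contrast, this bookkeeping (message ids, acks, replay) has to be built by hand on top of the protocol.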
hmm...
https://caniuse.com/websockets vs https://caniuse.com/eventsource are these the right ones? details: https://html.spec.whatwg.org/multipage/server-sent-events.html#server-sent-events
if one really thinks about swapping WS for another technology -> maybe one should also look directly into http/3.0 (for direct use, or to prepare an upgrade path)?
https://www.cloudflare.com/learning/performance/what-is-http3/
maybe not yet, but soon ;-) https://caniuse.com/?search=http%2F3
http/2 in node.js: https://www.section.io/engineering-education/http2-in-nodejs/
in this 3-part series, some good details are described, not only about http/3 as the title suggests, but also (indirectly) about http/2: https://www.smashingmagazine.com/2021/09/http3-practical-deployment-options-part3/
last one: great technical detail about the benefits of each http version 1/2/3: https://www.toptal.com/web/performance-working-with-http-3
really the last comment ;-)
DEMO:
one of the fastest sites on first load I could find (the site is not light; using a fast connection, same as for https://github.com/beenotung/ts-liveview/issues/4#issue-1232343236 )
- static
- http/2 and
- TLS v1.3
https://pulsar.apache.org/case-studies/
(of course, measured latency also depends on the distance: browser - server/datacenter)


detailed comparison for the first element:
of course, measured latency also depends on the distance: browser - server/datacenter
but the structure is different between http/1.1 with TLS v1.2 and http/2.0 with TLS v1.3

sorry for spamming (-> maybe move from issues to discussions?), but this may be interesting?!
regarding using http/3 and "fallback" to http/2 to support all browsers: https://stackoverflow.com/questions/61172217/does-http3-quic-fall-back-to-tls-1-2-if-the-browser-doesnt-support-quic
and a minimal server config for even further clarification (yes, it's for nginx and old, but the main principle should stay the same...): https://blog.cloudflare.com/experiment-with-http-3-using-nginx-and-quiche/
=> with this in mind, it may not be too early to think about http/3 :-)
Thanks for the updates. It seems safari doesn't support http/3 yet.
The example of how to push static content with express is helpful. Your example used spdy instead of the http2 module to create the web server; it seems the difference between spdy and http2 is that they use different algorithms to compress the http headers.
With http/2, the server actively pushes the static resources (css/js) into the browser's cache, which seems to give similar performance to inlining the styles and scripts in the http response. The overall performance for the initial page load should be similar?
For subsequent routing, when the user switches to another view, part of the dom is updated and the new view will require new styles. In that case, if the dom update command is sent over websocket while the styles are pushed over the previous http/2 connection, the setup seems trickier than inlining the style with the rest of the dom elements (in json over ws).
The server push behaviour may be beneficial for pushing images referenced by img elements, though.
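For comparison, a sketch of the same push idea with Node's built-in http2 module instead of spdy (again h2c without TLS for brevity; the css content and paths are placeholders). The client helper only records which paths the server pushed:

```typescript
import http2 from 'http2'
import { AddressInfo } from 'net'

const server = http2.createServer()

server.on('stream', (stream, headers) => {
  if (headers[':path'] !== '/') {
    stream.respond({ ':status': 404 })
    return stream.end()
  }
  // reserve a push stream for the stylesheet, then answer the page request
  stream.pushStream({ ':path': '/style.css' }, (err, push) => {
    if (!err) {
      push.respond({ ':status': 200, 'content-type': 'text/css' })
      push.end('body{margin:0}')
    }
    stream.respond({ ':status': 200, 'content-type': 'text/html' })
    stream.end('<link rel="stylesheet" href="/style.css">')
  })
})

// request '/' and record the :path of every resource the server pushes
function fetchWithPush(port: number): Promise<string[]> {
  return new Promise(resolve => {
    const pushedPaths: string[] = []
    const client = http2.connect(`http://localhost:${port}`)
    // the client session emits 'stream' for each PUSH_PROMISE it receives
    client.on('stream', (pushed, requestHeaders) => {
      pushedPaths.push(String(requestHeaders[':path']))
      pushed.resume() // drain the pushed body
    })
    const req = client.request({ ':path': '/' })
    req.resume()
    req.on('end', () => {
      client.close()
      resolve(pushedPaths)
    })
    req.end()
  })
}
```

Note the browser (unlike this test client) is free to reject the push with RST_STREAM if it already has the resource cached.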
In this case, maybe it's better to run the server with multiple versions of http at the same time, and upgrade the connection when supported [1]
Thanks again for the links. It seems switching to http/3 (quic over UDP) would be beneficial in terms of performance even if we don't leverage the server push feature, because with QUIC we get lower handshake overhead and lost packets are resent in a lower layer (hence earlier, when needed) [2]
Also, it seems great that the QUIC server doesn't have to listen on port 443 when the http/1 or http/2 server responds with the header Alt-Svc: h3=":_the_port_".
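The Alt-Svc part can be sketched in a few lines (port 4433 is an arbitrary example; the actual http/3 listener would be a separate QUIC server, not shown). Any http/1 or http/2 response can carry the header, and a supporting browser may then try the advertised endpoint for later requests:

```typescript
import http from 'http'

const server = http.createServer((req, res) => {
  // advertise an HTTP/3 endpoint on UDP port 4433;
  // ma = how long (in seconds) the client may cache this alternative
  res.setHeader('Alt-Svc', 'h3=":4433"; ma=86400')
  res.end('hello')
})
```

This is exactly the mechanism that lets the QUIC side live on a port other than 443.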
good to read that there were some interesting points included :-)
just some advertising for 3 tiny tools: when working in this field and using firefox, there are three addons that help to get an overview of the relevant info regarding the current
- TLS version (shown) and details (on click)
- http version (indicated per color) and
- WS usage (symbol appears when established)

https://addons.mozilla.org/de/firefox/addon/indicatetls/ source: https://github.com/jannispinter/indicatetls
https://addons.mozilla.org/de/firefox/addon/http2-indicator/ source: https://github.com/bsiegel/http-version-indicator
https://addons.mozilla.org/de/firefox/addon/websocket-detector/ source: https://github.com/mbnuqw/ws-detector
examples: using these tools, one easily stumbles across things like:
- your website uses http/1.1 (which leads to this issue...)
- the gif on the editor page seems to be the only resource on your site using "only" TLS v1.2 (all others use v1.3). ps: it's only loaded if the browser cache is deactivated...

Thanks for sharing the tools. I was not aware that the linked image was using an older TLS version.
I'll update the link to use an https image proxy like https://images.weserv.nl. This proxy server uses TLS 1.3.
just another thought on relying on 2 different connection types (http and ws): you have to take care of security and abuse for 2 different stacks. I stumbled across this topic when searching for background on how to protect against DDoS and unfriendly bots...
for ws and security this one was interesting: WebSocket Security: Top 8 Vulnerabilities and How to Solve Them https://brightsec.com/blog/websocket-security-top-vulnerabilities/
I'm considering sse over http2 vs ws.
When using SSE (server-sent events) over http/2, it seems there can be at most 100 concurrent streams shared among all the tabs, which seems plenty. The EventSource in the browser will auto-reconnect when the network fails, and automatically ask for the events missed between two connections.
Even if we need to fall back to http/1 for some browsers, we can work around the 6-connection limit with the storage event [1]
However, it may incur more latency on interactions, as SSE doesn't support sending messages from the client side (so we would need ajax to send events from client to server, with the additional http headers in the request and response)
[1] https://fastmail.blog/historical/inter-tab-communication-using-local-storage/
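The idea in [1] boils down to electing one "master" tab that holds the single connection and relays events to the others via localStorage (the `storage` event fires in all other tabs of the same origin). A rough sketch of just the election part; `claimMaster`, the key name, and the lease length are made up for illustration, and storage is abstracted behind a tiny interface so the logic can run outside a browser (in a page you would pass `window.localStorage`):

```typescript
// Minimal storage interface -- window.localStorage satisfies it.
interface KV {
  getItem(key: string): string | null
  setItem(key: string, value: string): void
}

const LEASE_MS = 5000 // a master must refresh its claim within this window

// Try to become (or stay) master; returns true if this tab now holds
// the lease. A tab that returns true opens the SSE connection and
// rebroadcasts every event through localStorage for the follower tabs.
function claimMaster(store: KV, tabId: string, now: number): boolean {
  const raw = store.getItem('sse-master')
  if (raw) {
    const claim = JSON.parse(raw) as { tabId: string; at: number }
    // someone else holds a fresh lease -> stay a follower
    if (claim.tabId !== tabId && now - claim.at < LEASE_MS) return false
  }
  store.setItem('sse-master', JSON.stringify({ tabId, at: now }))
  return true
}
```

Each tab would call this on an interval; if the master tab closes or stalls, its lease expires and another tab takes over.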
very interesting details and background.
I'm not sure about the fallback and workaround described in [1] from 2012. Maybe modern browsers (2022) throttle background tabs/connections aggressively to save battery on mobile devices - this may break the master-tab solution suggested in [1], or at least add additional complexity... since these things highly depend on the browser type and could change over time... just an impression on this topic: https://news.ycombinator.com/item?id=13471543
Regarding latency, one may need a test... of course there was a good reason why ws was chosen by Phoenix LiveView (and you): latency, but also some others... The questions are:
- is this the best solution for latency today, or could it be matched by other protocols and advanced configurations?
- is it the best solution in this environment (the node world, not the elixir/phoenix world)?
- does its advantage compensate for other things (e.g. additional complexity in setup, maintenance, security...)?
hmm everything is a compromise :-)
just looked through the linked sources again:
http/2 seems to support full bidi streaming, so there seem to be no latency disadvantages (no need for ajax)
as far as I can tell, one needs to use:
- SSE to push messages down: 96% coverage at can-I-use https://caniuse.com/?search=Server%20sent%20event
- the fetch api to send requests up: 93% coverage at can-I-use https://caniuse.com/mdn-api_fetch and there is also a polyfill if one really!? wants/needs more browser support (e.g. IE10) https://github.com/github/fetch
Details:
> Articles like this (linked in another answer) are wrong about this aspect of HTTP/2. They say it's not bidi. Look, there is one thing that can't happen with HTTP/2: After the connection is opened, the server can't initiate a regular stream, only a push stream. But once the client opens a stream by sending a request, both sides can send DATA frames across a persistent socket at any time - full bidi.
> That's not much different from websockets: the client has to initiate a websocket upgrade request before the server can send data across, too.
> ...
> If you need to build a real-time chat app, let's say, where you need to broadcast new chat messages to all the clients in the chat room that have open connections, you can (and probably should) do this without websockets.
> You would use Server-Sent Events to push messages down and the Fetch api to send requests up. Server-Sent Events (SSE) is a little-known but well supported API that exposes a message-oriented server-to-client stream. Although it doesn't look like it to the client JavaScript, under the hood your browser (if it supports HTTP/2) will reuse a single TCP connection to multiplex all of those messages. There is no efficiency loss and in fact it's a gain over websockets because all the other requests on your page are also sharing that same TCP connection. Need multiple streams? Open multiple EventSources! They'll be automatically multiplexed for you.
this and more details in: https://stackoverflow.com/a/42465368
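The SSE-down / fetch-up pattern from the quote can be sketched server-side like this (shown with Node's plain http module so it stays self-contained; behind an http/2 server both directions would share one TCP connection; the paths and broadcast format are illustrative):

```typescript
import http from 'http'

// open EventSource('/events') streams, one per connected client
const clients = new Set<http.ServerResponse>()

const server = http.createServer((req, res) => {
  if (req.url === '/events') {
    // downstream: long-lived SSE stream the browser opens via EventSource
    res.writeHead(200, { 'content-type': 'text/event-stream' })
    res.flushHeaders()
    clients.add(res)
    req.on('close', () => clients.delete(res))
  } else if (req.url === '/send' && req.method === 'POST') {
    // upstream: a plain fetch('/send', { method: 'POST', body: ... })
    let body = ''
    req.on('data', chunk => (body += chunk))
    req.on('end', () => {
      // broadcast the received message to every open SSE stream
      for (const client of clients) client.write(`data: ${body}\n\n`)
      res.end('ok')
    })
  } else {
    res.writeHead(404).end()
  }
})
```

On the browser side this pairs with `new EventSource('/events')` for receiving and `fetch('/send', { method: 'POST', body })` for sending.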
edit:
if this can really be confirmed and it works like this, I was exactly right yesterday with my casual comment "could it be matched by others protocols and advanced configurations" :-)
-> at first, the client has to open the stream, similar to how the client has to initiate the WS upgrade...
(so you could/should keep even the WS indicator badge on your demo, it only has to be renamed to: stream open)
It would be very interesting if we could do bidirectional streaming with http/2 (fetch push from the client, event source push from the server, or a streaming response to the previous fetch).
I'm following the demo at https://web.dev/fetch-upload-streaming. The demo uses TextDecoderStream and stream.pipeThrough(), which don't seem to be supported by the mainstream Firefox release, but it seems possible with other approaches.
If it really works, the performance will improve and the security part would be easier to handle!
Update: I cannot get the client-to-server stream to work with http/1/http/2 yet; the body sent from Firefox and Chrome appears to be the stringified object [object ReadableStream], not the actual stream content.
Maybe it needs to be multiple ajax calls instead of a single streaming ajax call for now?
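One likely cause of exactly this symptom: in Chromium-based browsers, streaming request bodies require the `duplex: 'half'` fetch option (and, in browsers, an http/2 or http/3 connection); without it the ReadableStream body is not streamed, which matches the "[object ReadableStream]" observation above. Firefox did not support upload streaming at the time. A hedged sketch, using Node 18+'s built-in fetch (which supports the same option) so it stays runnable without a browser:

```typescript
import http from 'http'

// tiny echo server so the streamed request body can be observed
const server = http.createServer((req, res) => {
  let body = ''
  req.on('data', chunk => (body += chunk))
  req.on('end', () => res.end(body))
})

async function streamUpload(port: number): Promise<string> {
  // a ReadableStream producing the request body in two chunks
  const body = new ReadableStream({
    start(controller) {
      controller.enqueue(new TextEncoder().encode('chunk1 '))
      controller.enqueue(new TextEncoder().encode('chunk2'))
      controller.close()
    },
  })
  const res = await fetch(`http://localhost:${port}/`, {
    method: 'POST',
    body,
    // without this option the body is not sent as a stream -- in browsers
    // that predate upload streaming, it is coerced to the string
    // "[object ReadableStream]"
    duplex: 'half',
  } as any) // 'duplex' is missing from older TS lib definitions
  return res.text()
}
```

`'half'` means the request body streams up while the response only starts after it ends; full-duplex fetch was not specified at the time.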
hmm does this work?
> After the connection is opened, the server can't initiate a regular stream, only a push stream. But once the client opens a stream by sending a request...

could you open a stream after the connection is established, starting from the client side to the server?
some more background on http/2 bidi streaming: https://web.dev/performance-http2/#streams-messages-and-frames