drops-8
Add support for BigPipe
BigPipe was added to Drupal 8.1 core but is currently not supported by Pantheon.
Drops-8 and Pantheon Varnish are compatible with BigPipe. Pantheon's edge routing layer is still buffering output, though. This issue is being tracked internally.
It has been six months since the last update. Any news?
Any update in the last 9 months?
BigPipe is officially stable in Drupal 8.3.0. We also have the new Sessionless BigPipe module, which also requires unbuffered output in our edge routing layer to work.
These modules are important and should be supported on Pantheon. We do not have an announced support date for this work yet, though.
While Drupal sees major gains from the current implementations of BigPipe, they're actually kind of mediocre compared to what's possible, and that's where my mind is at.
It also won't be very long before HTTP/2 is available platform-wide (cough already running on some sites cough and available to all others if you throw Cloudflare or Fastly in front), which actually does what BigPipe wants (server push of additional assets) in a semantic way rather than a clever abuse of HTTP/1.1 chunking and streaming.
This approach should win against HTTP/1.1 BigPipe streaming several times over:
- HTTP/2 allows each asset to have its own cache control at the edge (Varnish), rather than having Drupal stream things out of its object cache.
- These additional assets can come from a CDN POP near the user rather than from the origin.
- Even if you use ESI to stream the assets out of something like Varnish (and avoid Drupal having to stream them), ESI must stream them sequentially over one connection. So, any asset that misses the cache delays the loading of later ones. HTTP/2 push happens in parallel, so assets become visible without any cache miss bottlenecking the others. Moreover, the browser can actually cache these assets on the client side, something that is opaque with ESI.
- Other assets, like CSS, JS, and images, can also be pushed to the client before the browser even knows it needs them. This isn't a BigPipe thing, but it's more commonly the bottleneck on initial page loads than Drupal's page skeleton delivery (though accelerating the skeleton page delivery does reduce the time before the browser knows it needs to request those assets with HTTP/1.1).
In short, BigPipe with HTTP/1.1 streaming has put a lot of good infrastructure into Drupal's core, but it's a mediocre way to optimize, whether your goal is (1) time to basic page usability, (2) time to most content being available, or (3) time to pages rendering on further interaction with the site.
We should think about how to "do it right" for logged-in users, which is the big benefit of BigPipe. As it stands, our Varnish implementation gives a big ol "nope" when there's an active session.
One good advantage of BigPipe over HTTP/2 is current and older browser support: https://caniuse.com/#search=http2. I can see why Pantheon would like to support HTTP/2 instead of BigPipe, but this decision would de facto leave a chunk of internet users unsupported.
USA numbers as of May 31, 2017: 84.5% fully supported + 10.89% partially supported (source: http://caniuse.com/), probably with a standard deviation of +/- 5%, as with any survey-based data gathering.
Now that BigPipe is enabled by default on Drupal 8.5, is that something we should manually disable on Pantheon sites, or can it remain enabled without causing problems?
I disable BigPipe when running the Behat tests on the various drops-8 modules, as BigPipe interferes with Behat's ability to follow the redirects that happen on progress bar pages.
For other uses, leaving BigPipe on should not cause any adverse effects. However, since it will not provide any benefit either, you might as well uninstall the module on your Pantheon sites.
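If you do decide to uninstall it, a typical invocation might look like the following. This is a sketch, not an official Pantheon recipe: `big_pipe` is the core module's machine name, `my-site.dev` is a placeholder, and it assumes Drush 9+ and Pantheon's Terminus CLI.

```shell
# Uninstall the BigPipe module locally via Drush:
drush pm:uninstall big_pipe -y

# Or remotely on a Pantheon environment via Terminus:
terminus drush my-site.dev -- pm:uninstall big_pipe -y
```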
@davidstrauss In your 2017 comment you wrote:
It also won't be very long before HTTP/2 is available platform-wide (cough already running on some sites cough and available to all others if you throw Cloudflare or Fastly in front), which actually does what BigPipe wants (server push of additional assets) in a semantic way rather than a clever abuse of HTTP/1.1 chunking and streaming.
This is inaccurate.
BigPipe in Drupal is not designed for making assets load faster.
It's designed to allow Drupal to serve already-rendered content to the browser immediately, and while the browser renders that (and fetches CSS, JS, images, …), Drupal can simultaneously continue rendering HTML (and doing whatever computing/data fetching is necessary for it) on the server. That is why it uses chunked transfers: to send HTML content to the client as fast as possible, not assets.
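The chunked-delivery idea can be sketched as a generator that yields the cheap page skeleton first, then streams a replacement chunk for each placeholder as its (slow) render finishes. This is an illustrative Python analogy, not Drupal's actual API; the attribute and function names are made up.

```python
def render_bigpipe(skeleton, placeholders):
    """Yield HTML chunks: skeleton first, then one replacement per placeholder.

    `placeholders` maps a placeholder id to a (possibly slow) render callable.
    The browser can start painting the skeleton while the server keeps rendering.
    """
    # 1. Flush the cheap, already-rendered skeleton right away.
    yield skeleton
    # 2. Render each expensive block and stream a replacement chunk after it.
    for pid, render in placeholders.items():
        html = render()  # slow part: database queries, HTTP calls, etc.
        yield f'<script data-big-pipe-replace="{pid}">{html}</script>'

chunks = list(render_bigpipe(
    '<html><body><div data-placeholder="cart"></div></body></html>',
    {"cart": lambda: "<div>3 items</div>"},
))
# The skeleton is always the first chunk; block markup trails behind it.
```

With buffering at the edge, all chunks arrive at once and the benefit disappears, which is exactly the issue this thread is about.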
(Also, by now, HTTP/2 Server Push is effectively dead, because it didn't actually end up working in the real world. But just like you, I was hoping that it would make all the difference for assets, and we'd have been able to remove complex bundling/aggregation logic in Drupal! Chrome removed it in 2020.)
Fast forward to 2023, and PHP 8.1 is required by Drupal 10 and comes with Fibers. An issue to update BigPipe in Drupal core to use Fibers is RTBC.
This will mean that rather than rendering that placeholdered content in sequence, it can be rendered in parallel on the server (well, the chunks are still rendered sequentially, but the I/O that blocks that rendering can happen in parallel).
Time to reconsider? 😊
@davidstrauss
rather than having Drupal stream things out of its object cache. These additional assets can come from a CDN POP near the user rather than from the origin.
Just to follow up on Wim's comment: this is just not how Drupal core works at all. Drupal's assets are always served from the filesystem; they're never streamed from the object cache. BigPipe slightly modifies when particular script or stylesheet link tags get rendered, but it doesn't fundamentally change core's asset rendering logic at all.
The Fibers patch, once we've got some extra pieces in place, will mean that even with edge cache buffering, there is likely to still be some benefit to sites from running the BigPipe module.
For an over-simplified example: if you have four blocks that each do something that can be handled async (an HTTP request, a database query) and take 3 seconds each, then instead of taking 12 seconds to get through them sequentially, they could potentially be done in parallel in 3 seconds. That would allow the page to be served 9 seconds faster even with BigPipe's streaming logic being hamstrung by Pantheon. Actual BigPipe would take the TTFB down closer to milliseconds with or without the Fibers patch, but it makes https://github.com/pantheon-systems/documentation/pull/3596 even less correct once we get to Drupal 10.2 or 10.3, even without supporting it properly. Edited to add: it's possible we'll eventually support Fibers in the non-BigPipe rendering implementations too, but for now that was the obvious place to start.
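The arithmetic above can be sketched with Python's asyncio standing in for PHP Fibers (an analogy only, not Drupal code; block names and delays are invented). Four I/O-bound blocks whose waits overlap finish in roughly the time of the slowest one, not the sum.

```python
import asyncio
import time

async def render_block(name, delay):
    # Stand-in for an I/O-bound block: an HTTP request or a database query.
    await asyncio.sleep(delay)
    return f"<div>{name}</div>"

async def render_page(blocks):
    # Start every block at once; total wait is ~max(delay), not sum(delay).
    return list(await asyncio.gather(
        *(render_block(name, delay) for name, delay in blocks)
    ))

blocks = [("menu", 0.1), ("cart", 0.1), ("feed", 0.1), ("ads", 0.1)]
start = time.monotonic()
html = asyncio.run(render_page(blocks))
elapsed = time.monotonic() - start
# Sequentially these four waits would total ~0.4s; overlapped, ~0.1s.
```

Scale the delays up to 3 seconds each and you get the 12-seconds-down-to-3 figure from the comment above.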
We should think about how to "do it right" for logged-in users, which is the big benefit of BigPipe.
This is key for us. Our move to Pantheon has noticeably slowed down the perceived speed of loading of pages for authenticated users - I think because we no longer get the advantage of BigPipe for them. (Noticeably in the sense that users have complained...!). It would be great if there was a solution for logged in users that leveraged all this existing cleverness to lower TTFB and FCP.
We want to be able to defer the loading of slow content blocks while maintaining page cache.
@stephencapellic https://www.drupal.org/project/big_pipe_sessionless supports that.