Faster page loading
When adding the new features view, I added a bunch of large images to the main page. When someone opens the website, they can only see a maximum of four graphics, but the browser loads 13 images plus the video. Furthermore, the images got a little bigger...
This is quite annoying, and fast page loading is important to Google and other search engines.
There are different methods to solve this problem:
1. Load hidden images when page loading is complete
Not that good for people with little data volume (on mobile), but it's better than nothing and ensures that all images are ready when they are needed (see the sketch below).
2. Load images when slides get opened
Means that the images will be loaded later, which could cause some page jumping. The images are at the top (or left side) of the page when a slide is opened. To prevent this, we'd need placeholders.
3. Resize images for main page
Means that we'd need to keep both image versions (full size for the press kit), but as we could resize the images to half of their current width, this is also effective, good for the viewer, and causes less traffic.
4. Lazy loading
Loads images when they are about to enter the viewport. This makes no sense here since the features are almost at the top of the page.
I'd prefer to implement a combination of 1. and 3. What do you think @theScrabi and @TheAssassin?
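For option 1, here's a minimal sketch of how the deferred loading could work (the `data-src` attribute and the `deferred` class are just illustrative names, nothing on the page uses them yet): the hidden images keep their real URL in `data-src`, and a small script fills in `src` once the page has finished loading.

```js
// Defer the hidden feature images: start their download only after the
// visible page content has finished loading (the "load" event).
window.addEventListener('load', function () {
  var deferred = document.querySelectorAll('img.deferred[data-src]');
  Array.prototype.forEach.call(deferred, function (img) {
    img.src = img.getAttribute('data-src'); // triggers the actual download
    img.removeAttribute('data-src');
  });
});
```

The markup would then be something like `<img class="deferred" data-src="assets/features/xyz.png">` (placeholder path) instead of a plain `src`.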
Sounds good. You could try to reduce the quality of the pictures by using a lossy format with a higher than average compression factor, e.g. JPEG with a 75% quality setting.
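As an illustration only (the paths are made up), re-encoding the screenshots at 75% JPEG quality could be scripted, e.g. with the sharp npm package:

```js
// Re-encode source images as 75%-quality JPEG using the sharp package.
const sharp = require('sharp');

const sources = ['assets/img/feature-01.png', 'assets/img/feature-02.png']; // placeholder paths

sources.forEach((file) => {
  sharp(file)
    .jpeg({ quality: 75 })                    // lossy re-encode as suggested above
    .toFile(file.replace(/\.png$/, '.jpg'))
    .catch((err) => console.error(file, err));
});
```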
We should also prevent the mobile version from loading all images, because they only get displayed in a "separate" view. I'll do this next week.
If the responsive layout doesn't show the images (`display: none` etc.), the pics aren't loaded IIRC. I think that can be fixed with just some CSS-fu.
I just tested it with the latest FF and Chromium, but both loaded the images regardless of whether the image or its container was hidden.
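Since hiding the images via CSS apparently doesn't stop the download, one possible script-side workaround (reusing the illustrative `data-src` idea from the sketch above; the breakpoint value is a placeholder) is to only fill in `src` when the viewport is wide enough to actually show the slides:

```js
// Skip the image downloads entirely on narrow (mobile) viewports.
window.addEventListener('load', function () {
  if (!window.matchMedia('(min-width: 768px)').matches) {
    return; // mobile layout uses its own view, leave the images unloaded
  }
  var deferred = document.querySelectorAll('img.deferred[data-src]');
  Array.prototype.forEach.call(deferred, function (img) {
    img.src = img.getAttribute('data-src');
    img.removeAttribute('data-src');
  });
});
```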
Maybe also statically GZip/Brotli compress the files, depending on how they are hosted…
Also, you do not use HTTP/2; this makes a real performance difference.
Yes, we definitely have to use GZip. @TheAssassin When you have time, can you please take a look at it?
I also thought about merging some of the JS scripts to reduce the number of requests. But my time is quite limited right now, so I can only focus on design improvements and new features or optimizations. In other words: help is always welcome :)
I also thought about merging some of the JS scripts to reduce the number of requests
This is unnecessary when you set up HTTP/2: thanks to multiplexing, many small files often load even faster than one big file.
Also, you do not use HTTP/2; this makes a real performance difference.
Unfortunately, some other websites I maintain cannot support it yet. Believe it or not, some outdated distros and some other outdated OSes don't support it, even with a modern browser. Every time I tried to activate it secretly, those people started complaining.
nginx should already auto-compress the files on the fly, by the way.
HTTP/2 is backwards compatible, so when the client does not support it, it should fall back to HTTP/1.1.
Also, AFAIK http2 can be configured per domain in nginx, so if needed, just enable it for this site.
The only thing which may be outdated is the server software. :wink:
http2 can be configured per domain in nginx
That's unfortunately not possible. One can configure it per port, but not per domain. This is due to a technical restriction in the HTTP/2 protocol. If there was something like SNI (which allows per-domain certs in TLS), then this might be possible.
The main problem is a TLS protocol version issue. HTTP/2 supports only a couple of TLS configurations. If outdated clients access the page, they get presented that list, and then show cryptic errors (not even "TLS configuration not supported" or something that could help you debug it, but something totally unrelated, IIRC).
If it were only my decision, I would make the switch immediately. HTTP/2 is slightly faster too, IMO.
I hate supporting insecure legacy systems...
HTTP/2 supports only a couple of TLS configurations.
Likely not. I've never heard of such a thing. Your TLS config shouldn't be outdated anyway, but that's another point.
I hate supporting insecure legacy systems...
Don't do it. :wink: Basically, maybe that Newpipe site needs to be decoupled from that legacy stuff if that support is needed.
HTTP/2 doesn't permit some known-insecure (but not critically insecure) TLS configurations. Those are of course not the defaults. Those other people I mentioned did find issues on older Linux distros (and some other OSes, I don't have the list at hand, though).
Basically, maybe that Newpipe site needs to be decoupled from that legacy stuff if that support is needed.
Sure. Pay me monies, and I'll get more infrastructure.
No, seriously: I will consider upgrading to HTTP/2 again. I don't like to break OWASP A9 (using known vulnerable components) permanently. But as my time is limited, please be patient.
HTTP/2 doesn't permit some known-insecure (but not critically insecure) TLS configurations.
Can you provide a reference for that? I'd still say it only requires HTTPS.
But as my time is limited, please be patient.
I'll try to find it again in my huge rubbish heap of links.
I just found compress.html. In a small test it reduced the size of all HTML files from 430 kB to 329.4 kB. Just by removing whitespace, our main page shrank from 40.5 kB to 23.5 kB.
I'm not sure whether we should minimize everything, but the Jekyll Assets plugin also sounds promising. It can compress JS, CSS and images. Although it'd require a complete refactor, I think it's worth the effort. The plugin would also make it easy to use responsive images, which would reduce the image file sizes further. I haven't tested it yet because there are other things to do first. But if you agree, @TheAssassin, I can implement compress.html.
@TobiGr I don't think it'll make a huge difference. With the gzip compression the nginx server applies to the HTTP stream, whitespace shouldn't be an issue. It'd only increase efficiency by removing stuff like comments etc., or unused/unnecessary tags.
Compressing our JavaScript and CSS code makes sense, too. If it isn't too much work for you, we could use this.
Although, to be honest, we can just apply such optimizations with easier-to-use CLI tools in the Docker file. That's IMO a lot easier to implement than fiddling with Jekyll plugins. But it's up to you to make these decisions.
It'd only increase efficiency by removing stuff like comments etc., or unused/unnecessary tags.
That happens too.
Although, to be honest, we can just apply such optimizations with easier-to-use CLI tools in the Docker file. That's IMO a lot easier to implement than fiddling with Jekyll plugins. But it's up to you to make these decisions.
Yes, I'd prefer this. Feel free to do it when you find time for it.
Sure thing. Do you have suggestions? Otherwise, I'll try the tools I used to use a few years ago.
Which tools did you use?
The common ones, e.g. https://github.com/mishoo/UglifyJS. I don't recall what I used for CSS, though.
I've worked with UglifyJS too, so this sounds good to me. What do you think of https://github.com/ben-eb/cssnano and https://github.com/css/csso?
I don't think there's a big difference between the two tools. I'd prefer the one with fewer dependencies, so that the container build isn't slowed down too much.
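To make the idea concrete, here's a minimal Node sketch of such a build step, assuming the uglify-js and cssnano (with postcss) npm packages; the file names are placeholders, and the same could be done with the tools' CLI wrappers from the Dockerfile:

```js
// Minify one JS file with uglify-js and one CSS file with cssnano/postcss.
const fs = require('fs');
const UglifyJS = require('uglify-js');
const postcss = require('postcss');
const cssnano = require('cssnano');

// JavaScript: strips whitespace/comments and shortens local names.
const js = fs.readFileSync('assets/js/main.js', 'utf8');      // placeholder path
const result = UglifyJS.minify(js);
if (result.error) throw result.error;
fs.writeFileSync('assets/js/main.min.js', result.code);

// CSS: cssnano runs as a PostCSS plugin.
const css = fs.readFileSync('assets/css/style.css', 'utf8');  // placeholder path
postcss([cssnano({ preset: 'default' })])
  .process(css, { from: undefined })
  .then((out) => fs.writeFileSync('assets/css/style.min.css', out.css));
```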