
Faster image loading

tht7 opened this pull request 1 year ago • 3 comments

Checklist

  • [x] I have described what this PR contains

Choose one of the following two options:

    • [x] This PR does not introduce major changes
    • [ ] This PR introduces major changes, and I have consulted @buresdv, @jboi or @mormaer in the Mlem Development Matrix room

Choose one of the following two options:

    • [ ] This PR does not change the UI in any way
    • [x] This PR adds new UI elements / changes the UI, and I have attached pictures or videos of the new / changed UI elements

Pull Request Information

About this Pull Request

  • Made the feed pre-load a lot more posts (50, which seems to be the limit on almost all Lemmy servers)
  • Started loading the next page of a feed 25 posts before the user reaches the end of the feed
  • Created a new app-wide cache of up to 500 MiB (in memory and on disk); see the cache sketch after this list
  • Added a function that pre-loads all images for newly loaded posts into the cache in the background, so the user doesn't experience the annoying image-loading jitters
  • Added a setting that shows how much space the cache is taking and lets the user clear it
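
For context, here is a minimal, self-contained sketch of the cache idea using Foundation's `URLCache`. The names (`MediaCache`, `preload`) and the use of `URLCache` itself are illustrative assumptions, not necessarily what this PR actually does:

```swift
import Foundation

// App-wide cache sized at 500 MiB in memory and 500 MiB on disk.
enum MediaCache {
    static let shared = URLCache(
        memoryCapacity: 500 * 1024 * 1024, // 500 MiB
        diskCapacity: 500 * 1024 * 1024,   // 500 MiB
        directory: FileManager.default
            .urls(for: .cachesDirectory, in: .userDomainMask)
            .first?
            .appendingPathComponent("MediaCache")
    )

    /// Warm the cache for a batch of image URLs in the background so the
    /// images are already local by the time their posts scroll on screen.
    static func preload(_ urls: [URL]) {
        let config = URLSessionConfiguration.default
        config.urlCache = shared
        config.requestCachePolicy = .returnCacheDataElseLoad
        let session = URLSession(configuration: config)
        for url in urls {
            Task(priority: .background) {
                _ = try? await session.data(from: url)
            }
        }
    }

    /// Values backing the new settings entry: show usage, allow clearing.
    static var diskUsageBytes: Int { shared.currentDiskUsage }
    static func clear() { shared.removeAllCachedResponses() }
}
```

`diskUsageBytes` and `clear()` are what a settings screen like the one in the screenshot below would read and call.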

All these changes made scrolling in the app a lot smoother! From my testing, though, I think we should change point 2 to start loading new posts 49 posts in advance: that way we start out by loading a few pages up front, which gives the user a much, much smoother experience. A sketch of the trigger logic follows.
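
A minimal, self-contained sketch of that trigger logic in SwiftUI. All names here (`PostTracker`, `prefetchThreshold`, the placeholder data) are illustrative assumptions, not the PR's actual implementation:

```swift
import SwiftUI

struct Post: Identifiable, Equatable {
    let id: Int
    let title: String
}

@MainActor
final class PostTracker: ObservableObject {
    @Published var posts: [Post] = []
    private var page = 0
    private var isLoading = false

    /// Fetch the next page of the feed; guards against duplicate in-flight loads.
    func loadNextPage(limit: Int = 50) async {
        guard !isLoading else { return }
        isLoading = true
        defer { isLoading = false }
        // The real Lemmy API call would go here; placeholder data for the sketch.
        posts += (0..<limit).map { Post(id: page * limit + $0, title: "Post \(page * limit + $0)") }
        page += 1
    }
}

struct FeedView: View {
    @StateObject private var tracker = PostTracker()
    private let prefetchThreshold = 25 // posts remaining before we fetch more

    var body: some View {
        List(tracker.posts) { post in
            Text(post.title)
                .onAppear {
                    // When a post within `prefetchThreshold` of the end becomes
                    // visible, kick off the next page fetch in the background.
                    if let index = tracker.posts.firstIndex(of: post),
                       index >= tracker.posts.count - prefetchThreshold {
                        Task { await tracker.loadNextPage() }
                    }
                }
        }
        .task { await tracker.loadNextPage() } // initial load
    }
}
```

Raising the threshold from 25 to 49, as suggested above, only changes the `prefetchThreshold` constant.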

Screenshots and Videos

Added a new button in the settings (screenshot attached)


tht7 · Jun 21 '23 15:06

Thanks for another great contribution 👍

I'll try and get a review done on this later. It's definitely something we should be looking to integrate; my only immediate hesitation is bumping the feed limit up as high as 50. At the moment I think we usually fetch ~10, so a smaller bump (plus your image pre-loading) would likely give much better scrolling UX without as much strain on slower connections?

mormaer · Jun 21 '23 16:06

Something to consider is loading a smaller number of posts on the first load and (pre-)loading a larger amount after that; this is the way Apollo does it. I think it's a pretty smart approach, since you get the first few posts quickly, which makes everything feel more responsive, and you also get the benefit of not having to send as many API requests. Then you can also decide whether you want to immediately pre-load the next set of posts, so there's a bit of time to fetch them on a slower connection.
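
A minimal sketch of that idea; the `fetch(offset:limit:)` signature is a stand-in, not Mlem's or Lemmy's actual API:

```swift
// Load a small first batch so posts render quickly, then immediately start
// pre-fetching a much larger second batch while the user reads the first one.
func loadFeed(fetch: @escaping (_ offset: Int, _ limit: Int) async -> [String]) async -> [String] {
    let firstBatch = await fetch(0, 10)   // small first page: fast time-to-first-post
    async let secondBatch = fetch(10, 50) // large second page, fetched in the background
    return firstBatch + (await secondBatch)
}
```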

Emiliaaah · Jun 21 '23 20:06

I completely understand your concerns, but I have a few counterpoints (and hopefully some receipts to back them up 😉).

You're correct that under stricter network conditions loading will take slightly longer, but the time per post is significantly improved, which leads to a much nicer experience in the end. Plus, I start loading the next batch of posts well before you reach the actual end of the feed, giving the network enough time to catch up.

Here are a few measurements that I’ve captured:

Very good network conditions:

  • 10 posts loaded in 1.350360666 seconds (0.1350360666s per post)
  • 50 posts loaded in 0.538415459 seconds (0.01076830918s per post)
  • 50 posts loaded in 0.63201625 seconds (0.012640325s per post)
  • 50 posts loaded in 0.466469417 seconds (0.00932938834s per post)
  • 50 posts loaded in 0.411641667 seconds (0.00823283334s per post)
  • 50 posts loaded in 1.064755458 seconds (0.02129510916s per post)
  • 50 posts loaded in 1.038863958 seconds (0.02077727916s per post)
  • 50 posts loaded in 0.482592583 seconds (0.00965185166s per post)

(This capture illustrates that under reasonable network conditions the batch size makes almost no difference in loading time, but it makes a huge user-experience difference because we simply need to load far fewer batches.)

3G:

  • 10 posts loaded in 1.925274167 seconds (0.1925274167s per post)
  • 50 posts loaded in 1.76143075 seconds (0.035228615s per post)
  • 50 posts loaded in 2.635654958 seconds (0.05271309916s per post)
  • 50 posts loaded in 5.396175125 seconds (0.1079235025s per post)
  • 50 posts loaded in 1.694496958 seconds (0.03388993916s per post)
  • 50 posts loaded in 2.154538083 seconds (0.04309076166s per post)
  • 50 posts loaded in 5.186091709 seconds (0.1037218342s per post)
  • 50 posts loaded in 5.500505959 seconds (0.1100101192s per post)
  • 50 posts loaded in 4.257322167 seconds (0.08514644334s per post)

(Here we do see that loading bigger batches usually takes longer overall, but not per post, so when the loading is done you have more to scroll through before you need to load again. And because the batch size here is the largest we can make it, I also start preloading a lot sooner, which leads to silky-smooth scrolling even under fairly terrible network conditions.)
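
For anyone who wants to reproduce numbers like these, a minimal sketch of how they can be captured (iOS 16+ for `ContinuousClock`; `loadPosts(limit:)` is a stand-in for the real fetch, not an actual Mlem function):

```swift
import Foundation

// Stand-in for the real Lemmy fetch; simulates network latency.
func loadPosts(limit: Int) async throws -> [String] {
    try await Task.sleep(for: .milliseconds(500))
    return (0..<limit).map { "post \($0)" }
}

let clock = ContinuousClock()
let limit = 50
let elapsed = try await clock.measure {
    _ = try await loadPosts(limit: limit)
}
print("\(limit) posts loaded in \(elapsed) (\(elapsed / limit) per post)")
```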

A personal note from my own experience: I've been a backend engineer for about five years now (maybe even more, but who's counting).

Over that time I've developed an approach of “data generosity”: whenever I interact with the frontend engineers at my company, I encourage them to ask for larger batches if they know they're going to use the data.

Every HTTP request has its overheads: the TLS handshake, authentication, hitting the database, and, if the service is written in another language, parsing and formatting the responses as well.

By asking for larger batches of data, the server performs far fewer of these menial tasks, and we can focus on what's really important: the data itself!
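
For illustration (the numbers here are hypothetical, not measurements): with a fixed ~100 ms of per-request overhead, loading 500 posts in batches of 10 costs 50 requests and ~5 s of pure overhead, while batches of 50 cost 10 requests and only ~1 s, before a single byte of post data is even transferred.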

I urge you to clone this branch and give it a try: how smoothly and nicely the feed scrolls as the data batches get larger is a transformative experience.

It is so nice that I consider the TestFlight build of Mlem without this patch barely usable; when I run Mlem for personal use, I crank all the numbers up to 50 to preload as much as possible.

Of course, I leave the decision in your hands. I've even experimented with making the first batch smaller (back to the original 10 posts), only to find that it makes no significant difference, except for slowing things down a bit.

tht7 · Jun 22 '23 04:06