
Slow response times on Heroku

allthesignals opened this issue 5 years ago · 2 comments

[screenshot: Heroku metrics showing high memory and swap usage]

Hi there – I'm looking for some guidance on improving this.

One of the benefits of fastboot is faster response times, but I've seen some routes take almost 7 seconds to load from the server. Is this not a fastboot issue? I am using this app server as such:

// frontend-server.js

const FastBootAppServer = require('fastboot-app-server');

const server = new FastBootAppServer({
  distPath: 'dist',
  gzip: true, // Optional - Enables gzip compression.
  host: '0.0.0.0', // Optional - Sets the host the server listens on.
  chunkedResponse: true, // Optional - Opt-in to chunked transfer encoding,
                         // sending the head, body, and any shoeboxes in
                         // separate chunks. This helps most when the app
                         // transfers a lot of data in the shoebox.
});

server.start();

My Procfile:

web: node --optimize_for_size --max_old_space_size=460 --gc_interval=100 frontend-server.js
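One way to start untangling these variables (a sketch, not something from this thread) is to log process memory from inside frontend-server.js, so spikes show up next to Heroku's router timing lines in `heroku logs --tail`. The helper name below is hypothetical:

```javascript
// Hypothetical helper: snapshot the current process memory in megabytes
// so it can be logged alongside slow-response entries in the Heroku logs.
function memorySnapshot() {
  const { rss, heapUsed, heapTotal } = process.memoryUsage();
  const mb = (bytes) => Math.round(bytes / 1024 / 1024);
  return `rss=${mb(rss)}MB heapUsed=${mb(heapUsed)}MB heapTotal=${mb(heapTotal)}MB`;
}

// Log every 30 seconds; unref() so the timer doesn't keep the process alive
// on its own.
setInterval(() => console.log('[mem]', memorySnapshot()), 30000).unref();
```

If `rss` keeps climbing toward the `--max_old_space_size` ceiling while traffic is steady, that points at a leak rather than dyno sizing.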

There are a lot of variables I'm struggling to isolate. It could be my application code leaking memory, an addon leaking memory, a Heroku dyno size that's too small, the particular buildpack I'm using, the way I've set up my FastBootAppServer initialization, or other tooling I might need.

I'm able to get this to work reasonably well locally, but I am spinning my wheels on the deployment step.

allthesignals · Jul 16 '19 20:07

Yeah, with memory usage that high (and swap usage in particular), response times will definitely skyrocket. It could be a leak that manifests itself only in FastBoot, but I'd check how the usage looks in the browser first.

Here's the memory usage of my production FastBoot app (with similar FastBootAppServer and node options) for comparison:

"One of the benefits of fastboot is faster response times"

Unless you do some kind of caching, FastBoot doesn't make responses any faster. Slower, if anything, because it adds one more hop between the browser and your backend server.
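To make the caching point concrete, here's a minimal sketch of a per-URL TTL cache as Express-style middleware. The helper is hypothetical (not part of fastboot-app-server), and the wiring assumes your version of fastboot-app-server exposes a `beforeMiddleware` hook; check your installed version:

```javascript
// Hypothetical helper: serve repeat requests for the same URL from memory
// instead of re-rendering in FastBoot. Naive by design (no invalidation,
// unbounded growth) - an experiment, not a production cache.
function cachingMiddleware({ ttlMs = 60000, now = Date.now } = {}) {
  const cache = new Map();
  return function (req, res, next) {
    const hit = cache.get(req.url);
    if (hit && now() - hit.at < ttlMs) {
      res.send(hit.body); // fresh cache hit: skip FastBoot entirely
      return;
    }
    // Wrap res.send so the rendered body is captured on its way out.
    const originalSend = res.send.bind(res);
    res.send = (body) => {
      if (res.statusCode === 200) cache.set(req.url, { body, at: now() });
      return originalSend(body);
    };
    next();
  };
}

// Possible wiring (assumes a `beforeMiddleware` option):
//
// const server = new FastBootAppServer({
//   distPath: 'dist',
//   beforeMiddleware(app) { app.use(cachingMiddleware({ ttlMs: 60000 })); },
// });
```

A reverse proxy like Varnish or a CDN in front of the app server achieves the same effect without holding the cache in the Node process's own (already constrained) heap.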

CvX · Jul 17 '19 09:07

@CvX thank you for that context! It's good to see an example of typical memory usage with this particular brew of technologies.

One follow-up question: what are the pros and cons of putting the build output on a CDN, using the S3 notifier and downloader for fastboot-app-server? Faster deployment turnaround? If your app serves many different parts of the world (mine serves only the NYC area), maybe a CDN makes sense there because it serves assets closer to the requesting client?

And yes, my assumption about what fastboot gets me with respect to response time was wrong – it's good to know, though, what fastboot does and doesn't cover. I've heard people use Varnish for caching? I think what needs to happen is that I isolate the memory leaks and performance issues I'm seeing in my app on the frontend. That has been a labyrinthine experience, but with more time on it I'll hopefully narrow it down.

"I'd check how the usage looks in the browser first"

What approach would you take to this? Heap snapshots during tests? Running a performance profiler, improving component load, then benchmarking against more heap snapshots? I've been working through this, but the process feels a bit like reading tea leaves (because I'm not good at it yet).

Random thoughts:

  • I wonder if deferring slower components to the client would help matters (seems hacky)
  • My app has hundreds of thousands of possible dynamic route segments, so there will always be a slower-than-usual initial load, even with caching. There's no way around that short of isolating and improving slow components OR doing some wild pre-rendering with prember.
  • Thanks for being a sounding board!

allthesignals · Jul 17 '19 14:07