
Proposal to use pre-rendered content in tessellate-fragment

semonte opened this issue Mar 16 '17 · 3 comments

Currently when the tessellate-fragment gets a request, it

  • fetches sources (HTTP)
  • fetches bundle (HTTP)
  • fetches content (HTTP)
  • renders the fragment by loading the bundle (JS) and content (JSON) into server memory and running the bundle script in a Node virtual machine (see the sketch below).

This consumes server resources, and the same steps are repeated for every single request. With multiple requests per second, this does not scale.
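
A rough sketch of what that render step might look like today (illustrative only, not the actual tessellate-fragment code; it assumes the bundle exposes a render function):

const vm = require('vm');

// Rough sketch only: assumes bundle.js exports a render(content) function.
function renderFragment(bundleSource, content) {
  const sandbox = { module: { exports: {} }, require, console };
  // Evaluate bundle.js in an isolated context (the "Node virtual machine" step).
  vm.runInNewContext(bundleSource, sandbox);
  // Call into the bundle to turn content.json into an HTML string.
  return sandbox.module.exports.render(content);
}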

A proposal to resolve this is to have a pre-rendered fragment storage. tessellate-fragment could (by configuration) look up a pre-rendered fragment by URL and language key. If there is a hit, the HTML can be sent immediately to the client, skipping all of the steps above. For popular URLs the performance gains would be significant and clients would enjoy faster response times.
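
A sketch of that fast path (all names are hypothetical, assuming a Koa-style ctx like the one tessellate-fragment uses):

// Hypothetical lookup in a pre-rendered fragment storage, keyed by URL + language.
async function getFragmentHtml(ctx, prerenderedStore) {
  const lang = ctx.acceptsLanguages()[0] || 'en';
  const key = `${ctx.path}:${lang}`;
  const html = await prerenderedStore.get(key);
  if (html) {
    return html;                  // hit: skip fetching sources/bundle/content and rendering
  }
  return renderTheSlowWay(ctx);   // miss: fall back to the current flow
}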

semonte · Mar 16 '17 09:03

@mfellner Would really like to hear your ideas regarding this.

semonte · Mar 16 '17 09:03

This is certainly a useful optimisation we should implement. Just to be clear: we're simply talking about caching here. Where "simply" means it's actually really complicated, since it's caching.

First let's look at all the steps where we might add a cache:

  1. fetch resources for rendering
    • we can use ETags for caching
    • we can use a faster resource store
  2. render HTML from resources and context
    • we can cache the rendered html

Intuitively, step 1 should be slower, since the network is usually slower than CPU time.

In that case we can optimise by adding ETag-based caching to tessellate-request. The cache itself should probably be in memory or on disk - ideally we'd use a tiered cache with two layers (a networked cache doesn't make sense here). Maybe node-cache-manager could help, for example. Another idea would be to make the storage faster, at least for some resources (e.g. bundles), for instance by using Redis or by moving it onto the same physical node as the fragment-service.
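
A minimal sketch of the ETag idea (the names are illustrative; "cache" can be a Map, a disk store, or a cache-manager tier, and "fetch" is any fetch-style HTTP client such as node-fetch):

async function fetchWithEtag(url, cache, fetch) {
  const cached = await cache.get(url);
  // Send the stored ETag so the resource store can answer 304 Not Modified.
  const headers = cached ? { 'If-None-Match': cached.etag } : {};
  const res = await fetch(url, { headers });
  if (res.status === 304 && cached) {
    return cached.body;                   // not modified: reuse the cached copy
  }
  const body = await res.text();
  await cache.set(url, { etag: res.headers.get('etag'), body });
  return body;
}

A tiered setup would simply put a small in-memory cache in front of a disk or Redis layer behind the same get/set interface.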

Now, server-side rendering with React is slow too. Instead of caching rendered results we could also look into alternative, faster renderers. But caching is probably the bigger win. There are some things to consider here:

  • rendering uses multiple inputs (see the cache-key sketch after this list):
    • bundle.js
    • content.json
    • HTTP request context
    • potentially data fetched by React components
  • cache-invalidation can happen in two ways:
    • client-based (fragment client needs to send invalidation information)
    • server-based (fragment checks for new resources and re-renders or not)
  • cache can be multi-tiered:
    • memory
    • disk
    • redis
  • cache must be thread-safe and scalable
    • now that the fragment runs multiple processes, we might write to the cache from two different processes in parallel
    • memory/disk would be shared by processes on 1 node, redis would be shared by multiple nodes
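
To make the first point concrete, here is a rough idea of what a cache key for rendered HTML could look like (illustrative only; the key has to change whenever any render input changes):

const crypto = require('crypto');

function renderCacheKey(bundleEtag, contentEtag, requestContext) {
  const context = JSON.stringify({
    path: requestContext.path,
    language: requestContext.language,
    // Data fetched by React components at render time is not captured here;
    // such fragments would need a TTL or explicit invalidation instead.
  });
  return crypto
    .createHash('sha256')
    .update(`${bundleEtag}:${contentEtag}:${context}`)
    .digest('hex');
}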

Those are just my initial thoughts; maybe I forgot something important. What is everyone else's opinion?

mfellner · Mar 17 '17 12:03

  2. render HTML from resources and context
    • we can cache the rendered html

The initial proposal is about caching just the rendered content. Caching bundles, content, and sources adds more complexity while providing smaller performance benefits; the big bottleneck is the rendering phase.

I don't think we need to decide which cache will be implemented; the caching capability could be activated via configuration. For us, https://aws.amazon.com/elasticache/ seems like a good choice, but this should not be hardcoded into the application code.

Maybe some code will make my thoughts clearer:

return router.get('fragment', '/fragment', async ctx => {
  const { headers, query } = ctx;
  let html;
  if (cacheOn) {
    html = await getCacheProvider().getOrCreate(headers, query);
  } else {
    html = await renderTheSlowWay(headers, query);
  }
  ...
});

CacheProvider would be an interface for the cache. Deployed services can then use whatever cache fits them.
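
For example, a Redis/ElastiCache-backed implementation could roughly look like this (the interface, names, and key format are only illustrative, assuming an ioredis-style client):

class RedisCacheProvider {
  constructor(client, render, ttlSeconds = 60) {
    this.client = client;   // e.g. an ioredis client pointed at ElastiCache
    this.render = render;   // fallback renderer: (headers, query) => Promise<html>
    this.ttl = ttlSeconds;
  }

  async getOrCreate(headers, query) {
    const key = `fragment:${query.path || ''}:${headers['accept-language'] || ''}`;
    const cached = await this.client.get(key);
    if (cached) {
      return cached;
    }
    const html = await this.render(headers, query);
    await this.client.set(key, html, 'EX', this.ttl);  // expire to bound staleness
    return html;
  }
}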

semonte · Mar 17 '17 14:03