
Cache correctly-sourced DAOHaus membership information

dysbulic opened this issue 3 years ago · 6 comments

What would you like to be added?

Currently, DAOHaus memberships are associated with the player record by way of a remote schema.

This means that every time a player record is loaded, The Graph is queried three times for the Polygon, xDAI, and mainnet DAO memberships.

This is not only time-consuming; as of a release several months ago, DAOHaus no longer puts the name of the DAO on chain. It is instead stored in an extensive metadata file available elsewhere.

A stale-while-revalidate cache triggered by the same logic as the Ceramic profile cache (dirty a user when their profile details page is loaded, or every four days at the longest) would likely work.
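For illustration, here's a minimal sketch of that stale-while-revalidate check. The `CachedMemberships` shape, its field names, and the in-memory `Map` are assumptions made for the example, not the actual schema:

```typescript
// Hypothetical sketch of the proposed stale-while-revalidate check.
// The cache shape and field names are illustrative assumptions.

const MAX_AGE_MS = 4 * 24 * 60 * 60 * 1000; // four days, the longest allowed staleness

interface CachedMemberships {
  playerId: string;
  memberships: string[]; // DAO addresses across Polygon, xDAI, and mainnet
  lastCheckedAt: Date;
}

// Serve whatever is cached immediately; flag the record for a background
// refresh when it is stale or the profile details page was just loaded.
function getMemberships(
  cache: Map<string, CachedMemberships>,
  playerId: string,
  profilePageLoad = false,
): { memberships: string[]; needsRefresh: boolean } {
  const entry = cache.get(playerId);
  if (!entry) return { memberships: [], needsRefresh: true };

  const stale = Date.now() - entry.lastCheckedAt.getTime() > MAX_AGE_MS;
  return {
    memberships: entry.memberships,
    needsRefresh: stale || profilePageLoad,
  };
}
```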

Why is this needed?

Page loads are currently abysmally slow.

dysbulic avatar Dec 27 '21 05:12 dysbulic

@dysbulic Should I work on this based off of the glaze PR branch?

alalonde avatar Dec 29 '21 15:12 alalonde

> @dysbulic Should I work on this based off of the glaze PR branch?

I just saw this comment. Yes. We rebased yesterday and it would be nice not to have to do it again.

dysbulic avatar Jan 05 '22 13:01 dysbulic

The following issue is part of this: #646

lucidcyborg avatar Jan 05 '22 15:01 lucidcyborg

Did some research into this, I see a few approaches:

  1. Implement a one-off caching mechanism for this particular remote fetch. This would involve adding tables, e.g. `dao` and `dao_player`, to populate as a cache when fetching from DAOHaus. We would then return this cache for most player GraphQL fetches, and only update the cache from DAOHaus when a player's page is explicitly fetched, or every four days via a cron trigger.
  2. Look into a general-purpose caching mechanism as suggested here. This would likely involve spinning up a couple of additional backend services (e.g. nginx, Redis), which would add complexity to our deployment but enable more flexible caching for any of our data going forward.

Were I to move forward with the first option, I would want to first create the additional tables (a possible shape is sketched below), including the changes suggested in #1142.
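As a rough sketch only, one plausible migration for those cache tables, written against the `pg` client; every column name here is an illustrative assumption and would need to be reconciled with #1142 (and `gen_random_uuid()` assumes Postgres 13+ or the pgcrypto extension):

```typescript
import { Client } from 'pg';

// Hypothetical migration for the cache tables described in option 1.
// Column names are illustrative, not a confirmed schema.
async function createDAOCacheTables(connectionString: string): Promise<void> {
  const client = new Client({ connectionString });
  await client.connect();
  try {
    await client.query(`
      CREATE TABLE IF NOT EXISTS dao (
        id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
        contract_address text NOT NULL,
        network text NOT NULL,           -- polygon | xdai | mainnet
        name text,                       -- from the off-chain metadata file
        fetched_at timestamptz NOT NULL DEFAULT now(),
        UNIQUE (contract_address, network)
      );
      CREATE TABLE IF NOT EXISTS dao_player (
        dao_id uuid NOT NULL REFERENCES dao (id),
        player_id uuid NOT NULL,
        PRIMARY KEY (dao_id, player_id)
      );
    `);
  } finally {
    await client.end();
  }
}
```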

alalonde avatar Feb 19 '22 18:02 alalonde

@alalonde, what would adding additional systems buy us?

At a glance, I'd say stick to Postgres. What columns would you be planning to define in the `dao` table?

dysbulic avatar Feb 21 '22 12:02 dysbulic

Started digging into this today. My plan:

  1. On the `/players` page, fetch from our own tables (`dao_player` and `guild_player`).
  2. For the existing `daohausMemberships` field: whenever it is queried, cache the results in the `dao_player` table (see the sketch after this list). For now, we can still use this field / remote schema; loading a specific player will still incur the performance penalty of hitting DAOHaus.
  3. It appears that Hasura Cloud finally supports remote schema caching, so that is an option for reducing said penalty on individual player pages. Alternatively, could we periodically generate the player pages server-side, so that we're not fetching this data from the client at all?
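A rough sketch of step 2: the resolver could mirror each membership into the cache tables whenever `daohausMemberships` is resolved. The `Membership` shape and the table columns are assumptions carried over from the hypothetical migration above:

```typescript
import { Client } from 'pg';

// Sketch of step 2: mirror resolved memberships into the cache tables.
// Table and column names match the hypothetical migration above,
// not any confirmed schema.
interface Membership {
  contractAddress: string;
  network: string; // polygon | xdai | mainnet
  daoName?: string;
}

async function cacheMemberships(
  client: Client,
  playerId: string,
  memberships: Membership[],
): Promise<void> {
  for (const m of memberships) {
    // Upsert the DAO itself, refreshing its fetched_at timestamp.
    const { rows } = await client.query(
      `INSERT INTO dao (contract_address, network, name)
       VALUES ($1, $2, $3)
       ON CONFLICT (contract_address, network)
       DO UPDATE SET name = EXCLUDED.name, fetched_at = now()
       RETURNING id`,
      [m.contractAddress, m.network, m.daoName ?? null],
    );
    // Link the player to the DAO, ignoring duplicate links.
    await client.query(
      `INSERT INTO dao_player (dao_id, player_id)
       VALUES ($1, $2)
       ON CONFLICT DO NOTHING`,
      [rows[0].id, playerId],
    );
  }
}
```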

alalonde avatar Jul 19 '22 04:07 alalonde