
Memory rise on `hasura metadata apply`

Open · g-jgreen opened this issue 3 months ago · 2 comments

Version Information

Server Version: 2.48.0
CLI Version (for CLI related issue): version=2.48.0

Environment

EE

What is the current behaviour?

When applying metadata that includes permissions and allow-list rules, we see a significant rise in memory usage on the Hasura instance.

The largest files/directories that are imported are:

  • Our query_collections.yaml file, around 2.5 MB in size:

    2.5M Sep 24 17:36 query_collections.yaml

  • Our tables directory, consisting of 138 YAML files totalling 1368K (1.37 MB):

    ➜  tables ls -la
    total 1368
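
For reference, a rough sketch of how these sizes can be measured, assuming a config v3 project layout where tables live under `metadata/databases/<db>/tables/` (adjust the paths to your own project):

```sh
# Size of the query collections file
ls -lh metadata/query_collections.yaml

# Number of per-table YAML files and their combined size
# ("default" is a placeholder database name)
ls metadata/databases/default/tables/*.yaml | wc -l
du -sh metadata/databases/default/tables
```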

Yet importing this enlarges the memory footprint to as much as 6.5 GB, from a baseline of around 2 GB. We can of course provision enough memory, but it seems unusual to require this much during this operation.

Similar to this report: https://github.com/hasura/graphql-engine/issues/10601#issuecomment-2491657246

What is the expected behaviour?

A fairly normal memory footprint, considering the size of the metadata files.

How to reproduce the issue?

  1. Have tables & permissions combined with an allow list of queries
  2. Run `hasura metadata apply` (see the sketch below)
  3. Observe the memory rise
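
A minimal sketch of step 2, assuming a CLI project pointed at a local instance (the endpoint and admin secret are placeholders), with a second terminal watching container memory in a Docker-based setup:

```sh
# Apply the project metadata (tables, permissions, allow list, query collections)
hasura metadata apply --endpoint http://localhost:8080 --admin-secret myadminsecret

# In another terminal: watch the engine's memory while the apply runs
# ("graphql-engine" is a placeholder container name)
docker stats graphql-engine
```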

Screenshots or Screencast

Base footprint: [screenshot]

Footprint during import: [screenshot]

Please provide any traces or logs that could help here.

Standard logging, nothing unusual here.

Any possible solutions/workarounds you're aware of?

None

Keywords

Hasura CLI, Memory

g-jgreen · Sep 24 '25 15:09

The spike isn’t directly correlated to the raw size of the YAML files, but to how the engine expands and validates the permissions, allow-lists, and query collections when applying metadata. That expansion step can cause a short-lived but very noticeable increase in memory usage.

A couple of notes:

  • The jump you’re describing (2 GB → 6.5 GB) does sound high relative to the file sizes, but it’s not unusual for metadata apply to need a multiple of the base footprint during validation.
  • We don’t have a knob you can turn today to limit that, and there isn’t a functional workaround beyond making sure the instance has enough headroom to handle the spike (one illustrative way to provision that is sketched below).
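
For illustration only, one way to provision that headroom in a Docker-based deployment; the `--memory=8g` limit is a hypothetical figure based on the numbers reported above, not an official sizing recommendation:

```sh
# Run the engine with enough memory to absorb the apply-time spike
# (connection string, secret, and memory limit are placeholders)
docker run -d --name graphql-engine \
  --memory=8g \
  -e HASURA_GRAPHQL_DATABASE_URL='postgres://user:pass@host:5432/db' \
  -e HASURA_GRAPHQL_ADMIN_SECRET='myadminsecret' \
  -p 8080:8080 \
  hasura/graphql-engine:v2.48.0
```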

I'll pass this on to the team internally, though.

robertjdominguez · Sep 30 '25 18:09

Noticed a similar issue; see also #9988. It's quite surprising, since our schema is not huge.

jerr0328 · Nov 07 '25 12:11