
Hasura "node limit" is being calculated incorrectly

Open · arcticmatt opened this issue on Jun 02 '22 · 7 comments

Version Information

Server Version: v2.6.2-cloud.1

Environment

Cloud

What is the expected behaviour?

I expect the node limit to be based on the number of nodes in the response.

For more info on node limits, see https://hasura.io/docs/latest/graphql/cloud/security/api-limits/#node-limits
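For reference, this is roughly how we configure the limit on our project through the metadata API (the project URL, admin-secret handling, and role name below are placeholders, and the exact args shape is the one described in the api-limits docs linked above, if I'm reading them correctly):

// Sketch only: URL, role name, and values are placeholders. Node 18+ (built-in fetch) assumed.
const METADATA_URL = 'https://my-project.hasura.app/v1/metadata';

async function setNodeLimit(limit: number): Promise<void> {
  const res = await fetch(METADATA_URL, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'x-hasura-admin-secret': process.env.HASURA_ADMIN_SECRET ?? '',
    },
    body: JSON.stringify({
      type: 'set_api_limits',
      args: {
        disabled: false,
        node_limit: {
          global: limit,               // e.g. 200
          per_role: { user: limit },   // 'user' is a placeholder role
        },
      },
    }),
  });
  if (!res.ok) throw new Error(`set_api_limits failed with status ${res.status}`);
}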

Keywords

Security, node limit

What is the current behaviour?

When I set the node limit to 200, one of my queries starts failing, even though the payload contains far fewer than 200 nodes.

My best guess is that the query contains a lot of fragments, and those fragments often query for the same nodes. And it appears as if the nodes are being double counted.

This is not the desired behavior, because if multiple fragments query for the same node, it doesn't make the query any more expensive.

arcticmatt · Jun 02 '22

Hi @arcticmatt, is it possible for you to include the query that's failing (with node limit 200)? Or a similar example query that can be used to reproduce the issue?

ecthiender · Jun 06 '22

> My best guess is that the query contains a lot of fragments, and those fragments often query for the same nodes. And it appears as if the nodes are being double counted.
>
> This is not the desired behavior, because if multiple fragments query for the same node, it doesn't make the query any more expensive.

Well, it depends on the implementation. In Hasura, the same node is counted as many times as it appears in the query.

tirumaraiselvan · Jun 06 '22

We were able to reproduce the issue with the query below. As you can see, we have 3 overlapping fragments. We expect the node count to be 4, since each object field is a node (scalar fields are not counted as nodes). Instead, the node count is 10, because each object field in each fragment is counted as a separate node.

query MyQuery {
  Users {
    ...User_Fragment
    ...User_Fragment_2
    ...User_Fragment_3
  }
}

fragment User_Fragment on Users {
  User {
    User {
      User {
        id
      }
    }
  }
}

fragment User_Fragment_2 on Users {
  User {
    User {
      User {
        id
      }
    }
  }
}

fragment User_Fragment_3 on Users {
  User {
    User {
      User {
        id
      }
    }
  }
}
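To make the difference concrete, here is a small counting sketch using graphql-js (just an illustration of the two counting strategies, not Hasura's implementation). It counts every field that carries a sub-selection, expanding fragment spreads, once per appearance and once per unique path:

import { parse, Kind } from 'graphql';
import type { DocumentNode, FragmentDefinitionNode, SelectionSetNode } from 'graphql';

const MY_QUERY = `
  query MyQuery {
    Users { ...User_Fragment ...User_Fragment_2 ...User_Fragment_3 }
  }
  fragment User_Fragment on Users { User { User { User { id } } } }
  fragment User_Fragment_2 on Users { User { User { User { id } } } }
  fragment User_Fragment_3 on Users { User { User { User { id } } } }
`;

// Counts fields that have a sub-selection (object fields, not scalars),
// expanding fragment spreads as it goes:
//  - perAppearance: every occurrence counts, even when fragments overlap (10 here)
//  - uniquePaths:   overlapping selections are merged by path (4 here)
function countNodes(doc: DocumentNode) {
  const fragments = new Map<string, FragmentDefinitionNode>();
  for (const def of doc.definitions) {
    if (def.kind === Kind.FRAGMENT_DEFINITION) fragments.set(def.name.value, def);
  }

  let perAppearance = 0;
  const uniquePaths = new Set<string>();

  const walk = (selectionSet: SelectionSetNode, path: string): void => {
    for (const sel of selectionSet.selections) {
      if (sel.kind === Kind.FIELD && sel.selectionSet) {
        const fieldPath = `${path}/${sel.name.value}`;
        perAppearance += 1;
        uniquePaths.add(fieldPath);
        walk(sel.selectionSet, fieldPath);
      } else if (sel.kind === Kind.FRAGMENT_SPREAD) {
        const frag = fragments.get(sel.name.value);
        if (frag) walk(frag.selectionSet, path); // expand the spread at the current path
      } else if (sel.kind === Kind.INLINE_FRAGMENT) {
        walk(sel.selectionSet, path);
      }
    }
  };

  for (const def of doc.definitions) {
    if (def.kind === Kind.OPERATION_DEFINITION) walk(def.selectionSet, '');
  }
  return { perAppearance, uniquePaths: uniquePaths.size };
}

console.log(countNodes(parse(MY_QUERY)));
// -> { perAppearance: 10, uniquePaths: 4 }

The per-appearance number matches what we observe from Hasura; the unique-path number is what we expected.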

tiafouroohi · Jun 07 '22

> Well, it depends on the implementation. In Hasura, the same node is counted as many times as it appears in the query.

@tirumaraiselvan why would you want to do that? If you have 10 fragments but they all query for the same fields, that's just as expensive as if you only had 1 fragment.

arcticmatt · Jun 08 '22

Thanks for the example @tiafouroohi, and apologies for the delay. It looks like this is indeed a bug, but it is also a super edge case. I can't imagine a scenario where one would define the same selection set with different fragment names in practice. Let me know if my assessment is incorrect.

That said, we have added this to our backlog, but it currently has low priority, as we don't deem it to be a severe bug. We'll update here once a fix is released.

cc @tirumaraiselvan

ecthiender · Jun 17 '22

@ecthiender I don't think it's an edge case; this is a really common scenario if you use Relay, which is one of the more widely used GraphQL clients.

For example, we may have a fragment in Component1 like so:

const fragment = graphql`
  fragment Component1_MetadataAccount on MetadataAccount {
    mint
  }
`;

And then we may have another fragment in Component2 like so:

const fragment = graphql`
  fragment Component2_MetadataAccount on MetadataAccount {
    mint
  }
`;

These components access the same data but use different fragments. See https://relay.dev/docs/guided-tour/rendering/fragments/ for more info about this pattern.
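At build time, Relay composes those component fragments into the single operation that is actually sent to the server, so Hasura ends up seeing both spreads side by side. Roughly like this (the query and field names here are hypothetical, just continuing the example above):

import { graphql } from 'react-relay';

// Hypothetical composed query, continuing the example above. Both fragments
// request the same data; when fragments like these select nested relationships
// rather than a single scalar, each copy adds to the node count.
const query = graphql`
  query AppQuery {
    metadataAccounts {
      ...Component1_MetadataAccount
      ...Component2_MetadataAccount
    }
  }
`;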

Note that this issue is rather important, as it forces us to set the node limit excessively high, which affects security.

cc @tirumaraiselvan

arcticmatt · Aug 24 '22

@arcticmatt many apologies for the late reply. We have picked up the issue and are working on a fix. Once the fix is merged and released, I'll post an update in this thread :)

Thanks again for reporting the issue and for your patience :heart:

Naveenaidu · Oct 09 '22

@arcticmatt thank you very much for your patience.

After spending some time working on this issue, we acknowledge your expectation for how the node limit should be calculated in the presence of fragments. However, because of the way inlining works in Hasura, we will need to hold off on this fix until we have refactored our inlining computations, which is a significant effort.
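To clarify what "inlining" means here: every fragment spread gets expanded into the selection set where it is used, before node counting happens. A rough sketch of the idea using graphql-js (our actual implementation is in Haskell, so this is only an illustration):

import { parse, print, visit, Kind } from 'graphql';
import type { DocumentNode, FragmentDefinitionNode } from 'graphql';

// Replace every fragment spread with the fragment's own selections (as an
// inline fragment). After this step, overlapping fragments become verbatim
// copies of the same selections, so a node counter that runs on the inlined
// query can no longer tell duplicates from intentional repetition.
function inlineFragments(doc: DocumentNode): DocumentNode {
  const fragments = new Map<string, FragmentDefinitionNode>();
  for (const def of doc.definitions) {
    if (def.kind === Kind.FRAGMENT_DEFINITION) fragments.set(def.name.value, def);
  }
  return visit(doc, {
    FragmentSpread(node) {
      const frag = fragments.get(node.name.value);
      if (!frag) return undefined; // leave unknown spreads untouched
      return {
        kind: Kind.INLINE_FRAGMENT,
        typeCondition: frag.typeCondition,
        selectionSet: frag.selectionSet,
      };
    },
  });
}

const inlined = inlineFragments(parse(`
  query { Users { ...F ...F2 } }
  fragment F on Users { User { id } }
  fragment F2 on Users { User { id } }
`));
console.log(print(inlined)); // the identical User { id } selection now appears twice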

Naveenaidu · Oct 28 '22