graphql-query-complexity
Feature Request: Support adding cost on types
Hi. I was wondering if it would make sense to add support for defining a cost on GraphQL types, like this:
type Customer @complexity(value: 5) {
  name: String
}
type Employee @complexity(value: 10) {
  job: String
}
So, for example, in the following query we can specify different complexities based on the type:
query MyQuery {
  People {
    entities(first: 10) {
      nodes {
        ... on Customer {
          name
        }
        ... on Employee {
          job
        }
      }
    }
  }
}
In the previous example, the cost would be the max complexity of the two types, in this case 10.
But if, for example, I query only for Customer like this:
query MyQuery {
  People {
    entities(first: 10) {
      nodes {
        ... on Customer {
          name
        }
      }
    }
  }
}
The complexity for this query would be 5 instead.
A similar implementation exists in this library: graphql-cost-analysis.
I am happy to contribute a PR, but I wanted to get your opinion first.
Thanks.
I am not sure if the directive estimator would be the right approach for that. I see some challenges with polymorphic types and calculating the complexity when there are multiple competing complexities on a field.
In your example, with the default logic of the GraphQL library, the object might also be resolved internally (probably consuming resources); it would just return an empty object for Employee.
I would probably create a custom estimator for that with a map of types to complexities. This estimator can then automatically assign the complexity for each field. If you have something generic, I'm happy to merge a PR if you want to add this to the library; if I remember correctly, this has come up before.
Something like:
const estimator = typeComplexityEstimator({
  complexities: {
    Employee: 10,
    Customer: 5,
    // ... more types
  },
})
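To make the idea concrete, here is a minimal sketch of what such a factory could look like. It assumes (as this library's estimators do) that each estimator receives the field's resolved type and the complexity of its child selection set, and returns undefined to defer to the next estimator in the chain; the interface below is a simplified stand-in, not the library's actual ComplexityEstimatorArgs.

```typescript
type TypeComplexityMap = Record<string, number>;

// Simplified stand-in for the estimator arguments; the real library passes
// the full GraphQL field and type objects instead of a plain type name.
interface EstimatorArgs {
  typeName: string;        // named return type of the field being estimated
  childComplexity: number; // complexity already computed for the selection set
}

function typeComplexityEstimator(options: { complexities: TypeComplexityMap }) {
  return (args: EstimatorArgs): number | undefined => {
    const cost = options.complexities[args.typeName];
    if (cost === undefined) {
      // Unknown type: defer to the next estimator in the chain.
      return undefined;
    }
    // Charge the type's cost on top of whatever its children already cost.
    return cost + args.childComplexity;
  };
}
```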
It could also be implemented as a directive estimator, as you mentioned in your example. But I would keep this separate so as not to mix defining complexities on fields with defining them on types. Users can then decide for themselves where in the estimator chain the type complexity sits by changing the order.
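As a configuration sketch of that ordering (typeComplexityEstimator is the hypothetical factory from the snippet above; getComplexity, directiveEstimator, and simpleEstimator are the library's existing exports, and schema/query are assumed to be defined elsewhere):

```typescript
import {
  getComplexity,
  directiveEstimator,
  simpleEstimator,
} from 'graphql-query-complexity';

const complexity = getComplexity({
  schema,
  query,
  variables: {},
  estimators: [
    // Hypothetical type-based estimator; listed first, so a matching type wins.
    typeComplexityEstimator({ complexities: { Employee: 10, Customer: 5 } }),
    // Field-level @complexity directives apply where no type cost matched.
    directiveEstimator({ name: 'complexity' }),
    // Fallback so every remaining field still gets a cost.
    simpleEstimator({ defaultComplexity: 1 }),
  ],
});
```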
Some things we should clarify:
- Can you also define complexities on union / interface types?
- How are they calculated if there are competing complexities (one type that implements multiple interfaces with overlapping fields, etc.)?
- How are fields with non-named types handled? For example, multi-dimensional arrays.
Thanks a lot for your reply.
I was having a look at the custom estimator solution. We would need a way to account for the number of items we are querying. For example, in this query:
query MyQuery {
  People {
    entities(first: 10) {
      nodes {
        ... on Customer {
          name
        }
      }
    }
  }
}
We can specify that the complexity for Customer is 5, but I am not sure how we would take into account that we are querying 10 items, based on the input first: 10. I cannot see any way of doing this without adding a directive on the entities field telling the estimator to use the first argument as a multiplier. But if we do this, we would again be mixing complexities on fields and types, right?
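Concretely, what I would need is an estimator along these lines (names are illustrative; I am assuming, as in this library, that estimators receive the field's arguments and the complexity of the child selection, so the per-type costs of the nodes would flow up through childComplexity):

```typescript
// Simplified stand-in for the estimator arguments on a paginated list field.
interface MultiplierArgs {
  fieldArgs: { first?: number }; // arguments passed to the field, e.g. first: 10
  childComplexity: number;       // already includes the per-type cost of each node
}

// Multiply the child complexity by the requested page size, so a type cost
// of 5 per Customer combined with first: 10 yields 50 for the list field.
function listMultiplierEstimator(args: MultiplierArgs): number | undefined {
  if (args.fieldArgs.first === undefined) {
    return undefined; // not a paginated field; defer to the next estimator
  }
  return args.fieldArgs.first * args.childComplexity;
}
```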
Regarding your questions about the directive approach:
Can you also define complexities on union / interface types?
I don't see why not. But if we have conflicts between complexities, I would follow the same approach as in this library and pick the max complexity.
How are they calculated if there are competing complexities (one type that implements multiple interfaces with overlapping fields, etc.)?
As before, I would pick the max value in case of a conflict. I guess that is the safest approach, but I am open to suggestions.
How are fields with non-named types handled? For example, multi-dimensional arrays.
I am not sure about this one. Would it be possible to rely on another estimator for these cases?
Thanks again!