Hasura caching issue when using remote schema and graphql mesh
Version Information
Server Version: 2.41.0
Environment
Cloud
What is the current behaviour?
I am working with remote schemas, using graphql-mesh and a Kotlin backend exposed via openapi.json. I created a test endpoint on the Kotlin BE, to which I am sending a mutation request through Hasura. This is the request:
mutation MyMutation($input: CacheTestRequestInput_gbo) {
  backoffice {
    performCachingTest(input: $input) {
      ... on CacheTestResponse_gbo {
        __typename
        body
      }
    }
  }
}
For the above request, I am sending various values for the input parameters:
{ "input":{"body":"apple1", "id":"1", "count": "ABCD"} }
And this is the response I get:
{ "data": { "backoffice": { "performCachingTest": { "__typename": "CacheTestResponse_gbo", "body": "id=[1], body=[apple1], count=[ABCD]" } } } }
Now, if I trigger the request again with a different body, let's say
{ "input":{"body":"apple20", "id":"5", "count": "ABCD"} }
I still get the previous response.
However, if I try to send the request later (let's say after 1-2 minutes), I get the correct payload.
It is not a Kotlin BE issue, and it is not a graphql-mesh issue. Why do I say that? Because I tried sending the request straight to graphql-mesh, bypassing Hasura, and there is no issue that way. I suspect it is some Hasura caching behaviour I am not aware of.
What is the expected behaviour?
The expected behaviour is that no matter what values I send for the input parameters, and no matter how quickly I send the requests, the response should match the request 1:1. If I send apple1, I should receive apple1 in the response. If I send HelloWorld133, I should receive HelloWorld133 and not apple1 (the previous request's response).
How to reproduce the issue?
- Create a BE endpoint (Kotlin or whatever BE you want)
- Use openapi.json for schema
- create a graphql-mesh server to serve the requests
- configure the Hasura remote schema
- trigger the mutation request
Screenshots or Screencast
Trial 1: I get the correct response
Trial 2: Changing the body, and getting the same response
Please provide any traces or logs that could help here.
Inspecting payload
Inspecting response
Any possible solutions/workarounds you're aware of?
Keywords
hasura request to remote schema sends wrong payload but it works properly if I avoid sending request through Hasura
I switched the OpenAPI source to a public one, which I integrated into graphql-mesh: https://petstore.swagger.io/
This is how the mesh file looks:
COPY <<EOF .meshrc.yaml
sources:
  - name: Backoffice
    handler:
      openapi:
        source: https://petstore.swagger.io/v2/swagger.json
        endpoint: https://petstore.swagger.io/v2/
transforms:
  - namingConvention:
      mode: wrap
      typeNames: pascalCase
      enumValues: constantCase
      fieldNames: camelCase
      fieldArgumentNames: camelCase
serve:
  hostname: 0.0.0.0
EOF
Still a caching issue here as well.
However, if there are no query variables, then there is no caching issue. Confirmed on both examples.
bump
Update: This seems to happen only when I use a namespace in the GraphQL customization, i.e. when adding a suffix or prefix. For some reason, variable names in Hasura do not seem to fully match the ones from the remote schema when a prefix/suffix is used.
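For context, the customization in question is Hasura's remote schema customization in the metadata. A sketch of what ours roughly looks like (the remote schema name and URL here are illustrative, not our real values):

```yaml
remote_schemas:
  - name: backoffice-mesh
    definition:
      url: http://mesh:4000/graphql
    customization:
      type_names:
        prefix: ""
        suffix: _gbo   # produces the *_gbo type names seen in the mutation above
```

With this in place, the `CacheTestRequestInput_gbo` / `CacheTestResponse_gbo` names come from Hasura's suffixing, while the underlying remote schema still uses the unsuffixed names.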
Hey @claudiuman
I read your case. To reproduce, I tested a mutation on my remote schema (which happens to be another Hasura project in my case), and I can see the results being returned as expected.
Some inputs needed from your end.
- Have you verified without Hasura, i.e. sent requests to your original BE with Hasura out of the loop and checked the responses? That would confirm whether the issue originates in the BE.
- Also, Hasura does not cache any mutation results. According to this doc, it only caches a result when the operation is a query and it has the @cached directive. Are you facing this issue only with mutations, or with queries as well?
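For reference, this is the general shape of a query that opts into Hasura Cloud's response cache via the @cached directive (the field names here are just illustrative):

```graphql
query ProductList @cached(ttl: 120) {
  products {
    id
    name
  }
}
```

Mutations have no equivalent, which is why a stale mutation response through Hasura is surprising.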
Hi @meetzaveri. Thank you for your answer!
Just a question first: did you add a suffix/prefix to the mutation types? Because it happens only in that case.
I have verified Hasura against another open source GraphQL remote schema as well (thinking it might be a BE issue on our side), and I face the same issue there when adding a suffix/prefix to the remote schema types. I also checked without Hasura, sending the request straight to the BE, and it works that way! I also tried GraphQL Yoga instead of Hasura, and there is no issue there either!
It happens only on mutations, and only when we have added a suffix. If we do not set any suffix/prefix, then it works great! (We need this suffix/prefix feature so that we do not have to modify the code for the type names.)
Hi there! Any news on this issue?
Bump!
bump!
bump!
I've been looking into this issue and think I might have an approach to fix it.
The problem seems to be in how Hasura generates cache keys for remote schema mutations when type transformations are applied. I suspect the cache key generation doesn't differentiate between the original variable names and the transformed ones with suffixes.
Here's what I'm thinking:
- The cache key should include both the original schema context AND the transformation config
- Variable resolution needs to happen after type name transformation, not before
- We might need separate cache namespaces for each remote schema customization
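To make the first point concrete, here is a minimal sketch (not Hasura's actual implementation; all names are hypothetical) of a cache key that hashes the customization config alongside the query and variables:

```python
import hashlib
import json

def cache_key(query, variables, customization=None):
    """Build a cache key for a remote-schema operation (illustrative only).

    The idea: hash the query, the variables, AND the remote-schema
    customization config (prefix/suffix), so an operation against a
    customized schema never shares a key with the uncustomized form.
    """
    payload = {
        "query": query,
        "variables": variables,
        # If this field were missing, two requests differing only in the
        # applied type-name transformation could collide on one key.
        "customization": customization or {},
    }
    blob = json.dumps(payload, sort_keys=True).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()

# Different variables produce different keys, so "apple20" can never
# be served the cached "apple1" response.
k1 = cache_key("mutation M($i: I) { f(i: $i) }", {"i": {"body": "apple1"}})
k2 = cache_key("mutation M($i: I) { f(i: $i) }", {"i": {"body": "apple20"}})
assert k1 != k2
```

Whether Hasura's key derivation actually omits the customization config is exactly what would need to be confirmed in the codebase.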
@rakeshkky I'd be happy to dig into the codebase and submit a PR if this direction makes sense. Could you point me to the relevant files for remote schema variable handling and cache key generation?
This has been blocking production use of remote schema customization for way too long - would love to help get it fixed.