amplify-codegen
Configure validation of the GraphQL schema
Before opening, please confirm:
- [X] I have installed the latest version of the Amplify CLI (see above), and confirmed that the issue still persists.
- [X] I have searched for duplicate or closed issues.
- [X] I have read the guide for submitting bug reports.
- [X] I have done my best to include a minimal, self-contained set of instructions for consistently reproducing the issue.
How did you install the Amplify CLI?
npm
If applicable, what version of Node.js are you using?
v18.15.0
Amplify CLI Version
11.0.3
What operating system are you using?
Windows
Amplify Codegen Command
codegen
Describe the bug
During code generation for TypeScript types, a union causes type generation to fail due to conflicting field types in the schema.
Expected behavior
It generates the correct graphqlTypes.ts without issue for conflicting types. If necessary, provide configuration to turn off GraphQL validation for this rule.
Reproduction steps
- Set up codegen with defaults
- Use the supplied GraphQL schema
- Run amplify codegen
GraphQL schema(s)
# Put schemas below this line
schema {
  query: Query
}

type Query {
  get: [Unioned]
}

union Unioned = Simple | More_Complex

type Complex {
  foo: String
}

type More_Complex {
  value: Complex
}

type Simple {
  value: String
}
Log output
# Put your logs below this line
- Generating.../XX/graphql/queries.ts: Fields "value" conflict because they return conflicting types "String" and "Complex". Use different aliases on the fields to fetch both if this was intentional.
.../XX/graphql/queries.ts: Fields "value" conflict because they return conflicting types "String" and "Complex". Use different aliases on the fields to fetch both if this was intentional.
Validation of GraphQL query document failed
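For context on the error: this message appears to come from graphql-js's overlapping-fields validation (OverlappingFieldsCanBeMerged). The generated selection set selects value on both union members without aliases, and Simple.value (String) cannot share a response shape with More_Complex.value (Complex). Below is a minimal sketch of the kind of document codegen writes into queries.ts, plus an aliased variant that the validator accepts; the query name and exact generated contents are assumptions, not the verbatim output.

// Sketch only: illustrates the selection shape that trips the validator,
// not the exact file Amplify codegen writes.
export const getConflicting = /* GraphQL */ `
  query Get {
    get {
      __typename
      ... on Simple {
        value            # String
      }
      ... on More_Complex {
        value {          # Complex: conflicts with the String selection above
          __typename
          foo
        }
      }
    }
  }
`;

// Aliasing one of the fields, as the error message suggests, keeps the same
// data reachable while satisfying the rule; the generated response type would
// then expose complexValue instead of value for More_Complex results.
export const getAliased = /* GraphQL */ `
  query Get {
    get {
      __typename
      ... on Simple {
        value
      }
      ... on More_Complex {
        complexValue: value {
          __typename
          foo
        }
      }
    }
  }
`;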
Additional information
Here is the output with GraphQL validation turned off; this is what I want:
/* tslint:disable */
/* eslint-disable */
// This file was automatically generated and should not be edited.
export type Unioned = Simple | More_Complex

export type Simple = {
  __typename: "Simple",
  value?: string | null,
};

export type More_Complex = {
  __typename: "More_Complex",
  value?: Complex | null,
};

export type Complex = {
  __typename: "Complex",
  foo?: string | null,
};

export type GetQuery = {
  get: Array<( {
    __typename: "Simple",
    value?: string | null,
  } | {
    __typename: "More_Complex",
    value?: {
      __typename: string,
      foo?: string | null,
    } | null,
  }
  ) | null > | null,
};
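For what it's worth, the desired output above is a well-formed discriminated union, so downstream code can narrow it on __typename. A hypothetical consumer sketch, assuming the generated types are exported from graphqlTypes.ts exactly as shown:

import { Unioned } from "./graphqlTypes";

// Hypothetical helper: narrows the generated union via its __typename literal.
function describeUnioned(item: Unioned): string {
  switch (item.__typename) {
    case "Simple":
      // Narrowed to Simple, so value is an optional string here.
      return `Simple: ${item.value ?? "(no value)"}`;
    case "More_Complex":
      // Narrowed to More_Complex, so value is an optional Complex here.
      return `More_Complex foo: ${item.value?.foo ?? "(no foo)"}`;
    default: {
      // Exhaustiveness check: compilation fails if a new union member appears.
      const unreachable: never = item;
      return unreachable;
    }
  }
}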
Hey, thanks for raising this! I'm going to transfer this over to our API repository for better assistance.
I haven't had a huge amount of help with my other issues on the API repo, I'm afraid, @ykethan. If you have any ideas I'd love to know!
I have opened an AWS support case for this particular issue too, but it seems like something with Amplify caused it to fail and get into a bad state; hopefully support can help me recover it.
Hey @pr0g, I apologize for the inconvenience you've experienced. Have you been contacted by a member of our support team regarding this issue?
Hi @AnilMaktala, thanks for getting back to me. That's okay, I'm speaking to someone from support who's contacted the CloudFormation team to help restore it. One of the nested stacks is reporting to its parent that it's in UPDATE_COMPLETE, but internally it's in UPDATE_ROLLBACK_FAILED, so the rollback can't be continued from the root stack. Apparently that's a symptom of Drift, but I don't know how that could have happened as I was using the Amplify CLI to perform all operations. I'll report back with an update hopefully when it's sorted. Thanks!
From the description, it is most likely that the failure of the deployment is caused by the change you mentioned: "I made this change, along with a few other updates to amplify.json".
As you mention that the error comes from ConnectionStack, it should be related to the feature flags for the connection changes rather than the auth resolver ones. I examined the diff file and found that there are other flags added apart from populateOwnerFieldForStaticGroupAuth: true.
As a workaround for the update rollback failure, there are already steps mentioned by another customer (see https://github.com/aws-amplify/amplify-category-api/issues/2157#issuecomment-2017234685) about adding dummy resolvers for the ones with errors (in your case, the ones in the connection stacks), which should help you resolve the rollback issue.
Once you roll back successfully, I suggest keeping only populateOwnerFieldForStaticGroupAuth: true and removing the other changes from the diff, which should prevent unintended changes/failures to the resolvers/resources.
Hi @AaronZyLee,
Thanks for your reply. Yes, in hindsight I should have been a good scientist and only changed one thing at a time (lesson learned again). The reason I updated these flags is that I'd been meaning to do it ever since @ykethan suggested it in this post. I realize I probably should have done this afterwards though (less haste, more speed).
I did see the post you mentioned, but unfortunately I don't think it will work for me, because the root stack doesn't think it's in an UPDATE_ROLLBACK_FAILED state, only the nested stacks do, so I'm basically stuck without intervention from AWS support (I will follow up with them again tomorrow to see where things have got to).
Might it be possible to delete the ConnectionStack and have it get redeployed? I've shied away from doing that because I didn't want to make things worse, but maaaybe that might work?
Thanks for the feedback, and it's good to know for the future, but ideally right now I just need a way of recovering things and getting back to a good state.
Hey @pr0g, Are you still experiencing this issue?
Hi @AnilMaktala,
Thanks for following up. I was able to talk to AWS support and got my CloudFormation stack back to UPDATE_ROLLBACK_COMPLETE, but unfortunately when I try to do an Amplify push, things are still failing. I've been talking with the AWS Amplify support team and have managed to narrow things down a bit.
I'm going to try syncing back to earlier in our Git history, to when this problem occurred, and do an amplify push --force to see if I can get the deployment to succeed (this is after deleting the deployment.json file in S3). I think that now that the environment is so out of sync with what's in Git, trying to do an amplify push is causing problems (the failure happens when trying to update a model/table that's been removed).
I'm going to try and get to this later this week and will leave an update if that works.
Thanks!
This issue is now closed. Comments on closed issues are hard for our team to see. If you need more assistance, please open a new issue that references this one.