flagger
Is it possible for a canary to use a separate deployment configuration?
Hi, when we use canary deployments for our apps, sometimes the deployment fails. On failure, Flagger scales down the canary pods and routes traffic back to the primary. In some cases the new canary app updates the database schema automatically, and since the primary and canary share the same configuration, a rolled-back canary leaves the schema changed, which can cause problems for the primary deployment. I think this can happen when the canary has a bug or a misconfiguration that prevents it from starting. Is there any workaround for this case?
Thanks
I'm running into this as well. I have a pre-install hook to run db migrations. Combined with Flux and the Helm Operator, I end up with two workloads, both of which try to run the migration. One of them works; the other fails to start its init containers and eventually the entire deployment is marked as failed in Flux.
Are Jobs officially supported? I'm pretty stuck on this right now.
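For reference, the hook is shaped roughly like this (a minimal sketch; the Job name, image, and command are placeholders, not my actual chart):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate                                  # placeholder name
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: registry.example.com/app-migrations:1.0.0   # placeholder image
          command: ["/app/migrate", "up"]                     # placeholder command
```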
Hi, for db migrations I use Flyway for database versioning. It prevents the database from being updated twice by multiple instances: if one pod has already applied the new schema version, the other pods skip the migration on startup.
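A minimal sketch of that setup, with Flyway run as an init container so every pod checks the schema history table before the app starts (the image tag, JDBC URL, and secret names are placeholders):

```yaml
initContainers:
  - name: flyway-migrate
    image: flyway/flyway:9                 # assumption: the official Flyway image
    args: ["migrate"]
    # SQL migration files would be mounted under /flyway/sql
    env:
      - name: FLYWAY_URL
        value: jdbc:postgresql://db.example.svc:5432/app    # placeholder JDBC URL
      - name: FLYWAY_USER
        valueFrom:
          secretKeyRef:
            name: db-credentials           # placeholder secret
            key: username
      - name: FLYWAY_PASSWORD
        valueFrom:
          secretKeyRef:
            name: db-credentials
            key: password
```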
But this won't solve the issue when the canary is rolled back: the shared database schema has already been changed, which will leave the primary malfunctioning.
You can use a post-rollout hook and call into a service that does a migration rollback if the canary phase is Failed.
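A hedged sketch of what that could look like in the Canary analysis spec; the webhook name and URL are placeholders for whatever service performs the rollback:

```yaml
analysis:
  webhooks:
    - name: undo-db-migration
      type: post-rollout                    # runs after the canary is promoted or rolled back
      url: http://migration-rollback.test/  # placeholder endpoint
      timeout: 30s
```

Flagger posts the canary name, namespace, and phase to the hook, so the rollback service can act only when the phase is Failed.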
Is there any flag to check whether the status is promoted or rolled back? One more thing: the primary might hit a critical error during the process. Only after the canary is promoted, or the database is rolled back on failure, can the primary get back to normal, and the data written during that window could be corrupted.
How about just using pod labels to switch the application's behavior between primary and canary (for example, use separate databases for primary and canary)? We can expose labels as environment variables via fieldRef.
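A minimal sketch of that idea, assuming a hypothetical env var name and label key:

```yaml
env:
  - name: APP_ROLE                                                 # hypothetical variable the app reads
    valueFrom:
      fieldRef:
        fieldPath: metadata.labels['app.kubernetes.io/instance']   # hypothetical label key
```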
@mathetake, the pod labels on the primary and canary are identical, so currently this is not possible. Please check this issue: #1547