EntityFramework.Docs
Code-first migrations in highly replicated environments
Hello all,
I've been working with EF Core using the Code First approach with migrations. As I move towards deploying in a Kubernetes cluster, I've encountered challenges related to handling migrations in a highly replicated environment.
Problem
During a rolling release with multiple replicas, if one pod initiates a migration that introduces breaking changes, other pods might not be able to handle the new database schema. This can lead to inconsistencies and potential failures.
Questions
- How can EF Core migrations be safely managed in a rolling release scenario in Kubernetes?
- Are there recommended patterns or practices for ensuring that all pods can handle both the old and new schema during the transition?
- How can we prevent simultaneous migrations from multiple pods?
Any guidance or best practices in this area would be greatly appreciated. I aim to achieve zero-downtime deployments without resorting to a complete service swap.
Thank you for your assistance!
Are there recommended patterns or practices for ensuring that all pods can handle both the old and new schema during the transition?
Zero-downtime migrations are complex, and are generally a matter of never making incompatible/breaking changes in a single migration; instead, you deploy a backwards-compatible migration for the transition phase, followed by a second "cleanup" migration. For example, rather than renaming a column (which breaks application instances that haven't been upgraded yet), you can deploy a migration that only adds the new column, and later - after all instances have been upgraded to the new code version - deploy a second migration that removes the old one. Ensuring that your code works correctly during the transitional state is the responsibility of your application, and requires careful planning.
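For illustration, the transition for such a column rename could look like the following pair of hand-edited migrations (the class, table, and column names here are made-up examples, not anything from a real project):

```csharp
using Microsoft.EntityFrameworkCore.Migrations;

// Phase 1 (backwards-compatible): add the new column alongside the old one.
// Old app instances keep using "UserName"; upgraded instances use "DisplayName".
public partial class AddDisplayNameColumn : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.AddColumn<string>(
            name: "DisplayName",
            table: "Users",
            nullable: true);

        // Backfill the new column from the old one.
        migrationBuilder.Sql("UPDATE Users SET DisplayName = UserName");
    }

    protected override void Down(MigrationBuilder migrationBuilder)
        => migrationBuilder.DropColumn(name: "DisplayName", table: "Users");
}

// Phase 2 (cleanup): deployed only after all instances run the new code.
public partial class DropUserNameColumn : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
        => migrationBuilder.DropColumn(name: "UserName", table: "Users");

    protected override void Down(MigrationBuilder migrationBuilder)
        => migrationBuilder.AddColumn<string>(
            name: "UserName", table: "Users", nullable: true);
}
```

During the transition phase the application also has to keep both columns in sync (e.g. write to both on update), which is exactly the careful planning mentioned above.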
We should indeed at least have some minimal docs on this; I haven't found an issue tracking that (/cc @ajcvickers @bricelam), we can maybe use this issue for that.
How can we prevent simultaneous migrations from multiple pods?
We very explicitly discourage applying migrations from your application - especially when it has multiple instances; see our docs. Consider applying migrations to the database as part of your deployment (and not at application startup), using either migration bundles or SQL scripts.
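Both options can be produced from the EF Core command-line tools during the build; a sketch (output paths and the environment variable name are examples):

```shell
# Option 1: a self-contained migration bundle, executed as a deployment step.
dotnet ef migrations bundle --self-contained -r linux-x64 -o ./efbundle
./efbundle --connection "$DB_CONNECTION_STRING"

# Option 2: an idempotent SQL script, applied with your database tooling.
# --idempotent makes the script safe to re-run against a database that
# already has some of the migrations applied.
dotnet ef migrations script --idempotent -o ./migrate.sql
```

Either way, the migration runs exactly once per deployment, instead of racing across application instances.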
Thank you for the insight.
In a Kubernetes environment it is often a bit more complex than expected. Of course, I could "simply" run a SQL migration script in my Azure DevOps pipeline, which then pushes a new image to the Docker registry.
Newer strategies like GitOps (e.g. via ArgoCD and similar tools) make this even more difficult, since the deployed version is always specified by my Kubernetes manifest files in a Git repo.
Maybe someone here with experience in this environment has an idea of how something like that could be implemented as a separate step. My approach would probably be an init container.
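One way to keep the migration out of the application pods, including in a GitOps setup, is a dedicated Kubernetes Job that runs the migration bundle before the new ReplicaSet scales up. With Argo CD this can be ordered via a pre-sync hook annotation. A sketch (the image name and secret name are placeholders):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
  annotations:
    # Argo CD: run this Job before syncing the rest of the manifests.
    argocd.argoproj.io/hook: PreSync
    argocd.argoproj.io/hook-delete-policy: HookSucceeded
spec:
  backoffLimit: 0            # don't blindly retry a half-applied migration
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: myregistry/myapp-migrations:1.2.3   # placeholder image
          command: ["./efbundle"]
          args: ["--connection", "$(DB_CONNECTION_STRING)"]
          env:
            - name: DB_CONNECTION_STRING
              valueFrom:
                secretKeyRef:
                  name: db-credentials               # placeholder secret
                  key: connectionString
```

Unlike an init container, which runs once per pod (so every replica would attempt the migration), such a Job runs once per deployment, which also sidesteps the concurrent-migration problem.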
@roji Is there any way to automatically detect whether a migration (bundle) might introduce breaking changes? A CD pipeline should be able to take this into account - for the developer's own protection.
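As far as I know there is no built-in breaking-change detection, but a pipeline can apply a crude heuristic to the generated SQL script: require manual approval when statements that typically break not-yet-upgraded instances appear. A minimal sketch (the pattern list is an assumption and will produce both false positives and false negatives):

```shell
#!/bin/sh
# Heuristic gate for a CD pipeline: scan a generated EF Core migration
# SQL script for statements that commonly break old application versions.
# Returns success (0) when a suspicious statement is found.
check_breaking() {
  [ -f "$1" ] && grep -Eiq 'DROP TABLE|DROP COLUMN|ALTER COLUMN|sp_rename' "$1"
}

# Typical pipeline usage (paths are examples):
#   dotnet ef migrations script --idempotent -o migrate.sql
#   if check_breaking migrate.sql; then
#     echo "Potentially breaking migration - requires manual approval" >&2
#     exit 1
#   fi
```

This only inspects the SQL text, so renames done via data motion or changes hidden in stored procedures would slip through; it is a safety net, not a guarantee.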
Note from triage: document blue/green migrations approaches.