Restore with object metadata.ownerReferences and metadata.finalizers
Describe the problem/challenge you have
In my scenario, every Deployment or StatefulSet has a related Service if ports are defined in its podTemplate. When the Deployment or StatefulSet is deleted, the related Service is deleted by the Kubernetes garbage collector, since it has an ownerReference to the Deployment or StatefulSet.
I use Velero to back up and restore, copying a namespace from cluster A to cluster B, but I found that the Service in cluster B doesn't have the ownerReference. After searching the code, I found this may be related to resetMetadataAndStatus (https://github.com/vmware-tanzu/velero/blob/1e24d6ce718fc8fee890db34643e5be361fa5c7d/pkg/restore/restore.go#L1576), which deletes all metadata other than name/namespace/labels/annotations.
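For illustration, here is a minimal sketch (not Velero's actual code; it just mimics the behavior described above on an unstructured object) of why ownerReferences and finalizers go missing after a restore:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// resetMetadata mimics the behavior described above: every metadata
// field except name/namespace/labels/annotations is dropped.
func resetMetadata(obj *unstructured.Unstructured) {
	meta, found, err := unstructured.NestedMap(obj.Object, "metadata")
	if err != nil || !found {
		return
	}
	for k := range meta {
		switch k {
		case "name", "namespace", "labels", "annotations":
			// keep
		default:
			delete(meta, k) // ownerReferences, finalizers, uid, ... are removed
		}
	}
	_ = unstructured.SetNestedMap(obj.Object, meta, "metadata")
}

func main() {
	svc := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "v1",
		"kind":       "Service",
		"metadata": map[string]interface{}{
			"name":      "my-nginx",
			"namespace": "nginx-example",
			"ownerReferences": []interface{}{
				map[string]interface{}{"kind": "Deployment", "name": "nginx-deployment"},
			},
		},
	}}
	resetMetadata(svc)
	fmt.Println(svc.Object["metadata"]) // ownerReferences is gone
}
```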
Describe the solution you'd like
Restore objects with their metadata.ownerReferences and metadata.finalizers preserved.
Anything else you would like to add:
Issues or PRs that may be related:
- https://github.com/vmware-tanzu/velero/pull/837/files
- https://github.com/vmware-tanzu/velero/issues/2632
Environment:
- Velero version (use `velero version`): v1.8.0
- Kubernetes version (use `kubectl version`): v1.23.2
- Kubernetes installer & version: kubeadm-v1.23.2
- Cloud provider or hardware configuration: bare metal
- OS (e.g. from `/etc/os-release`): Ubuntu 20.04.1 LTS (Focal Fossa)
Vote on this issue!
This is an invitation to the Velero community to vote on issues; you can see the project's top-voted issues listed here.
Use the "reaction smiley face" up to the right of this comment to vote.
- :+1: for "The project would be better with this feature added"
- :-1: for "This feature will not enhance the project in a meaningful way"
Service example:

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: my-nginx
  namespace: nginx-example
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: Deployment
    name: nginx-deployment
    uid: 13b53874-028e-4cc0-8a5f-ac413cb4bba9
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
```
@polym
Thanks for the issue.
I think if we keep the `ownerReferences` in `resetMetadataAndStatus`, your problem may be solved? I'm not sure if it's OK to replicate the `uid` field.
This change was made years before I started working on this project, so I need to discuss with the other maintainers whether there would be any negative impact if we decide to keep the `ownerReferences` during the restore.
> I'm not sure if it's OK to replicate the uid field.
@reasonerjt AFAIK, a `uid` is unique within a K8s cluster, so replicating the `uid` field will cause problems when copying ns1 to ns2 in the same cluster. A new `uid` is fine in my scenario. For now, my workaround is this (see the sketch after this list):
- Use the Velero restore plugin mechanism to inject the ownerReferences/finalizers, JSON-encoded, into an annotation.
- Run a long-running program that periodically lists K8s resources, filters out those carrying the ownerReferences/finalizers annotation, looks up the new `uid`, and updates the resources' object metadata fields.
Alternatively, is it possible to control the order in which resources are created, since the `uid` in an ownerReference is required? The solution would be more native if Velero could control the order.
@Lyndon-Li any progress on this?
@bsctl This is a complicated problem for Velero: Velero lacks a mechanism to handle dependencies between resources. We will do a comprehensive design instead of just fixing the current problem. That takes time, and we also need to find a proper release to take it in, so please stay tuned to this issue; we will update it once there is any progress.
We will keep adding/linking resource-dependency-related requirements/issues here, so that we have enough input for an ultimate solution.
One more dependency case is the Service ExternalName feature: https://kubernetes.io/docs/concepts/services-networking/service/#externalname:~:text=pod%3A%20true-,Type%20ExternalName,-Services%20of%20type
With ExternalName, a Service may be redirected to a target Service in a different namespace. In a restore scenario, if the target Service's namespace is changed, the source Service will stop working unless its externalName field is changed accordingly.
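For instance (all names here are made up), an ExternalName Service bakes the target's namespace into a DNS name, so a namespace remap during restore silently breaks it:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Made-up example: "frontend" in namespace "app" aliases "backend"
	// in namespace "shared". If "shared" is remapped during a restore,
	// externalName still points at the old namespace and resolution breaks.
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "frontend", Namespace: "app"},
		Spec: corev1.ServiceSpec{
			Type:         corev1.ServiceTypeExternalName,
			ExternalName: "backend.shared.svc.cluster.local",
		},
	}
	fmt.Printf("%s -> %s\n", svc.Name, svc.Spec.ExternalName)
}
```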
Another problem that could be solved by a dependency solution: a proper restore order could be derived from the resources' dependencies, so that there are no conflicts during restore. An example is #5068.
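As an illustration only (not a Velero API; the item keys are made up), a dependency-aware restore order boils down to a topological sort over an owner/owned graph:

```go
package main

import "fmt"

// topoSort orders items so that every item comes after the items it
// depends on (Kahn's algorithm); any cyclic leftovers are appended at
// the end so nothing is silently dropped.
func topoSort(deps map[string][]string) []string {
	indegree := map[string]int{}
	for item, ds := range deps {
		if _, ok := indegree[item]; !ok {
			indegree[item] = 0
		}
		for _, d := range ds {
			if _, ok := indegree[d]; !ok {
				indegree[d] = 0
			}
			indegree[item]++ // item waits on d
		}
	}
	var queue, order []string
	for item, n := range indegree {
		if n == 0 {
			queue = append(queue, item)
		}
	}
	for len(queue) > 0 {
		cur := queue[0]
		queue = queue[1:]
		order = append(order, cur)
		for item, ds := range deps {
			for _, d := range ds {
				if d == cur {
					indegree[item]--
					if indegree[item] == 0 {
						queue = append(queue, item)
					}
				}
			}
		}
	}
	for item, n := range indegree {
		if n > 0 {
			order = append(order, item) // cycle fallback
		}
	}
	return order
}

func main() {
	// The Service depends on its owning Deployment, because the
	// ownerReference needs the owner's uid in the new cluster.
	deps := map[string][]string{
		"Service/my-nginx": {"Deployment/nginx-deployment"},
	}
	fmt.Println(topoSort(deps)) // [Deployment/nginx-deployment Service/my-nginx]
}
```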