Unusual warnings were seen for Velero backups.
What steps did you take and what happened: Installed Velero version 1.9.0 with Restic integration, then deployed schedules to take backups of cluster resources, namespaces, and pod volumes on Kubernetes clusters on GCP. The schedules run successfully, but starting from the first backup, warnings report that some resources can't be backed up.
What did you expect to happen:
Backups to complete with no warnings.
The following information will help us better understand what's going on:
Below is the log output from the backup, filtered to warnings only. This behavior is seen across multiple existing environment clusters, both with heavy workloads and with none.
velero backup logs nsbackup-hourly-20220902202933 | grep warning
time="2022-09-02T20:30:10Z" level=warning msg="Additional item was not found in Kubernetes API, can't back it up" backup=velero/nsbackup-hourly-20220902202933 groupResource=clusterroles.rbac.authorization.k8s.io logSource="pkg/backup/item_backupper.go:337" name=canonical-service-proxy-role namespace= resource=serviceaccounts
time="2022-09-02T20:30:13Z" level=warning msg="Additional item was not found in Kubernetes API, can't back it up" backup=velero/nsbackup-hourly-20220902202933 groupResource=clusterroles.rbac.authorization.k8s.io logSource="pkg/backup/item_backupper.go:337" name=mdp-controller namespace= resource=serviceaccounts
I could attach the debug bundle, but it contains the bucket endpoint details.
Additional details on describing the backup
vagrant@me ~$ velero describe backup nsbackup-hourly-20220902202933 --details
Name:         nsbackup-hourly-20220902202933
Namespace:    velero
Labels:       velero.io/schedule-name=nsbackup-hourly
              velero.io/storage-location=default
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"velero.io/v1","kind":"Schedule","metadata":{"annotations":{},"name":"nsbackup-hourly","namespace":"velero"},"spec":{"schedule":"@every 1h","template":{"hooks":{},"includedNamespaces":["*"],"ttl":"720h0m0s"}}}
              velero.io/source-cluster-k8s-gitversion=v1.21.13-gke.900
              velero.io/source-cluster-k8s-major-version=1
              velero.io/source-cluster-k8s-minor-version=21
Phase: Completed
Errors:    0
Warnings:  2
Namespaces:
Included: *
Excluded:
Resources:
Included: *
Excluded:
Label selector:
Storage Location: default
Velero-Native Snapshot PVs: auto
TTL: 720h0m0s
Hooks:
Backup Format Version: 1.1.0
Anything else you would like to add:
Environment:
- Velero version: v1.9.0
- Velero features (use velero client config get features): <NOT SET>
- Kubernetes version (use kubectl version): v1.21.14-gke.700
- Kubernetes installer & version:
- Cloud provider or hardware configuration: Google Cloud
- OS (e.g. from /etc/os-release):
Vote on this issue!
This is an invitation to the Velero community to vote on issues; you can see the project's top-voted issues listed here.
Use the "reaction smiley face" up to the right of this comment to vote.
- :+1: for "I would like to see this bug fixed as soon as possible"
- :-1: for "There are more important bugs to focus on right now"
time="2022-09-02T20:30:10Z" level=warning msg="Additional item was not found in Kubernetes API, can't back it up" backup=velero/nsbackup-hourly-20220902202933 groupResource=clusterroles.rbac.authorization.k8s.io logSource="pkg/backup/item_backupper.go:337" name=canonical-service-proxy-role namespace= resource=serviceaccounts
According to the warning message, when executing the backup item action, the clusterrole named "canonical-service-proxy-role" was not found in the Kubernetes API, so this object was excluded from the backup. Please check whether this object exists in your cluster and serves a purpose. If not, the warning won't affect the result.
Thanks for your response @allenxu404.
I don't see any clusterrole with that name in my cluster, and I checked across all namespaces, since no namespace is given in the log.
Any idea why we see this behavior?
So the warning won't affect a restore from that backup. My guess is that the cause of the warning relates to the backup item action you're using.
Closing this issue b/c the question has been answered.