OpenShift 4.9.x / Kubernetes 1.22: deprecated v1beta1 APIs
Describe the bug
We are running on OpenShift 4.8.x and planning to upgrade to OpenShift 4.9.x, which runs Kubernetes 1.22. That Kubernetes version removes all v1beta1 APIs. Additional information from RedHat: https://access.redhat.com/articles/6329921
I'm opening this bug because ArgoCD still makes calls to the following APIs:
- customresourcedefinitions.v1beta1.apiextensions.k8s.io
- ingresses.v1beta1.extensions
These calls are made by system:serviceaccount:argocd:argocd-application-controller (user agent argocd-application-controller/v0.0.0).
When are you planning to update those calls to stop using the v1beta1 APIs? If we upgrade our OpenShift platform to 4.9.x, will it break ArgoCD?
Screenshots
NAME                                                                  REMOVEDINRELEASE  REQUESTSINCURRENTHOUR  REQUESTSINLAST24H
certificatesigningrequests.v1beta1.certificates.k8s.io                1.22              0                      0
customresourcedefinitions.v1beta1.apiextensions.k8s.io                1.22              54                     1379
flowschemas.v1alpha1.flowcontrol.apiserver.k8s.io                     1.21              0                      0
ingresses.v1beta1.extensions                                          1.22              29                     2950
ingresses.v1beta1.networking.k8s.io                                   1.22              0                      0
mutatingwebhookconfigurations.v1beta1.admissionregistration.k8s.io    1.22              8                      183
validatingwebhookconfigurations.v1beta1.admissionregistration.k8s.io  1.22              0                      0
*** Checking certificatesigningrequests.v1beta1.certificates.k8s.io ***
*** Checking customresourcedefinitions.v1beta1.apiextensions.k8s.io ***
get         system:serviceaccount:argocd:argocd-application-controller   argocd-application-controller/v0.0.0
*** Checking flowschemas.v1alpha1.flowcontrol.apiserver.k8s.io ***
*** Checking ingresses.v1beta1.extensions ***
watch       system:kube-controller-manager                               cluster-policy-controller/v0.0.0
watch       system:kube-controller-manager                               kube-controller-manager/v1.21.11+6b3cbdd
list watch  system:serviceaccount:argocd:argocd-application-controller   argocd-application-controller/v0.0.0
*** Checking ingresses.v1beta1.networking.k8s.io ***
*** Checking mutatingwebhookconfigurations.v1beta1.admissionregistration.k8s.io ***
watch       system:serviceaccount:dynatrace:dynatrace-oneagent-webhook   dynatrace-oneagent-operator/v0.0.0
*** Checking validatingwebhookconfigurations.v1beta1.admissionregistration.k8s.io ***
Per the RedHat KB (https://access.redhat.com/articles/6329921), IMPORTANT NOTE: you can safely ignore the following entries that appear in the results:
- The system:serviceaccount:kube-system:generic-garbage-collector user might appear in the results because it walks through all registered APIs searching for resources to remove.
- The system:kube-controller-manager and system:cluster-policy-controller users might appear in the results because they walk through all resources while enforcing various policies.
- If OpenShift GitOps is installed in the cluster, the system:serviceaccount:openshift-gitops:openshift-gitops-argocd-application-controller user (refer to KCS 6635361 for additional information). Can be ignored -> https://access.redhat.com/solutions/6821411
- If OpenShift Pipelines is installed in the cluster, the openshift-pipelines-operator userAgent (refer to KCS 6821411 for additional information).
- In OSD and ROSA clusters, or if Velero is installed in the cluster, the system:serviceaccount:openshift-velero:velero user (refer to KCS 6351332 for additional information).
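On OpenShift the usage table above comes from APIRequestCount objects (e.g. `oc get apirequestcounts -o json`). As a minimal sketch, not an official tool, here is how such data can be filtered for APIs removed in 1.22 that still receive traffic; the sample items below are illustrative, and the `removedInRelease`/`requestCount` status fields follow the apiserver.openshift.io/v1 APIRequestCount schema:

```python
# Sketch: given APIRequestCount-style items, list APIs that are
# removed in the target release but still received requests.
def removed_apis_in_use(items, release="1.22"):
    """Return names of APIs removed in `release` with recent requests."""
    return [
        i["metadata"]["name"]
        for i in items
        if i["status"].get("removedInRelease") == release
        and i["status"].get("requestCount", 0) > 0
    ]

# Illustrative sample mirroring the table above.
sample = [
    {"metadata": {"name": "ingresses.v1beta1.extensions"},
     "status": {"removedInRelease": "1.22", "requestCount": 2950}},
    {"metadata": {"name": "ingresses.v1beta1.networking.k8s.io"},
     "status": {"removedInRelease": "1.22", "requestCount": 0}},
    {"metadata": {"name": "flowschemas.v1alpha1.flowcontrol.apiserver.k8s.io"},
     "status": {"removedInRelease": "1.21", "requestCount": 3}},
]

print(removed_apis_in_use(sample))  # -> ['ingresses.v1beta1.extensions']
```

Anything with zero requests, or removed in a different release, drops out, which matches the "safe to ignore" guidance above.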
Version
We have many versions of ArgoCD.
Oldest: chart 2.7.5, ArgoCD 1.7.6
Latest: chart 4.5.0, ArgoCD 2.3.4
This affects GCP GKE clusters as well and delays auto-upgrade of clusters until the GKE version is EOL. This detection only registers API calls against the Kubernetes API and not deployed resources.
The error message in question:
This cluster will not be scheduled for an automatic upgrade to v1.22, the next minor version, because your API clients have used deprecated APIs in the last 30 days that are no longer available in this version. Once the cluster reaches end of life on v1.21, it would then be automatically upgraded to v1.22, but upgrading the cluster before it’s migrated to updated APIs could cause it to break.
Does the ArgoCD client make the ambiguous requests detailed in this comment? And while this is not fixed, can we expect an impact on, or further bad interactions with, managed Kubernetes features such as auto-upgrade?
These are the APIs that will be removed:
customresourcedefinitions.v1beta1.apiextensions.k8s.io
ingresses.v1beta1.extensions
and they are being called by
system:serviceaccount:argocd:argocd-application-controller argocd-application-controller/v0.0.0
according to https://kubernetes.io/docs/reference/using-api/deprecation-guide/#v1-22
apiextensions.k8s.io/v1beta1: manifests should be updated to apiextensions.k8s.io/v1
extensions/v1beta1 and networking.k8s.io/v1beta1: manifests should be updated to networking.k8s.io/v1
All I want to know is: when are you planning to update those calls to stop using the v1beta1 APIs? If we upgrade our OpenShift platform to 4.9.x, will it break ArgoCD?
same here for GKE.
We have the same issue in GKE.
+1
We ran into this in GKE as well, and so far what seems to be working is using resource.exclusions to exclude ingresses.v1beta1.extensions. We added this to the argocd-cm ConfigMap:
data:
  resource.exclusions: |
    - apiGroups:
        - extensions
      kinds:
        - Ingress
      clusters:
        - "*"
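For context, a minimal sketch of a full argocd-cm ConfigMap carrying this setting; the argocd namespace is an assumption matching a default install:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd   # assumed default install namespace
data:
  # Stop Argo CD from listing/watching Ingress under the deprecated
  # "extensions" API group on every managed cluster.
  resource.exclusions: |
    - apiGroups:
        - extensions
      kinds:
        - Ingress
      clusters:
        - "*"
```

Apply it with `kubectl apply -n argocd -f argocd-cm.yaml` (or merge the data key into your existing ConfigMap).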
Doesn't that break other things on the ArgoCD side? Is ArgoCD still able to watch and list Kubernetes Ingresses?
Anyway, I will bootstrap 2 GKE clusters, one on 1.21 and the other on 1.22, test whether ArgoCD is okay on 1.22, and see what happens when upgrading from 1.21 to 1.22.
I will put my results here when it's finished.
ETA today or next week.
Thanks @KrustyHack - that would be great information.
I was working with RedHat on a similar issue with another operator, though not ArgoCD. What was interesting is that the operator would check those deprecated APIs simply because the scheme existed, which I believe is happening here too with ArgoCD. Since those APIs exist, ArgoCD is just discovering and checking them.
I'm also planning to update one of my lower environment clusters from OpenShift 4.8.x to 4.9.x and check if ArgoCD still works.
I did a test on a GKE 1.21 cluster:
- I deployed the latest ArgoCD version (from the https://argo-cd.readthedocs.io/en/stable/getting_started/ tutorial)
- I created a hello-world app on it (simple Nginx with an Ingress)
- I checked the logs and found deprecation warnings in the GCP logs
- I upgraded my GKE cluster to 1.22
- I checked my ArgoCD installation; all seems fine
- I checked the logs again. No more deprecation warnings, but I still got these API requests:
methodName: "io.k8s.extensions.v1beta1.ingresses.patch"
@KrustyHack
Doesn't that break other things on the ArgoCD side? Is ArgoCD still able to watch and list Kubernetes Ingresses?
We're not actually using them - that was why the calls to the deprecated API were confusing.
After upgrading the node pools to 1.22, like the control plane, I can still see the deprecated call, but nothing seems to be broken.
My logs :
labels: {
authorization.k8s.io/decision: "allow"
authorization.k8s.io/reason: "RBAC: allowed by ClusterRoleBinding \"argocd-application-controller\" of ClusterRole \"argocd-application-controller\" to ServiceAccount \"argocd-application-controller/argocd\""
k8s.io/deprecated: "true"
k8s.io/removed-release: "1.22"
}
logName: "projects/bits-labs/logs/cloudaudit.googleapis.com%2Factivity"
protoPayload: {
@type: "type.googleapis.com/google.cloud.audit.AuditLog"
authenticationInfo: {
principalEmail: "system:serviceaccount:argocd:argocd-server"
}
methodName: "io.k8s.extensions.v1beta1.ingresses.patch"
resourceName: "extensions/v1beta1/namespaces/test/ingresses/test-hello"
serviceName: "k8s.io"
}
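Logs like the one above can be scanned in bulk. As a hedged sketch (the entry structure below mirrors the Cloud Audit Log fields shown, and the sample entries are illustrative), deprecated calls can be picked out by the k8s.io/deprecated and k8s.io/removed-release labels:

```python
# Sketch: flag audit-log entries whose labels mark the call as hitting
# an API removed in the given release (fields mirror the GKE
# Cloud Audit Log structure shown above).
def deprecated_calls(entries, release="1.22"):
    return [
        e["protoPayload"]["methodName"]
        for e in entries
        if e.get("labels", {}).get("k8s.io/deprecated") == "true"
        and e["labels"].get("k8s.io/removed-release") == release
    ]

# Illustrative sample entries.
entries = [
    {"labels": {"k8s.io/deprecated": "true",
                "k8s.io/removed-release": "1.22"},
     "protoPayload": {"methodName": "io.k8s.extensions.v1beta1.ingresses.patch"}},
    {"labels": {},
     "protoPayload": {"methodName": "io.k8s.networking.v1.ingresses.patch"}},
]

print(deprecated_calls(entries))  # -> ['io.k8s.extensions.v1beta1.ingresses.patch']
```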
The ingress is patched without any problem... I don't understand, lol:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
ingress.kubernetes.io/backends: '{"xxx":"HEALTHY","xxx":"HEALTHY"}'
ingress.kubernetes.io/forwarding-rule: xxx
ingress.kubernetes.io/target-proxy: xxx
ingress.kubernetes.io/url-map: xxx
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"labels":{"app":"hello","app.kubernetes.io/instance":"test","chart":"hello-1.0","heritage":"Helm","release":"test"},"name":"test-hello","namespace":"test"},"spec":{"rules":[{"host":"chart-example1.local","http":{"paths":[{"backend":{"serviceName":"test-hello","servicePort":80},"path":"/"}]}}]}}
test: ok
[...]
To me it seems like the root of the issue is that kubectl api-resources is returning the extensions/v1beta1 version of the Ingress, and ArgoCD is caching that resource type and then polling it. So maybe the real question is why that resource type is still being returned even after the control plane and all nodes are updated to v1.22... 🤔
But besides this mysterious question, can we say that it is safe to go up to 1.22 on our clusters? From what I see, I would say yes. What do you think?
I have been dealing with the same issue for days... I got a response from Google support that explains why: if your GKE cluster is created on version 1.22 or later, the beta API versions are not served.
However, a cluster upgraded from GKE version 1.21 or earlier to version 1.22.7 or later can still serve the beta API versions until version 1.23.
I guess now we need to find a way to stop ArgoCD from listing deprecated beta resources (like the Ingress from extensions/v1beta1). I was looking into disabling that API in the kube-apiserver, but we can't, since the kube-apiserver is controlled by Google in the master control plane.
@cnjohnniekwok thanks for the info from Google! It doesn't make a lot of sense to me why that would be the case, but it's good they have an explanation at least.
I think ArgoCD is just going to look at all* resources returned by a call like kubectl api-resources, which does include the deprecated API.
* there are definitely some that it's hard-coded to ignore
We were able to use resource.exclusions to work around this particular deprecated API.
True, resource exclusions does the trick :D
I've had a similar experience contacting Google support. They are pointing fingers at ArgoCD and refusing to investigate further why they are returning deprecated API resources. Looks like we will have to use the exclusion going forward.
We are running into this issue with GKE as well. Will try out the exclusion trick in the meantime.
Hi, I too was having trouble with calls to the deprecated API customresourcedefinitions.v1beta1.apiextensions.k8s.io.
After some research, it seems that ArgoCD v2.2 and earlier call the v1beta1 API. The solution was to bring ArgoCD up to date (v2.3 or higher), as this was fixed in #8515 .
In case it's helpful for anyone else, we also used the following to identify ArgoCD applications whose manifests contain v1beta1 resources (using yq):
kubectl -n argocd get applications.argoproj.io -o yaml \
| yq '"v1beta1" as $target | .items[] | select( . as $resource | any($resource.status.resources[].version; . == $target)) | { name: .metadata.name, resources: (.status.resources[] | select( .version == $target )) }' -
Once we updated the application sources for those, the resource.exclusions fix above worked for us to stop ArgoCD from periodically querying the endpoints for all resource types returned by kubectl api-resources.
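The yq filter above can also be mirrored in a few lines of Python over the same kubectl output. This is a sketch, not an official tool: the sample Application items are illustrative, and status.resources[].version is the field Argo CD reports per managed resource:

```python
# Sketch: given Application items (e.g. parsed from
# `kubectl -n argocd get applications.argoproj.io -o json`),
# report each app whose status.resources include a v1beta1 version.
def apps_with_version(items, target="v1beta1"):
    hits = {}
    for app in items:
        matches = [r for r in app.get("status", {}).get("resources", [])
                   if r.get("version") == target]
        if matches:
            hits[app["metadata"]["name"]] = matches
    return hits

# Illustrative sample applications.
sample = [
    {"metadata": {"name": "web"},
     "status": {"resources": [
         {"kind": "Ingress", "group": "extensions", "version": "v1beta1"}]}},
    {"metadata": {"name": "api"},
     "status": {"resources": [
         {"kind": "Ingress", "group": "networking.k8s.io", "version": "v1"}]}},
]

print(apps_with_version(sample))  # only "web" is flagged
```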
Facing the same issue, we also added the following to argocd-cm.yml but deprecated calls still happen...
data:
  resource.exclusions: |
    - apiGroups:
        - extensions/v1beta1
      kinds:
        - Ingress
      clusters:
        - "*"
We used extensions/v1beta1 as the exact API group name, but should we set extensions instead?
Yes, in our case we were able to stop Argo from listing the deprecated API by excluding the extensions API group.
resource.exclusions: |
  - apiGroups:
      - extensions
    kinds:
      - Ingress
    clusters:
      - "*"
Thanks @cnjohnniekwok, it works! But now I am wondering how to exclude /apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions.
It does the trick for Ingress because extensions/v1beta1 is replaced by networking.k8s.io/v1 as per this doc; in other words, we can safely exclude the extensions group for the Ingress kind. However, for CustomResourceDefinition we would have to specify the version, because apiextensions.k8s.io/v1beta1 is replaced by apiextensions.k8s.io/v1 within the same group, hence we must name v1beta1 in the exclusion...
How are we supposed to specify a version in ArgoCD resource exclusions?
Ok, the feedback I got was that ArgoCD should be honoring the preferredVersion field:
% kubectl get --raw /apis/networking.k8s.io|jq .
{
"kind": "APIGroup",
"apiVersion": "v1",
"name": "networking.k8s.io",
"versions": [
{
"groupVersion": "networking.k8s.io/v1",
"version": "v1"
},
{
"groupVersion": "networking.k8s.io/v1beta1",
"version": "v1beta1"
}
],
"preferredVersion": {
"groupVersion": "networking.k8s.io/v1",
"version": "v1"
}
}
It sounds like ArgoCD should only be using preferredVersion APIs. If it did that, GKE Autopilot/OpenShift/friends wouldn't be screaming.
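As a sketch of what honoring preferredVersion would look like, here is the selection logic over the APIGroup discovery document shown above (the data below mirrors that jq output; the fallback to the first served version is my assumption, not Argo CD's actual behavior):

```python
# Sketch: pick the preferred groupVersion from a Kubernetes APIGroup
# discovery document (as returned by `kubectl get --raw /apis/<group>`),
# falling back to the first served version if none is marked preferred.
def preferred_group_version(api_group):
    preferred = api_group.get("preferredVersion")
    if preferred:
        return preferred["groupVersion"]
    return api_group["versions"][0]["groupVersion"]

# Mirrors the discovery document shown above.
networking = {
    "kind": "APIGroup",
    "apiVersion": "v1",
    "name": "networking.k8s.io",
    "versions": [
        {"groupVersion": "networking.k8s.io/v1", "version": "v1"},
        {"groupVersion": "networking.k8s.io/v1beta1", "version": "v1beta1"},
    ],
    "preferredVersion": {"groupVersion": "networking.k8s.io/v1",
                         "version": "v1"},
}

print(preferred_group_version(networking))  # -> networking.k8s.io/v1
```

A controller using only this groupVersion would never touch networking.k8s.io/v1beta1, so no deprecated-API audit entries would be generated.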
@meons Presuming they are not misusing the term apiGroups to actually mean groupVersion or apiVersion, there is no way of doing what you're asking. You would need to put in a feature request to add a versions field to resource exclusion/inclusion.
BTW, if you are trying to hunt down deprecated CRDs that ArgoCD may be applying, you might want to try the following command:
for app in $(argocd app list -o name); do argocd app manifests $app | yq e 'select(.apiVersion == "apiextensions.k8s.io/v1beta1") | .metadata.name'; done
You'll need the argocd CLI set up. Using kubectl instead would throw back the served version, which won't be what ArgoCD is trying to apply.