Argo Rollout CRD resource propagated to member cluster: field types in the created Work resource are inconsistent
Environment:
Karmada version: v1.5.0
Kubernetes version: v1.20.8
Using Argo Rollouts. CRD installation and canary sample references: https://argo-rollouts.readthedocs.io/en/stable/installation/ and https://argo-rollouts.readthedocs.io/en/stable/features/canary/
First, install the Argo Rollout CRD on the Karmada apiserver:
k get crd rollouts.argoproj.io
Then submit the Argo Rollout resource to Karmada:
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  labels:
    app: nginx
  name: nginx-rollout
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx
  strategy:
    canary:
      maxSurge: 100%
      maxUnavailable: 0
      steps:
      - setWeight: 40
      - pause: {}
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
Querying the Work resource events:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning SyncFailed 9m46s (x19 over 31m) execution-controller Failed to sync work(nginx-rollout-6fc47b96ff) to cluster(member-1): Rollout.argoproj.io "nginx-rollout" is invalid: spec.strategy.canary.steps: Invalid value: "array": spec.strategy.canary.steps in body must be of type object: "array"
Querying the Work resource manifest, the spec.strategy.canary.steps definition is:
apiVersion: work.karmada.io/v1alpha1
kind: Work
metadata:
  ....
spec:
  workload:
    manifests:
    - apiVersion: argoproj.io/v1alpha1
      kind: Rollout
      metadata:
        annotations:
          kubectl.kubernetes.io/last-applied-configuration: |
            {"apiVersion":"argoproj.io/v1alpha1","kind":"Rollout","metadata":{"annotations":{},"labels":{"app":"nginx"},"name":"nginx-rollout","namespace":"default"},"spec":{"progressDeadlineSeconds":600,"replicas":2,"revisionHistoryLimit":10,"selector":{"matchLabels":{"app":"nginx"}},"strategy":{"canary":{"maxSurge":"100%","maxUnavailable":0,"steps":[{"setWeight":40},{"pause":{}}]}},"template":{"metadata":{"labels":{"app":"nginx"}},"spec":{"containers":[{"image":"nginx","imagePullPolicy":"Always","name":"nginx","resources":{},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30}}}}
        ....
      spec:
        ...
        strategy:
          canary:
            maxSurge: 100%
            maxUnavailable: 0
            steps:
            - setWeight: 40
            - []
      ......
status:
  conditions:
  - lastTransitionTime: "2023-06-01T09:31:38Z"
    message: 'Failed to apply all manifests (0/1): Rollout.argoproj.io "nginx-rollout"
      is invalid: spec.strategy.canary.steps: Invalid value: "array": spec.strategy.canary.steps
      in body must be of type object: "array"'
    reason: AppliedFailed
    status: "False"
    type: Applied
Looking at the definition of spec.strategy.canary.steps in the Work manifest, the type of the second step has changed from an object ({}) to an array ([]), which does not match the original type definition, so applying it to the member cluster fails.
I'm trying to figure out why Karmada changes the field type. Please let me know if you have investigated this.
Which version of argo-rollouts are you using? I'm trying to reproduce it on my side.
My preliminary guess is that it happens when the serialized resource is filled into the Work manifests, and that it is related to empty objects: fields that hold values keep their type, but an empty object ends up as an array.
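To illustrate this suspicion, here is a minimal Go sketch (my own illustration, not Karmada's actual code path) showing how the in-memory representation of an empty value decides whether it is serialized as an object or an array, which is exactly the difference between pause: {} and pause: []:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// An empty map marshals to a JSON object, matching `pause: {}`.
	asObject := map[string]interface{}{"pause": map[string]interface{}{}}

	// An empty slice marshals to a JSON array, producing `pause: []`,
	// which the Rollout CRD schema rejects ("must be of type object").
	asArray := map[string]interface{}{"pause": []interface{}{}}

	objJSON, _ := json.Marshal(asObject)
	arrJSON, _ := json.Marshal(asArray)
	fmt.Println(string(objJSON)) // {"pause":{}}
	fmt.Println(string(arrJSON)) // {"pause":[]}
}

So if the empty object loses its "map-ness" somewhere on the way into the Work manifest, the member cluster's apiserver sees an array and rejects it.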
Tried both latest and 1.1
I tried to reproduce it on my side against Karmada v1.5.0 and it works well. Here are my steps:
- Launch the testing environment with hack/local-up-karmada.sh based on Karmada v1.5.0.
- Apply the Rollout CRDs on both the Karmada apiserver and member1 as per https://argo-rollouts.readthedocs.io/en/stable/installation/#controller-installation.
- Apply the Rollout yaml from the issue description above.
- Apply a PropagationPolicy as follows:
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  resourceSelectors:
  - apiVersion: argoproj.io/v1alpha1
    kind: Rollout
    name: nginx-rollout
  placement:
    clusterAffinity:
      clusterNames:
      - member1
I can see the Applied status is True in the Work:
# kubectl get works.work.karmada.io -n karmada-es-member1
NAME APPLIED AGE
nginx-rollout-6fc47b96ff True 15s
...
Part of the Work is:
# kubectl get works.work.karmada.io -n karmada-es-member1 nginx-rollout-6fc47b96ff -o yaml
apiVersion: work.karmada.io/v1alpha1
kind: Work
metadata:
  finalizers:
  - karmada.io/execution-controller
  generation: 1
  labels:
    resourcebinding.karmada.io/key: 8b774b865
  name: nginx-rollout-6fc47b96ff
  namespace: karmada-es-member1
  resourceVersion: "1675"
  uid: 0948193d-1c9c-41d0-9b13-7deaa565ba28
spec:
  workload:
    manifests:
    - apiVersion: argoproj.io/v1alpha1
      kind: Rollout
      metadata:
        name: nginx-rollout
        namespace: default
      spec:
        progressDeadlineSeconds: 600
        replicas: 2
        revisionHistoryLimit: 10
        selector:
          matchLabels:
            app: nginx
        strategy:
          canary:
            maxSurge: 100%
            maxUnavailable: 0
            steps:
            - setWeight: 40
            - pause: {}
I can see that .spec.strategy.canary.steps didn't change.
I tried the above steps and there was no problem. Then I found the difference from the configuration that produces the error:
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  resourceSelectors:
  - apiVersion: argoproj.io/v1alpha1
    kind: Rollout
  placement:
    replicaScheduling:
      replicaDivisionPreference: Weighted
      replicaSchedulingType: Divided
    clusterAffinity:
      clusterNames:
      - member1
Adding the replicaScheduling field will reproduce the problem.
I still can't reproduce it with replicaScheduling:
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  resourceSelectors:
  - apiVersion: argoproj.io/v1alpha1
    kind: Rollout
    name: nginx-rollout
  placement:
    replicaScheduling:
      replicaDivisionPreference: Weighted
      replicaSchedulingType: Divided
    clusterAffinity:
      clusterNames:
      - member1
I tested it several times and it can indeed be reproduced with replicaScheduling. I will post the parameters of the other Karmada components to see if there is any difference.
karmada-controller-manager:
- command:
- /bin/karmada-controller-manager
- --kubeconfig=/etc/kubeconfig
- --bind-address=0.0.0.0
- --cluster-status-update-frequency=10s
- --secure-port=10357
- --leader-elect-resource-namespace=karmada-system
- --v=4
image: docker.io/karmada/karmada-controller-manager:v1.5.0
karmada-webhook:
- command:
- /bin/karmada-webhook
- --kubeconfig=/etc/kubeconfig
- --bind-address=0.0.0.0
- --secure-port=8443
- --cert-dir=/var/serving-cert
- --v=4
image: docker.io/karmada/karmada-webhook:v1.5.0
karmada-aggregated-apiserver:
- command:
- /bin/karmada-aggregated-apiserver
- --kubeconfig=/etc/kubeconfig
- --authentication-kubeconfig=/etc/kubeconfig
- --authorization-kubeconfig=/etc/kubeconfig
- --etcd-servers=https://etcd-0.etcd.karmada-system.svc.cluster.local:2379
- --etcd-cafile=/etc/karmada/pki/etcd-ca.crt
- --etcd-certfile=/etc/karmada/pki/etcd-client.crt
- --etcd-keyfile=/etc/karmada/pki/etcd-client.key
- --tls-cert-file=/etc/karmada/pki/karmada.crt
- --tls-private-key-file=/etc/karmada/pki/karmada.key
- --audit-log-path=-
- --feature-gates=APIPriorityAndFairness=false
- --audit-log-maxage=0
- --audit-log-maxbackup=0
image: docker.io/karmada/karmada-aggregated-apiserver:v1.5.0
karmada-scheduler:
- command:
- /bin/karmada-scheduler
- --kubeconfig=/etc/kubeconfig
- --bind-address=0.0.0.0
- --secure-port=10351
- --enable-scheduler-estimator=true
- --leader-elect=true
- --leader-elect-resource-namespace=karmada-system
- --v=4
image: docker.io/karmada/karmada-scheduler:v1.5.0
karmada-search:
- command:
- /bin/karmada-search
- --kubeconfig=/etc/kubeconfig
- --authentication-kubeconfig=/etc/kubeconfig
- --authorization-kubeconfig=/etc/kubeconfig
- --etcd-servers=https://etcd-0.etcd.karmada-system.svc.cluster.local:2379
- --etcd-cafile=/etc/karmada/pki/etcd-ca.crt
- --etcd-certfile=/etc/karmada/pki/etcd-client.crt
- --etcd-keyfile=/etc/karmada/pki/etcd-client.key
- --tls-cert-file=/etc/karmada/pki/karmada.crt
- --tls-private-key-file=/etc/karmada/pki/karmada.key
- --audit-log-path=-
- --feature-gates=APIPriorityAndFairness=false
- --audit-log-maxage=0
- --audit-log-maxbackup=0
image: docker.io/karmada/karmada-search:v1.5.0
karmada-apiserver:
- command:
- kube-apiserver
- --allow-privileged=true
- --authorization-mode=Node,RBAC
- --client-ca-file=/etc/karmada/pki/ca.crt
- --enable-bootstrap-token-auth=true
- --etcd-cafile=/etc/karmada/pki/etcd-ca.crt
- --etcd-certfile=/etc/karmada/pki/etcd-client.crt
- --etcd-keyfile=/etc/karmada/pki/etcd-client.key
- --etcd-servers=https://etcd-0.etcd.karmada-system.svc.cluster.local:2379
- --bind-address=0.0.0.0
- --kubelet-client-certificate=/etc/karmada/pki/karmada.crt
- --kubelet-client-key=/etc/karmada/pki/karmada.key
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --disable-admission-plugins=StorageObjectInUseProtection,ServiceAccount
- --runtime-config=
- --apiserver-count=1
- --secure-port=5443
- --service-account-issuer=https://kubernetes.default.svc.cluster.local
- --service-account-key-file=/etc/karmada/pki/karmada.key
- --service-account-signing-key-file=/etc/karmada/pki/karmada.key
- --service-cluster-ip-range=10.96.0.0/12
- --proxy-client-cert-file=/etc/karmada/pki/front-proxy-client.crt
- --proxy-client-key-file=/etc/karmada/pki/front-proxy-client.key
- --requestheader-allowed-names=front-proxy-client
- --requestheader-client-ca-file=/etc/karmada/pki/front-proxy-ca.crt
- --requestheader-extra-headers-prefix=X-Remote-Extra-
- --requestheader-group-headers=X-Remote-Group
- --requestheader-username-headers=X-Remote-User
- --tls-cert-file=/etc/karmada/pki/apiserver.crt
- --tls-private-key-file=/etc/karmada/pki/apiserver.key
image: registry.aliyuncs.com/google_containers/kube-apiserver:v1.20.8
kube-controller-manager:
- command:
- kube-controller-manager
- --allocate-node-cidrs=true
- --authentication-kubeconfig=/etc/kubeconfig
- --authorization-kubeconfig=/etc/kubeconfig
- --bind-address=0.0.0.0
- --client-ca-file=/etc/karmada/pki/ca.crt
- --cluster-cidr=10.244.0.0/16
- --cluster-name=karmada-apiserver
- --cluster-signing-cert-file=/etc/karmada/pki/ca.crt
- --cluster-signing-key-file=/etc/karmada/pki/ca.key
- --controllers=namespace,garbagecollector,serviceaccount-token,ttl-after-finished,bootstrapsigner,tokencleaner,csrapproving,csrcleaner,csrsigning
- --kubeconfig=/etc/kubeconfig
- --leader-elect=true
- --leader-elect-resource-namespace=karmada-system
- --node-cidr-mask-size=24
- --root-ca-file=/etc/karmada/pki/ca.crt
- --service-account-private-key-file=/etc/karmada/pki/karmada.key
- --service-cluster-ip-range=10.96.0.0/12
- --use-service-account-credentials=true
- --v=4
image: registry.aliyuncs.com/google_containers/kube-controller-manager:v1.20.8
etcd:
- command:
- /usr/local/bin/etcd
- --config-file=/etc/etcd/etcd.conf
image: registry.aliyuncs.com/google_containers/etcd:3.5.3-0
I see you are using kube-apiserver:v1.20.8 (I'm using registry.k8s.io/kube-apiserver:v1.25.2). Please check whether .spec.strategy.canary.steps changed on the Karmada apiserver.
This problem still exists with v1.25.2
I brought up a fresh environment with kubectl karmada init --kube-image-registry=registry.aliyuncs.com/google_containers --kube-image-mirror-country=cn, and it runs normally there. I then created the following ResourceInterpreterCustomization, which reproduces the problem:
apiVersion: config.karmada.io/v1alpha1
kind: ResourceInterpreterCustomization
metadata:
  name: declarative-configuration-argo-rollout
spec:
  target:
    apiVersion: argoproj.io/v1alpha1
    kind: Rollout
  customizations:
    replicaResource:
      luaScript: >
        local kube = require("kube")
        function GetReplicas(obj)
          replica = obj.spec.replicas
          requirement = kube.accuratePodRequirements(obj.spec.template)
          return replica, requirement
        end
    replicaRevision:
      luaScript: >
        function ReviseReplica(obj, desiredReplica)
          obj.spec.replicas = desiredReplica
          return obj
        end
    retention:
      luaScript: >
        function Retain(desiredObj, observedObj)
          desiredObj.spec.paused = observedObj.spec.paused
          return desiredObj
        end
    statusAggregation:
      luaScript: >
        function AggregateStatus(desiredObj, statusItems)
          if statusItems == nil then
            return desiredObj
          end
          if desiredObj.status == nil then
            desiredObj.status = {}
          end
          replicas = 0
          availableReplicas = 0
          readyReplicas = 0
          updatedReadyReplicas = 0
          updatedReplicas = 0
          for i = 1, #statusItems do
            if statusItems[i].status ~= nil and statusItems[i].status.replicas ~= nil then
              replicas = replicas + statusItems[i].status.replicas
            end
            if statusItems[i].status ~= nil and statusItems[i].status.availableReplicas ~= nil then
              availableReplicas = availableReplicas + statusItems[i].status.availableReplicas
            end
            if statusItems[i].status ~= nil and statusItems[i].status.readyReplicas ~= nil then
              readyReplicas = readyReplicas + statusItems[i].status.readyReplicas
            end
            if statusItems[i].status ~= nil and statusItems[i].status.updatedReadyReplicas ~= nil then
              updatedReadyReplicas = updatedReadyReplicas + statusItems[i].status.updatedReadyReplicas
            end
            if statusItems[i].status ~= nil and statusItems[i].status.updatedReplicas ~= nil then
              updatedReplicas = updatedReplicas + statusItems[i].status.updatedReplicas
            end
          end
          desiredObj.status.replicas = replicas
          desiredObj.status.availableReplicas = availableReplicas
          desiredObj.status.readyReplicas = readyReplicas
          desiredObj.status.updatedReadyReplicas = updatedReadyReplicas
          desiredObj.status.updatedReplicas = updatedReplicas
          return desiredObj
        end
    statusReflection:
      luaScript: >
        function ReflectStatus (observedObj)
          return observedObj.status
        end
    healthInterpretation:
      luaScript: >
        function InterpretHealth(observedObj)
          return observedObj.status.readyReplicas == observedObj.spec.replicas
        end
    dependencyInterpretation:
      luaScript: >
        function GetDependencies(desiredObj)
          dependentSas = {}
          refs = {}
          if desiredObj.spec.template.spec.serviceAccountName ~= '' and desiredObj.spec.template.spec.serviceAccountName ~= 'default' then
            dependentSas[desiredObj.spec.template.spec.serviceAccountName] = true
          end
          local idx = 1
          for key, value in pairs(dependentSas) do
            dependObj = {}
            dependObj.apiVersion = 'v1'
            dependObj.kind = 'ServiceAccount'
            dependObj.name = key
            dependObj.namespace = desiredObj.metadata.namespace
            refs[idx] = dependObj
            idx = idx + 1
          end
          return refs
        end
I reproduced it on my side with the following ResourceInterpreterCustomization:
apiVersion: config.karmada.io/v1alpha1
kind: ResourceInterpreterCustomization
metadata:
  name: declarative-configuration-argo-rollout
spec:
  target:
    apiVersion: argoproj.io/v1alpha1
    kind: Rollout
  customizations:
    replicaResource:
      luaScript: >
        local kube = require("kube")
        function GetReplicas(obj)
          replica = obj.spec.replicas
          requirement = kube.accuratePodRequirements(obj.spec.template)
          return replica, requirement
        end
    replicaRevision:
      luaScript: >
        function ReviseReplica(obj, desiredReplica)
          obj.spec.replicas = desiredReplica
          return obj
        end
cc @chaunceyjiang @XiShanYongYe-Chang @Poor12 @ikaven1024 Please take a look. You can follow the above steps to reproduce.
I guess the returned obj is mutated by the replicaRevision hook.
The pause field is deleted by:
https://github.com/karmada-io/karmada/blob/7e2097f1f3817c48add40b53375c7fa60fb7b8ad/pkg/resourceinterpreter/customized/declarative/luavm/lua_convert.go#L57-L63
This code was introduced in https://github.com/karmada-io/karmada/pull/2797. It does not seem to be a perfect scheme.
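For context, here is a minimal Go sketch of the general Lua-to-JSON conversion problem. It is an illustration under my own assumptions, not the implementation in lua_convert.go: an empty Lua table carries no hint about whether it began as {} or [], so any fixed default is wrong for one of the two cases.

package main

import (
	"encoding/json"
	"fmt"
)

// toJSONValue converts a generic Lua-style table (interface{} keys) into a
// value that encoding/json can marshal. Hypothetical helper for illustration only.
func toJSONValue(v interface{}) interface{} {
	t, ok := v.(map[interface{}]interface{})
	if !ok {
		return v
	}
	if len(t) == 0 {
		// Ambiguous case: picking the slice here is what turns
		// `pause: {}` into `pause: []` in the Work manifest.
		return []interface{}{}
	}
	// Keys 1..n => treat the table as an array; otherwise as an object.
	arr := make([]interface{}, 0, len(t))
	for i := 1; ; i++ {
		e, found := t[i]
		if !found {
			break
		}
		arr = append(arr, toJSONValue(e))
	}
	if len(arr) == len(t) {
		return arr
	}
	obj := make(map[string]interface{}, len(t))
	for k, val := range t {
		obj[fmt.Sprintf("%v", k)] = toJSONValue(val)
	}
	return obj
}

func main() {
	// `steps: [{setWeight: 40}, {pause: {}}]` after a round trip through Lua.
	steps := map[interface{}]interface{}{
		1: map[interface{}]interface{}{"setWeight": 40},
		2: map[interface{}]interface{}{"pause": map[interface{}]interface{}{}},
	}
	out, _ := json.Marshal(toJSONValue(steps))
	fmt.Println(string(out)) // [{"setWeight":40},{"pause":[]}]
}

With this kind of default, the second step comes back as {"pause":[]}, which matches the invalid manifest observed in the Work resource.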
I used this ResourceInterpreterCustomization and hit an error; please help: https://github.com/karmada-io/karmada/issues/4449
Seems @ikaven1024 has figured out the root cause (see the comments above) but doesn't have a solution yet.
@wxuedong As a workaround, you can use a webhook instead of ResourceInterpreterCustomization to implement the replicaRevision hook.
In favor of #4656 /assign @chaosi-zju