How to create a Helm application in multiple clusters
Please provide an in-depth description of the question you have: I want to create applications in multiple clusters with ArgoCD and Karmada. I tried to deploy Grafana from a chart, but it keeps creating secrets for the ServiceAccount:
grafana-token-xx9sq kubernetes.io/service-account-token 3 3s
grafana-token-z29d5 kubernetes.io/service-account-token 3 7s
grafana-token-zlq2m kubernetes.io/service-account-token 3 7s
grafana-token-zsrp7 kubernetes.io/service-account-token 3 9s
grafana-token-zxm25 kubernetes.io/service-account-token 3 14s
grafana-token-znxwf kubernetes.io/service-account-token 3 0s
grafana-token-zpvnh kubernetes.io/service-account-token 3 0s
grafana-token-zsx79 kubernetes.io/service-account-token 3 0s
grafana-token-s9mrp kubernetes.io/service-account-token 3 0s
grafana-token-xhdnd kubernetes.io/service-account-token 3 0s
grafana-token-9dst5 kubernetes.io/service-account-token 3 0s
grafana-token-6g2nc kubernetes.io/service-account-token 3 0s
grafana-token-vwgh6 kubernetes.io/service-account-token 3 0s
grafana-token-f9h5w kubernetes.io/service-account-token 3 0s
grafana-token-5m2dv kubernetes.io/service-account-token 3 0s
grafana-token-xtbp9 kubernetes.io/service-account-token 3 0s
grafana-token-6qbk8 kubernetes.io/service-account-token 3 0s
grafana-token-kr7gl kubernetes.io/service-account-token 3 0s
grafana-token-r7vp6 kubernetes.io/service-account-token 3 0s
grafana-token-9tmj2 kubernetes.io/service-account-token 3 0s
grafana-token-q2j9m kubernetes.io/service-account-token 3 0s
[root@node1 test]# kubectl get secrets | wc -l
4981
[root@node1 test]# kubectl get secrets | wc -l
4991
[root@node1 test]# kubectl get secrets | wc -l
5002
[root@node1 test]# kubectl get secrets | wc -l
5013
- I used this command to create it:
argocd app create grafana --repo https://charts.bitnami.com/bitnami --helm-chart grafana --revision 8.1.1 --dest-namespace default --dest-name karmada-apiserver --helm-set service.type=NodePort
- Then I applied this PropagationPolicy:
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: grafana
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: grafana
    - apiVersion: v1
      kind: Service
      name: grafana
    - apiVersion: v1
      kind: PersistentVolumeClaim
      name: grafana
    - apiVersion: v1
      kind: ConfigMap
      name: grafana-envvars
    - apiVersion: v1
      kind: Secret
      name: grafana-admin
    - apiVersion: v1
      kind: ServiceAccount
      name: grafana
  placement:
    clusterAffinity:
      clusterNames:
        - member1
I also tried another way to verify the cause of the problem:
- helm install grafana bitnami/grafana --kubeconfig /etc/karmada/karmada-apiserver.config
- applied the same PropagationPolicy as above
But the result is the same, so I think it is Karmada, not ArgoCD, that causes it.
What do you think about this question?:
How should I make the application distribute correctly? What is the problem with my method?
Environment:
- Karmada version: 1.2.0
- Kubernetes version: 1.22.9
- Others: grafana 8.1.1
I tried to deploy Grafana from a chart, but it keeps creating secrets for the ServiceAccount
Where is the secret being created repeatedly, in the karmada-apiserver or in a member cluster?
In the member cluster. All resources have been distributed successfully, but the secret grafana-token-xxxxx keeps being created.
Got it. Thanks. I guess this issue is similar to #627.
@lts0609 I'd like to ask @Poor12 for help to figure out the root cause.
I reproduced this problem in my environment. I guess your control plane and all member clusters are below Kubernetes 1.24. Simply put, the token Secret bound to the ServiceAccount on the control plane is not propagated to the corresponding member cluster, so the token controller in the member cluster thinks there is no matching token, automatically generates one, and updates the secrets field of the ServiceAccount. But for ServiceAccounts, Karmada currently adopts the strategy of keeping the control plane's version of the object, so the member cluster's modification does not take effect, and it falls into an infinite loop. The current workaround is to also propagate the token Secret that the control plane generated automatically. For example, the ServiceAccount on the control plane looks like this, and the PropagationPolicy is extended to also select its token Secret:
apiVersion: v1
automountServiceAccountToken: false
kind: ServiceAccount
metadata:
  annotations:
    meta.helm.sh/release-name: grafana
    meta.helm.sh/release-namespace: default
  creationTimestamp: "2022-09-23T09:14:28Z"
  labels:
    app.kubernetes.io/instance: grafana
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: grafana
    helm.sh/chart: grafana-8.0.0
  name: grafana
  namespace: default
  resourceVersion: "653785"
  uid: c0364717-981e-4e93-92f6-cfb7a5a741ff
secrets:
- name: grafana-admin
- name: grafana-token-qlt7v
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: grafana
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: grafana
    - apiVersion: v1
      kind: Service
      name: grafana
    - apiVersion: v1
      kind: PersistentVolumeClaim
      name: grafana
    - apiVersion: v1
      kind: ConfigMap
      name: grafana-envvars
    - apiVersion: v1
      kind: Secret
      name: grafana-admin
    - apiVersion: v1
      kind: ServiceAccount
      name: grafana
    - apiVersion: v1
      kind: Secret
      name: grafana-token-qlt7v
  placement:
    clusterAffinity:
      clusterNames:
        - member1
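Note that the name of the auto-generated token Secret (grafana-token-qlt7v above) differs on every installation. Assuming the karmada-apiserver kubeconfig path used earlier in this thread, something like the following should print the token Secret name recorded on the control-plane ServiceAccount, so it can be added to the PropagationPolicy:
kubectl get serviceaccount grafana -n default -o jsonpath='{.secrets[*].name}' --kubeconfig /etc/karmada/karmada-apiserver.config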
I think we should retain the secrets field in member clusters for ServiceAccounts.
@Poor12 I suppose minimal reproduce steps would be more helpful to understand the root cause. For example:
- Create a ServiceAccount in Karmada ==> explain what would happen underneath in Karmada.
- Create a PropagationPolicy in Karmada ==> explain what would happen on the member cluster.
- Explain what would happen in Karmada, such as retaining and recreating stuff.
- Create a ServiceAccount in Karmada: the token-controller in the Karmada control plane automatically generates a token Secret for it.
- Create a PropagationPolicy in Karmada: the ServiceAccount is created in the member cluster, but there is no corresponding token Secret there. After checking the secrets field, the token-controller in the member cluster creates a token Secret and updates the secrets field of the ServiceAccount.
- However, for a ServiceAccount that already has a secrets field, retain does not work by default, so Karmada overrides the ServiceAccount in the member cluster with the control-plane version. Based on this logic, the member cluster's token-controller keeps creating token Secrets and falls into an infinite loop. A minimal reproduction of these steps without Helm is sketched after this list.
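For that sketch, assume a control plane and member clusters below Kubernetes 1.24 (as in this thread) and a member cluster named member1; the ServiceAccount name demo-sa is purely illustrative:
# A bare ServiceAccount created through the karmada-apiserver; the control
# plane's token-controller adds a token Secret and records it in .secrets.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: demo-sa
  namespace: default
---
# Propagate only the ServiceAccount (not its token Secret); the member
# cluster's token-controller then keeps creating new token Secrets because
# Karmada keeps overwriting the .secrets field with the control-plane value.
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: demo-sa
  namespace: default
spec:
  resourceSelectors:
    - apiVersion: v1
      kind: ServiceAccount
      name: demo-sa
  placement:
    clusterAffinity:
      clusterNames:
        - member1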
If I want to create an application in Karmada, how should I determine which resources need to be set in the PropagationPolicy? For example, do all the resources displayed in the ArgoCD UI need to be set?
For a Helm application, a chart actually contains many resources. We recommend using Flux to distribute the application as a whole package; you can follow https://karmada.io/docs/userguide/cicd/working-with-flux.
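As a rough sketch of that approach (assuming the Flux source and helm controllers are already installed in the member clusters; the namespace, release name, chart version, and cluster name below are illustrative), you create the Flux objects through the karmada-apiserver and propagate them, so each member cluster renders the whole chart itself:
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: bitnami
  namespace: default
spec:
  interval: 1h
  url: https://charts.bitnami.com/bitnami
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: grafana
  namespace: default
spec:
  interval: 5m
  chart:
    spec:
      chart: grafana
      version: 8.1.1
      sourceRef:
        kind: HelmRepository
        name: bitnami
---
# Propagate only the Flux objects; the member clusters install the chart locally.
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: grafana-helm
  namespace: default
spec:
  resourceSelectors:
    - apiVersion: source.toolkit.fluxcd.io/v1beta2
      kind: HelmRepository
      name: bitnami
    - apiVersion: helm.toolkit.fluxcd.io/v2beta1
      kind: HelmRelease
      name: grafana
  placement:
    clusterAffinity:
      clusterNames:
        - member1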
Now I have upgraded Karmada to version 1.3.0, but sometimes the health status of a PVC stays in Progressing. I see similar bugs have been fixed in https://github.com/karmada-io/karmada/pull/2252 and https://github.com/karmada-io/karmada/pull/2241. Is there any problem here?
Now I have upgraded Karmada to version 1.3.0, but sometimes the health status of a PVC stays in Progressing
Could you open another issue to track this? I remember #2070, which is included in v1.3.0, handled the PVC status issue.