Error when installing 2 fissions in 2 different namespaces in the same k8s cluster
Fission/Kubernetes version
$ fission version
client:
  fission/core:
    BuildDate: "2022-09-16T13:24:57Z"
    GitCommit: b36e0516
    Version: v1.17.0
server:
  fission/core:
    BuildDate: "2022-09-16T13:24:57Z"
    GitCommit: b36e0516
    Version: v1.17.0
$ kubectl version --short
Client Version: v1.26.0
Server Version: v1.25.3
Kubernetes platform (e.g. Google Kubernetes Engine)
minikube version: v1.28.0
Describe the bug
I am trying to deploy 2 Fission installations in 2 different namespaces on the same cluster. Once I finish the deployment steps for the 2nd installation, some pods in the 1st namespace crash. I would like to know whether running 2 Fission installations in 2 different namespaces in the same cluster is even a valid use case, or whether my steps are wrong. See details below.
To Reproduce
Since I would like not to use the default fission namespace, after checking the installation documentation I used helm template to render a YAML file and then applied that file. I chose v1.17.0. The two namespaces are named aaa and bbb. Here are my steps:
In one terminal session,
% export FISSION_NAMESPACE="aaa"
% kubectl create namespace $FISSION_NAMESPACE
namespace/aaa created
% kubectl create -k "github.com/fission/fission/crds/v1?ref=v1.17.0"
customresourcedefinition.apiextensions.k8s.io/canaryconfigs.fission.io created
customresourcedefinition.apiextensions.k8s.io/environments.fission.io created
customresourcedefinition.apiextensions.k8s.io/functions.fission.io created
customresourcedefinition.apiextensions.k8s.io/httptriggers.fission.io created
customresourcedefinition.apiextensions.k8s.io/kuberneteswatchtriggers.fission.io created
customresourcedefinition.apiextensions.k8s.io/messagequeuetriggers.fission.io created
customresourcedefinition.apiextensions.k8s.io/packages.fission.io created
customresourcedefinition.apiextensions.k8s.io/timetriggers.fission.io created
% helm repo add fission-charts https://fission.github.io/fission-charts/
"fission-charts" already exists with the same configuration, skipping
% helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "fission-charts" chart repository
Update Complete. ⎈Happy Helming!⎈
% helm template --version v1.17.0 --namespace $FISSION_NAMESPACE fission \
--set serviceType=NodePort,routerServiceType=NodePort \
fission-charts/fission-all > aaa.yaml
% kubectl config set-context --current --namespace=$FISSION_NAMESPACE
Context "minikube" modified.
% kubectl apply -f aaa.yaml
namespace/fission-function created
namespace/fission-builder created
serviceaccount/fission-svc created
serviceaccount/fission-fetcher created
serviceaccount/fission-builder created
configmap/feature-config created
persistentvolumeclaim/fission-storage-pvc created
clusterrole.rbac.authorization.k8s.io/fission-fission-cr-admin created
clusterrole.rbac.authorization.k8s.io/fission-secret-configmap-getter created
clusterrole.rbac.authorization.k8s.io/fission-package-getter created
clusterrolebinding.rbac.authorization.k8s.io/fission-fission-cr-admin created
role.rbac.authorization.k8s.io/fission-fission-fetcher created
role.rbac.authorization.k8s.io/fission-fission-builder created
role.rbac.authorization.k8s.io/fission-event-fetcher created
rolebinding.rbac.authorization.k8s.io/fission-fission-fetcher created
rolebinding.rbac.authorization.k8s.io/fission-fission-builder created
rolebinding.rbac.authorization.k8s.io/fission-fission-fetcher-pod-reader created
service/controller created
service/executor created
service/router created
service/storagesvc created
deployment.apps/buildermgr created
deployment.apps/controller created
deployment.apps/executor created
deployment.apps/kubewatcher created
deployment.apps/mqtrigger-keda created
deployment.apps/router created
deployment.apps/storagesvc created
deployment.apps/timer created
job.batch/fission-fission-all-v1.17.0 created
job.batch/fission-fission-all-v1.17.0-724 created
The Job "fission-fission-all-v1.17.0" is invalid: spec.template: Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"fission-fission-all", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"fission-all", "controller-uid":"940d61e6-19b6-4f48-8066-838da0b4098d", "job-name":"fission-fission-all-v1.17.0", "release":"fission"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.PodSpec{Volumes:[]core.Volume(nil), InitContainers:[]core.Container(nil), Containers:[]core.Container{core.Container{Name:"post-upgrade-job", Image:"fission/reporter:v1.17.0", Command:[]string{"/reporter"}, Args:[]string{"event", "-c", "fission-use", "-a", "helm-post-upgrade", "-l", "fission-all-v1.17.0"}, WorkingDir:"", Ports:[]core.ContainerPort(nil), EnvFrom:[]core.EnvFromSource(nil), Env:[]core.EnvVar{core.EnvVar{Name:"GA_TRACKING_ID", Value:"UA-196546703-1", ValueFrom:(*core.EnvVarSource)(nil)}}, Resources:core.ResourceRequirements{Limits:core.ResourceList(nil), Requests:core.ResourceList(nil)}, VolumeMounts:[]core.VolumeMount(nil), VolumeDevices:[]core.VolumeDevice(nil), LivenessProbe:(*core.Probe)(nil), ReadinessProbe:(*core.Probe)(nil), StartupProbe:(*core.Probe)(nil), Lifecycle:(*core.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*core.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]core.EphemeralContainer(nil), RestartPolicy:"Never", TerminationGracePeriodSeconds:(*int64)(0x40088ee410), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"fission-svc", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", SecurityContext:(*core.PodSecurityContext)(0x4007bc0ab0), ImagePullSecrets:[]core.LocalObjectReference(nil), Hostname:"", Subdomain:"", SetHostnameAsFQDN:(*bool)(nil), Affinity:(*core.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]core.Toleration(nil), HostAliases:[]core.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), PreemptionPolicy:(*core.PreemptionPolicy)(nil), DNSConfig:(*core.PodDNSConfig)(nil), ReadinessGates:[]core.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), Overhead:core.ResourceList(nil), EnableServiceLinks:(*bool)(nil), TopologySpreadConstraints:[]core.TopologySpreadConstraint(nil), OS:(*core.PodOS)(nil)}}: field is immutable
Apart from the "field is immutable" error on the post-upgrade Job above, everything seems to be okay; no pod failures.
In another terminal session,
% export FISSION_NAMESPACE="bbb"
% kubectl create namespace $FISSION_NAMESPACE
namespace/bbb created
# since the fission CRDs are already installed, I don't have to apply them again.
% helm template --version v1.17.0 --namespace $FISSION_NAMESPACE fission \
--set serviceType=NodePort,routerServiceType=NodePort \
fission-charts/fission-all > bbb.yaml
% kubectl config set-context --current --namespace=$FISSION_NAMESPACE
Context "minikube" modified.
# I then edited bbb.yaml to change the Service "controller" nodePort from 31313 to 31323 and the Service "router" nodePort from 31314 to 31324, to avoid conflicts with the fission installation in the previous namespace "aaa" (see the excerpt below).
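# For reference, those were the only two edits in bbb.yaml (everything else was left exactly as helm template rendered it); the changed lines look roughly like this:
#   under the Service named "controller":
      nodePort: 31323   # was 31313
#   under the Service named "router":
      nodePort: 31324   # was 31314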
% kubectl apply -f bbb.yaml
namespace/fission-function created
namespace/fission-builder created
serviceaccount/fission-svc created
serviceaccount/fission-fetcher created
serviceaccount/fission-builder created
configmap/feature-config created
persistentvolumeclaim/fission-storage-pvc created
clusterrole.rbac.authorization.k8s.io/fission-fission-cr-admin created
clusterrole.rbac.authorization.k8s.io/fission-secret-configmap-getter created
clusterrole.rbac.authorization.k8s.io/fission-package-getter created
clusterrolebinding.rbac.authorization.k8s.io/fission-fission-cr-admin created
role.rbac.authorization.k8s.io/fission-fission-fetcher created
role.rbac.authorization.k8s.io/fission-fission-builder created
role.rbac.authorization.k8s.io/fission-event-fetcher created
rolebinding.rbac.authorization.k8s.io/fission-fission-fetcher created
rolebinding.rbac.authorization.k8s.io/fission-fission-builder created
rolebinding.rbac.authorization.k8s.io/fission-fission-fetcher-pod-reader created
service/controller created
service/executor created
service/router created
service/storagesvc created
deployment.apps/buildermgr created
deployment.apps/controller created
deployment.apps/executor created
deployment.apps/kubewatcher created
deployment.apps/mqtrigger-keda created
deployment.apps/router created
deployment.apps/storagesvc created
deployment.apps/timer created
job.batch/fission-fission-all-v1.17.0 created
job.batch/fission-fission-all-v1.17.0-271 created
The Job "fission-fission-all-v1.17.0" is invalid: spec.template: Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"fission-fission-all", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"fission-all", "controller-uid":"37c1451f-b0e5-4a64-9878-f1cd67b8fe02", "job-name":"fission-fission-all-v1.17.0", "release":"fission"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.PodSpec{Volumes:[]core.Volume(nil), InitContainers:[]core.Container(nil), Containers:[]core.Container{core.Container{Name:"post-upgrade-job", Image:"fission/reporter:v1.17.0", Command:[]string{"/reporter"}, Args:[]string{"event", "-c", "fission-use", "-a", "helm-post-upgrade", "-l", "fission-all-v1.17.0"}, WorkingDir:"", Ports:[]core.ContainerPort(nil), EnvFrom:[]core.EnvFromSource(nil), Env:[]core.EnvVar{core.EnvVar{Name:"GA_TRACKING_ID", Value:"UA-196546703-1", ValueFrom:(*core.EnvVarSource)(nil)}}, Resources:core.ResourceRequirements{Limits:core.ResourceList(nil), Requests:core.ResourceList(nil)}, VolumeMounts:[]core.VolumeMount(nil), VolumeDevices:[]core.VolumeDevice(nil), LivenessProbe:(*core.Probe)(nil), ReadinessProbe:(*core.Probe)(nil), StartupProbe:(*core.Probe)(nil), Lifecycle:(*core.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*core.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]core.EphemeralContainer(nil), RestartPolicy:"Never", TerminationGracePeriodSeconds:(*int64)(0x4001f73b40), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"fission-svc", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", SecurityContext:(*core.PodSecurityContext)(0x40062d9440), ImagePullSecrets:[]core.LocalObjectReference(nil), Hostname:"", Subdomain:"", SetHostnameAsFQDN:(*bool)(nil), Affinity:(*core.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]core.Toleration(nil), HostAliases:[]core.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), PreemptionPolicy:(*core.PreemptionPolicy)(nil), DNSConfig:(*core.PodDNSConfig)(nil), ReadinessGates:[]core.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), Overhead:core.ResourceList(nil), EnableServiceLinks:(*bool)(nil), TopologySpreadConstraints:[]core.TopologySpreadConstraint(nil), OS:(*core.PodOS)(nil)}}: field is immutable
Expected result
I expect 2 fission installations can co-exist in different namespaces on the same cluster without any issue.
Actual result
However, immediately after I applied bbb.yaml (the fission YAML for namespace bbb), the fission pods in namespace "bbb" were fine, but the same set of pods in namespace "aaa" seemed to be "refreshed", and the following pods in namespace "aaa" started failing: executor, kubewatcher, and timer.
Screenshots/Dump file
In namespace aaa only:
executor pod errors:
{"level":"info","ts":"2023-02-23T16:02:20.593Z","caller":"otel/provider.go:50","msg":"OTEL_EXPORTER_OTLP_ENDPOINT not set, skipping Opentelemtry tracing"}
{"level":"error","ts":"2023-02-23T16:02:50.698Z","caller":"fission-bundle/main.go:246","msg":"executor exited","error":"error waiting for CRDs: timeout waiting for CRDs","errorVerbose":"timeout waiting for CRDs\nerror waiting for CRDs\ngithub.com/fission/fission/pkg/executor.StartExecutor\n\tpkg/executor/executor.go:264\nmain.runExecutor\n\tcmd/fission-bundle/main.go:56\nmain.main\n\tcmd/fission-bundle/main.go:244\nruntime.main\n\t/opt/hostedtoolcache/go/1.19.0/x64/src/runtime/proc.go:250\nruntime.goexit\n\t/opt/hostedtoolcache/go/1.19.0/x64/src/runtime/asm_arm64.s:1165","stacktrace":"main.main\n\tcmd/fission-bundle/main.go:246\nruntime.main\n\t/opt/hostedtoolcache/go/1.19.0/x64/src/runtime/proc.go:250"}
{"level":"error","ts":"2023-02-23T16:02:50.699Z","caller":"otel/provider.go:101","msg":"error shutting down trace provider","error":"failed to load span processors","stacktrace":"github.com/fission/fission/pkg/utils/otel.InitProvider.func1\n\tpkg/utils/otel/provider.go:101\nmain.main\n\tcmd/fission-bundle/main.go:247\nruntime.main\n\t/opt/hostedtoolcache/go/1.19.0/x64/src/runtime/proc.go:250"}
kubewatcher pod error:
{"level":"info","ts":"2023-02-23T16:01:55.632Z","caller":"otel/provider.go:50","msg":"OTEL_EXPORTER_OTLP_ENDPOINT not set, skipping Opentelemtry tracing"}
{"level":"error","ts":"2023-02-23T16:02:25.738Z","caller":"fission-bundle/main.go:254","msg":"kubewatcher exited","error":"error waiting for CRDs: timeout waiting for CRDs","errorVerbose":"timeout waiting for CRDs\nerror waiting for CRDs\ngithub.com/fission/fission/pkg/kubewatcher.Start\n\tpkg/kubewatcher/main.go:37\nmain.runKubeWatcher\n\tcmd/fission-bundle/main.go:60\nmain.main\n\tcmd/fission-bundle/main.go:252\nruntime.main\n\t/opt/hostedtoolcache/go/1.19.0/x64/src/runtime/proc.go:250\nruntime.goexit\n\t/opt/hostedtoolcache/go/1.19.0/x64/src/runtime/asm_arm64.s:1165","stacktrace":"main.main\n\tcmd/fission-bundle/main.go:254\nruntime.main\n\t/opt/hostedtoolcache/go/1.19.0/x64/src/runtime/proc.go:250"}
{"level":"error","ts":"2023-02-23T16:02:25.739Z","caller":"otel/provider.go:101","msg":"error shutting down trace provider","error":"failed to load span processors","stacktrace":"github.com/fission/fission/pkg/utils/otel.InitProvider.func1\n\tpkg/utils/otel/provider.go:101\nmain.main\n\tcmd/fission-bundle/main.go:255\nruntime.main\n\t/opt/hostedtoolcache/go/1.19.0/x64/src/runtime/proc.go:250"}
timer pod error:
{"level":"info","ts":"2023-02-23T16:02:05.622Z","caller":"otel/provider.go:50","msg":"OTEL_EXPORTER_OTLP_ENDPOINT not set, skipping Opentelemtry tracing"}
{"level":"error","ts":"2023-02-23T16:02:35.727Z","caller":"fission-bundle/main.go:262","msg":"timer exited","error":"error waiting for CRDs: timeout waiting for CRDs","errorVerbose":"timeout waiting for CRDs\nerror waiting for CRDs\ngithub.com/fission/fission/pkg/timer.Start\n\tpkg/timer/main.go:37\nmain.runTimer\n\tcmd/fission-bundle/main.go:64\nmain.main\n\tcmd/fission-bundle/main.go:260\nruntime.main\n\t/opt/hostedtoolcache/go/1.19.0/x64/src/runtime/proc.go:250\nruntime.goexit\n\t/opt/hostedtoolcache/go/1.19.0/x64/src/runtime/asm_arm64.s:1165","stacktrace":"main.main\n\tcmd/fission-bundle/main.go:262\nruntime.main\n\t/opt/hostedtoolcache/go/1.19.0/x64/src/runtime/proc.go:250"}
{"level":"error","ts":"2023-02-23T16:02:35.728Z","caller":"otel/provider.go:101","msg":"error shutting down trace provider","error":"failed to load span processors","stacktrace":"github.com/fission/fission/pkg/utils/otel.InitProvider.func1\n\tpkg/utils/otel/provider.go:101\nmain.main\n\tcmd/fission-bundle/main.go:263\nruntime.main\n\t/opt/hostedtoolcache/go/1.19.0/x64/src/runtime/proc.go:250"}
The same pods in namespace bbb are fine. As an example, in the executor pod in namespace "bbb" I see:
{"level":"info","ts":"2023-02-23T15:45:47.527Z","caller":"otel/provider.go:50","msg":"OTEL_EXPORTER_OTLP_ENDPOINT not set, skipping Opentelemtry tracing"}
{"level":"warn","ts":"2023-02-23T15:45:47.535Z","caller":"executor/executor.go:281","msg":"Either configmap is not found or error reading data %v","error":"configmaps \"runtime-podspec-patch\" not found"}
{"level":"info","ts":"2023-02-23T15:45:47.535Z","caller":"executor/executor.go:285","msg":"Starting executor","instanceID":"aplgs8yw"}
{"level":"info","ts":"2023-02-23T15:45:47.535Z","logger":"generic_pool_manager.pool_pod_controller","caller":"poolmgr/poolpodcontroller.go:106","msg":"pool pod controller handlers registered"}
{"level":"info","ts":"2023-02-23T15:45:47.535Z","logger":"new_deploy","caller":"newdeploy/newdeploymgr.go:317","msg":"Newdeploy starts to clean orphaned resources","instanceID":"aplgs8yw"}
{"level":"info","ts":"2023-02-23T15:45:47.535Z","logger":"CaaF","caller":"container/containermgr.go:298","msg":"CaaF starts to clean orphaned resources","instanceID":"aplgs8yw"}
{"level":"info","ts":"2023-02-23T15:45:47.535Z","logger":"generic_pool_manager","caller":"poolmgr/gpm.go:443","msg":"Poolmanager starts to clean orphaned resources","instanceID":"aplgs8yw"}
W0223 15:45:47.536822 1 warnings.go:70] autoscaling/v2beta2 HorizontalPodAutoscaler is deprecated in v1.23+, unavailable in v1.26+; use autoscaling/v2 HorizontalPodAutoscaler
W0223 15:45:47.536864 1 warnings.go:70] autoscaling/v2beta2 HorizontalPodAutoscaler is deprecated in v1.23+, unavailable in v1.26+; use autoscaling/v2 HorizontalPodAutoscaler
{"level":"info","ts":"2023-02-23T15:45:47.539Z","logger":"executor","caller":"httpserver/server.go:17","msg":"starting server","service":"executor","addr":":8888"}
{"level":"info","ts":"2023-02-23T15:45:47.540Z","caller":"httpserver/server.go:17","msg":"starting server","service":"metrics","addr":":8080"}
{"level":"info","ts":"2023-02-23T15:45:47.640Z","logger":"generic_pool_manager.pool_pod_controller","caller":"poolmgr/poolpodcontroller.go:226","msg":"Waiting for informer caches to sync"}
{"level":"info","ts":"2023-02-23T15:45:47.640Z","logger":"generic_pool_manager.pool_pod_controller","caller":"poolmgr/poolpodcontroller.go:235","msg":"Started workers for poolPodController"}
The errors in all three failing pods result from the same cause, i.e. waiting for CRDs. I am a bit confused, since the CRDs are already installed in the k8s cluster (see the listing below and the quick RBAC check sketched after it):
% kubectl get crd
NAME CREATED AT
canaryconfigs.fission.io 2023-02-23T15:15:37Z
environments.fission.io 2023-02-23T15:15:37Z
functions.fission.io 2023-02-23T15:15:38Z
httptriggers.fission.io 2023-02-23T15:15:38Z
kuberneteswatchtriggers.fission.io 2023-02-23T15:15:38Z
messagequeuetriggers.fission.io 2023-02-23T15:15:38Z
packages.fission.io 2023-02-23T15:15:38Z
timetriggers.fission.io 2023-02-23T15:15:38Z
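(If this is an RBAC visibility problem rather than a missing CRD, something like the following could confirm whether the chart's service account in namespace aaa is allowed to read these resources; fission-svc is simply the service account the chart creates, and I have not confirmed which account each component actually runs as:
% kubectl auth can-i list functions.fission.io --as=system:serviceaccount:aaa:fission-svc -n aaa
% kubectl auth can-i watch kuberneteswatchtriggers.fission.io --as=system:serviceaccount:aaa:fission-svc -n aaa
)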
Additional context
I have a feeling this should work in theory, but for some reason the namespaces are not being set up properly. The documentation does not provide much detail on how to deploy multiple Fission installations in the same cluster, hence I'm seeking help. Due to my deployment requirements, I can't use helm itself to do the deployment (hence helm template + kubectl apply).
We have support for this in the 1.18 release.
@sanketsudake similar behavior on v1.18.0 https://github.com/fission/fission/issues/2732
@sanketsudake I tried v1.18.0 and followed the steps mentioned in this ticket.
In namespace aaa, the executor pod is still failing with the same error:
{"level":"info","ts":"2023-03-01T18:59:18.681Z","caller":"otel/provider.go:50","msg":"OTEL_EXPORTER_OTLP_ENDPOINT not set, skipping Opentelemtry tracing"}
{"level":"info","ts":"2023-03-01T18:59:18.681Z","caller":"crd/client.go:105","msg":"Waiting for CRDs to be installed"}
{"level":"error","ts":"2023-03-01T18:59:48.690Z","caller":"fission-bundle/main.go:272","msg":"executor exited","error":"error waiting for CRDs: timeout waiting for CRDs","errorVerbose":"timeout
waiting for CRDs\nerror waiting for CRDs\ngithub.com/fission/fission/pkg/executor.StartExecutor\n\tpkg/executor/executor.go:270\nmain.runExecutor\n\tcmd/fission-bundle/main.go:66\nmain.main\n\tc
md/fission-bundle/main.go:270\nruntime.main\n\t/opt/hostedtoolcache/go/1.19.4/x64/src/runtime/proc.go:250\nruntime.goexit\n\t/opt/hostedtoolcache/go/1.19.4/x64/src/runtime/asm_amd64.s:1594","sta
cktrace":"main.main\n\tcmd/fission-bundle/main.go:272\nruntime.main\n\t/opt/hostedtoolcache/go/1.19.4/x64/src/runtime/proc.go:250"}
NOTE: with v1.18.0, in namespace aaa the kubewatcher and timer pods seem fine and are not crashing.
However, in namespace aaa the buildermgr, mqtrigger-keda, router, timer and kubewatcher pods start throwing the error below in their logs after 8-10 minutes of running:
% kubectl logs -n aaa kubewatcher-586f94c7d5-zz9wq
{"level":"info","ts":"2023-03-01T15:42:02.950Z","caller":"otel/provider.go:50","msg":"OTEL_EXPORTER_OTLP_ENDPOINT not set, skipping Opentelemtry tracing"}
{"level":"info","ts":"2023-03-01T15:42:02.951Z","caller":"crd/client.go:105","msg":"Waiting for CRDs to be installed"}
E0301 15:51:39.019946 1 reflector.go:140] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: Failed to watch *v1.KubernetesWatchTrigger: unknown (get kuberneteswatchtriggers.fission.io)
W0301 15:51:40.421945 1 reflector.go:424] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: failed to list *v1.KubernetesWatchTrigger: kuberneteswatchtriggers.fission.io is forbidden: User "system:serviceaccount:aaa:fission-kubewatcher" cannot list resource "kuberneteswatchtriggers" in API group "fission.io" in the namespace "default"
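Purely to illustrate what that "forbidden" message is asking for (not a verified fix; the Role/RoleBinding names below are hypothetical), the grant it implies is missing would look roughly like:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kuberneteswatchtrigger-reader   # hypothetical name
  namespace: default
rules:
  - apiGroups: ["fission.io"]
    resources: ["kuberneteswatchtriggers"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kuberneteswatchtrigger-reader   # hypothetical name
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kuberneteswatchtrigger-reader
subjects:
  - kind: ServiceAccount
    name: fission-kubewatcher            # the account named in the error
    namespace: aaa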
% kubectl get crds
NAME CREATED AT
canaryconfigs.fission.io 2023-03-01T12:15:19Z
environments.fission.io 2023-03-01T12:15:20Z
functions.fission.io 2023-03-01T12:15:20Z
httptriggers.fission.io 2023-03-01T12:15:20Z
kuberneteswatchtriggers.fission.io 2023-03-01T12:15:20Z
messagequeuetriggers.fission.io 2023-03-01T12:15:20Z
packages.fission.io 2023-03-01T12:15:20Z
timetriggers.fission.io 2023-03-01T12:15:20Z