Auto scaler does not work with KEDA
(This is more likely a documentation issue)
Output of `helm version`:

```
version.BuildInfo{Version:"v3.0.1", GitCommit:"7c22ef9ce89e0ebeb7125ba2ebf7d421f3e82ffa", GitTreeState:"clean", GoVersion:"go1.13.4"}
```
Output of `kubectl version`:

```
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:54Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.7", GitCommit:"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4", GitTreeState:"clean", BuildDate:"2019-12-13T18:46:24Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
```
Cloud Provider/Platform (AKS, GKE, Minikube etc.): AKS
Describe the bug
The autoscaler trait doc mentions that KEDA can be used as an HPA controller. This implies there is some integration between Rudr and KEDA (but I don't think that's the case): autoscaling does not work after installing KEDA and creating the Rudr component and application configuration (with the autoscaler trait).
OAM YAML files used
Component
```yaml
apiVersion: core.oam.dev/v1alpha1
kind: ComponentSchematic
metadata:
  name: hpa-example-replicated-v1
spec:
  workloadType: core.oam.dev/v1alpha1.Server
  containers:
    - name: server
      image: k8s.gcr.io/hpa-example:latest
      ports:
        - name: http
          containerPort: 80
          protocol: TCP
      resources:
        cpu:
          required: 0.5
        memory:
          required: "128"
```
App config
```yaml
apiVersion: core.oam.dev/v1alpha1
kind: ApplicationConfiguration
metadata:
  name: autoscaler-example
spec:
  components:
    - componentName: hpa-example-replicated-v1
      instanceName: autoscaled-repsvc
      parameterValues:
        - name: poet
          value: Eliot
        - name: poem
          value: The Wasteland
      traits:
        - name: auto-scaler
          properties:
            maximum: 6
            minimum: 2
            cpu: 10
            memory: 10
```
What happened:
The KEDA resources were not created, and autoscaling does not work.
What you expected to happen:
If Rudr integrates (or works) with KEDA, it should create a ScaledObject and the other KEDA resources needed for autoscaling to work.
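For illustration, if Rudr did emit KEDA resources, a ScaledObject for this instance would look roughly like the sketch below. This is hand-written, not anything Rudr produces; the trigger and its metadata (the Prometheus server address, metric name, and query) are placeholders, since the autoscaler trait's cpu/memory properties don't map directly onto KEDA's event-source triggers:

```yaml
# Hypothetical sketch of a KEDA v1 ScaledObject for the instance above.
# The prometheus trigger and all its metadata values are made-up placeholders.
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: autoscaled-repsvc
spec:
  scaleTargetRef:
    deploymentName: autoscaled-repsvc   # assumes the Server workload is backed by a Deployment
  minReplicaCount: 2                    # from the trait's "minimum"
  maxReplicaCount: 6                    # from the trait's "maximum"
  triggers:
    - type: prometheus                  # placeholder event source; KEDA 1.x has no cpu/memory trigger
      metadata:
        serverAddress: http://prometheus.example:9090
        metricName: http_requests_total
        threshold: "10"
        query: sum(rate(http_requests_total[2m]))
```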
Relevant screenshots:
How to reproduce it (as minimally and precisely as possible):
Install KEDA (as per docs). Create component and app config (pasted above).
Anything else we need to know:
This is more likely a documentation issue? KEDA does not take CPU into account, but the autoscaler trait only accepts cpu and memory.
Did the HPA creation succeed? Try `kubectl get hpa` (see https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#how-does-the-horizontal-pod-autoscaler-work)
I tried with the example here https://github.com/oam-dev/rudr/blob/ac239fcfd2056a9e593b90737a3c8a582b38346f/examples/autoscaler.yaml and it seems to work fine
Note that your cluster needs to be configured to autoscale
> Did the HPA creation succeed? Try `kubectl get hpa`

No, the HPA was not created.
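For comparison, the HPA the autoscaler trait is supposed to create would look something like the sketch below (hand-written, not output from Rudr; it assumes the trait's cpu/memory values become `targetAverageUtilization` percentages on an `autoscaling/v2beta1` HorizontalPodAutoscaler, and that the Server workload is backed by a Deployment named after the instance):

```yaml
# Hypothetical sketch of the HPA the trait would create; names and field mapping are assumptions.
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: autoscaled-repsvc
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: autoscaled-repsvc
  minReplicas: 2          # trait "minimum"
  maxReplicas: 6          # trait "maximum"
  metrics:
    - type: Resource
      resource:
        name: cpu
        targetAverageUtilization: 10    # trait "cpu"
    - type: Resource
      resource:
        name: memory
        targetAverageUtilization: 10    # trait "memory"
```

If `kubectl get hpa` returns nothing for the instance, the trait never produced this resource at all, which points at the controller rather than the HPA configuration.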
> I tried with the example here https://github.com/oam-dev/rudr/blob/ac239fcfd2056a9e593b90737a3c8a582b38346f/examples/autoscaler.yaml and it seems to work fine

Which autoscaler did you use?
> Note that your cluster needs to be configured to autoscale

I did configure KEDA, as mentioned in the issue details.
As mentioned in the issue details, I am not sure whether Rudr is built to work with KEDA natively (I may be wrong, so please correct me and point me to the right resources to understand this better). Updating the documentation regarding Rudr and KEDA to reflect this might make sense (this is not really a bug, IMO).
I think KEDA, just like Rudr, pulls and processes its own custom resources. Rudr would queue KEDA CRs for the KEDA controller to process. I haven't checked the code, but this should be the relationship between the two.