
HTTPScaledObject doesn't scale-up

ThuF opened this issue 1 year ago

Report

I have the following configuration that doesn't seem to scale up the deployment:

kind: HTTPScaledObject
apiVersion: http.keda.sh/v1alpha1
metadata:
    name: codbex-hades
    namespace: prod
spec:
    hosts:
        - hades.eu1.codbex.com
    scalingMetric:
        concurrency:
            targetValue: 10
    scaleTargetRef:
        name: codbex-hades
        kind: Deployment
        apiVersion: apps/v1
        service: codbex-hades
        port: 80
    replicas:
        min: 1
        max: 3
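
Whether the object was picked up by the operator can be checked with something like the following (a minimal sketch; it assumes kubectl access to the prod namespace):

# check that the HTTPScaledObject exists and inspect its status/events
kubectl -n prod get httpscaledobject codbex-hades
kubectl -n prod describe httpscaledobject codbex-hades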

Expected Behavior

I would expect that once I access my application and make more than 10 requests per second, the deployment would scale up.

Actual Behavior

Nothing happens; the number of pods stays at 1/1.

Steps to Reproduce the Problem

Here is the whole deployment YAML:

kind: HTTPScaledObject
apiVersion: http.keda.sh/v1alpha1
metadata:
    name: codbex-hades
    namespace: prod
spec:
    hosts:
        - hades.eu1.codbex.com
    scalingMetric:
        concurrency:
            targetValue: 10
    scaleTargetRef:
        name: codbex-hades
        kind: Deployment
        apiVersion: apps/v1
        service: codbex-hades
        port: 80
    replicas:
        min: 1
        max: 3
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: codbex-hades
  namespace: prod
spec:
  replicas: 0
  selector:
    matchLabels:
      app: codbex-hades
  template:
    metadata:
      labels:
        app: codbex-hades
    spec:
      containers:
        - name: codbex-hades
          image: ghcr.io/codbex/codbex-hades:0.63.0
          imagePullPolicy: Always
          resources:
            requests:
              memory: "0.5Gi"
              cpu: "0.25"
            limits:
              memory: "1Gi"
              cpu: "0.5"
          ports:
            - name: http
              containerPort: 80
          env:
            - name: spring.profiles.active
              value: keycloak
            - name: DIRIGIBLE_KEYCLOAK_AUTH_SERVER_URL
              value: https://auth.eu1.codbex.com/auth/realms/platform
            - name: DIRIGIBLE_KEYCLOAK_CLIENT_ID
              value: hades
            - name: DIRIGIBLE_MULTI_TENANT_MODE
              value: "false"
            - name: DIRIGIBLE_TRIAL_ENABLED
              value: "true"
---
apiVersion: v1
kind: Service
metadata:
  name: codbex-hades
  namespace: prod
  labels:
    app: codbex-hades
spec:
  ports:
    - name: http
      port: 80
  type: ClusterIP
  selector:
    app: codbex-hades
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
  name: codbex-hades
  namespace: prod
spec:
  ingressClassName: nginx
  rules:
    - host: hades.eu1.codbex.com
      http:
        paths:
          - backend:
              service:
                name: codbex-hades
                port:
                  number: 80
            path: /
            pathType: Prefix
  tls:
    - hosts:
        - hades.eu1.codbex.com
      secretName: codbex-hades-tls-secret
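
To drive enough traffic for the reproduction, a load generator can be pointed at the host; the following is a sketch assuming the hey tool is installed (any HTTP benchmarking tool works):

# ~20 requests/second for one minute, i.e. above the targetValue of 10
hey -z 1m -c 1 -q 20 https://hades.eu1.codbex.com/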

Logs from KEDA HTTP operator

No relevant logs were found when running kubectl -n keda logs keda-operator-dd878ddf6-g28c5 -f
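
Note that the HTTP add-on runs its own components next to the core KEDA operator, and their logs are usually more relevant here. The deployment names below assume a default Helm install of the add-on and may differ in other setups:

# operator reconciles HTTPScaledObjects; interceptor proxies and counts traffic;
# the external scaler feeds those metrics to KEDA
kubectl -n keda logs deploy/keda-add-ons-http-operator
kubectl -n keda logs deploy/keda-add-ons-http-interceptor
kubectl -n keda logs deploy/keda-add-ons-http-external-scaler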

HTTP Add-on Version

0.8.0

Kubernetes Version

1.29

Platform

Amazon Web Services

Anything else?

No response

ThuF avatar Jun 02 '24 08:06 ThuF

Hello! With concurrency you are scaling on the instantaneous number of concurrent connections, not on an aggregation over time (seconds, minutes, etc.). Since the expectation here is about requests per second, I'd use this config:

kind: HTTPScaledObject
apiVersion: http.keda.sh/v1alpha1
metadata:
    name: codbex-hades
    namespace: prod
spec:
    hosts:
        - hades.eu1.codbex.com
    scalingMetric:
        requestRate:
            granularity: 1s
            targetValue: 10
            window: 1m
    scaleTargetRef:
        name: codbex-hades
        kind: Deployment
        apiVersion: apps/v1
        service: codbex-hades
        port: 80
    replicas:
        min: 1
        max: 3
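
Roughly speaking, requestRate makes the interceptor count requests at the configured granularity (1s) and aggregate them over the window (1m), with targetValue being the per-replica request rate the autoscaler aims for. Once applied, the effect can be observed under load (the ScaledObject and HPA are generated by the add-on, so their exact names may vary):

# list the generated ScaledObject, then watch the HPA react under load
kubectl -n prod get scaledobject
kubectl -n prod get hpa -w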

JorTurFer avatar Jun 03 '24 06:06 JorTurFer

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.

stale[bot] avatar Aug 02 '24 17:08 stale[bot]

This issue has been automatically closed due to inactivity.

stale[bot] avatar Aug 10 '24 13:08 stale[bot]