Deploying applications with exposed ports on clusters created via qovery demo command fails
Filing this against qovery-cli because the issue appeared with a demo cluster created via qovery demo up.
What happened:
I used the CLI to spin up a demo cluster to evaluate Qovery for our current tech landscape. I deployed applications with exposed ports, which should have been accessible on my local machine.
Instead, deployments of applications that had exposed ports always failed with:
✅ Deployment of Application succeeded
🚀 Deployment of router z18248cec is starting
❌ Deployment of router failed but we rollbacked it to previous safe/running version !
[...]
What you expected to happen:
Applications get deployed successfully and the attached URLs are accessible locally, i.e. routed to the local cluster's ingress.
How to reproduce it (as minimally and precisely as possible):
- Run qovery demo up.
- Deploy any application with a public port.
- The failure should occur (see the sketch below).
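For reference, a minimal sketch of the reproduction as shell steps (the application itself was deployed through the Qovery console, so that part is only a comment; any container image that declares a public port should do):

# spin up the local demo cluster, as described above
qovery demo up

# deploy any application that exposes a public port (I did this via the Qovery
# console); the application deploys fine, but the attached router deployment
# fails and is rolled back, as shown in the output quoted above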
Anything else we need to know?:
I found log lines that pointed me at the annotations. The command kubectl -n qovery logs deploy/ingress-nginx-controller --tail=200 yielded:
[...]
I0831 07:40:52.022193 7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"qovery", Name:"ingress-nginx-controller-fb8f9f4d8-vttwk", UID:"e93ddf5c-4697-40d0-ad0f-ddbcc7cb6895", APIVersion:"v1", ResourceVersion:"703", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
E0831 09:10:13.235700 7 main.go:96] "invalid ingress configuration" err="annotation group ConfigurationSnippet contains risky annotation based on ingress configuration" ingress="ze97ee6d3-dev/router-z1f15735f-api"
E0831 09:25:51.817663 7 main.go:96] "invalid ingress configuration" err="annotation group ConfigurationSnippet contains risky annotation based on ingress configuration" ingress="ze97ee6d3-dev/router-z1f15735f-api"
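To double-check which ingress the controller is rejecting, the failing object can be dumped directly. The namespace and name below are taken from the error lines above; the IDs will differ for other projects:

# show the rejected router ingress, including its snippet annotations
kubectl -n ze97ee6d3-dev get ingress router-z1f15735f-api -o yaml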
The ingress-nginx controller ConfigMap looked like this:
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  allow-snippet-annotations: "true"
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: qovery
    meta.helm.sh/release-namespace: qovery
  creationTimestamp: "2025-08-31T06:51:53Z"
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: qovery
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.13.0
    helm.sh/chart: ingress-nginx-4.13.0
  name: ingress-nginx-controller
  namespace: qovery
  resourceVersion: "3344"
  uid: 230d915d-be13-4d93-b58e-b01ea1751d22
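(The header comment above comes from viewing it via kubectl edit; a read-only way to get the same output would be:)

# dump the ingress-nginx controller ConfigMap in the qovery namespace
kubectl -n qovery get configmap ingress-nginx-controller -o yaml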
How I temporarily fixed/patched it manually:
I am absolutely not familiar with k8s, but GPT5 helped me get it to work. I added annotations-risk-level: Critical to the ConfigMap so that it looks like this:
apiVersion: v1
data:
  allow-snippet-annotations: "true"
  annotations-risk-level: Critical
[...]
Quoting GPT5 here:
Short version: with ingress-nginx ≥ 1.12, allow-snippet-annotations: "true" isn't enough; you must also set annotations-risk-level: "Critical" or snippets still get blocked.
This made the deployments succeed again and the applications were accessible locally. 🎉
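In case it helps, a quick way to verify the change took effect, assuming the same demo setup: the ConfigMap should now contain both data keys, and the "invalid ingress configuration" errors should stop appearing in the controller logs once the router is redeployed.

# confirm both data keys are present
kubectl -n qovery get configmap ingress-nginx-controller -o jsonpath='{.data}'

# re-check the controller logs with the same command as above
kubectl -n qovery logs deploy/ingress-nginx-controller --tail=50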
Given my lack of k8s/Helm chart knowledge, I was hesitant to propose any fixes/changes to the Helm chart, but this is probably the Helm chart line where it could be added: https://github.com/Qovery/qovery-chart/blob/bc738de2d373cfbdab8beb237920ee9c8bb7ad81/charts/qovery/charts/ingress-nginx/templates/controller-configmap.yaml#L17
Alternatively, create_qovery_demo.sh could be extended to patch the ConfigMap in place, e.g. as sketched below.
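A rough sketch of what such an in-place patch could look like, based only on the manual fix above (I have not tested this inside the script itself):

# add the missing key to the ingress-nginx controller ConfigMap
kubectl -n qovery patch configmap ingress-nginx-controller \
  --type merge \
  -p '{"data":{"annotations-risk-level":"Critical"}}'

# in my case the controller seemed to pick up ConfigMap changes on its own
# (see the RELOAD event in the logs above), so no restart appeared necessary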
Environment:
- Qovery version (use qovery version): 1.40.7
- OS and version: Darwin Kernel Version 24.6.0 / M4 Max / Sequoia 15.6.1
- Others:
  - Docker version 28.3.2