ingress-nginx
ModSecurity logs missing with chroot enabled
NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version):

  NGINX Ingress controller
    Release:       v1.2.0
    Build:         a2514768cd282c41f39ab06bda17efefc4bd233a
    nginx version: nginx/1.19.10
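For reference, the version output above can be collected with something like the following (a minimal sketch; the namespace and label selector are assumptions based on the Helm release shown further down):

```sh
# Pick one ingress-nginx controller pod and print its version banner.
POD=$(kubectl -n kube-system get pods \
  -l app.kubernetes.io/name=ingress-nginx \
  -o jsonpath='{.items[0].metadata.name}')
kubectl -n kube-system exec "$POD" -- /nginx-ingress-controller --version
```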
Kubernetes version (use kubectl version):

  Server Version: v1.21.9
Environment:

- Cloud provider or hardware configuration: Azure AKS
- Basic cluster related info:

    VERSION   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
    v1.21.9   Ubuntu 18.04.6 LTS   5.4.0-1078-azure   containerd://1.4.12+azure-3
    v1.21.9   Ubuntu 18.04.6 LTS   5.4.0-1078-azure   containerd://1.4.12+azure-3
- How was the ingress-nginx-controller installed:
- If helm was used then please show output of helm ls -A | grep -i ingress:

    NAME            NAMESPACE     REVISION   UPDATED                                 STATUS     CHART                 APP VERSION
    ingress-nginx   kube-system   18         2022-05-17 12:14:07.418379 +0200 CEST   deployed   ingress-nginx-4.1.1   1.2.0
- If helm was used then please show output of helm -n <ingresscontrollernamespace> get values <helmreleasename>:

  Helm values:

    controller:
      admissionWebhooks:
        patch:
          nodeSelector:
            beta.kubernetes.io/os: linux
      config:
        enable-modsecurity: true
        enable-owasp-modsecurity-crs: true
        proxy-buffer-size: 16k
        proxy-read-timeout: "100"
        ssl-dh-param: kube-system/nginx-ingress-dhparam
      image:
        chroot: true
      lifecycle:
        preStop:
          exec:
            command:
              - /bin/sh
              - -c
              - sleep 5; /usr/local/openresty/nginx/sbin/nginx -c /etc/nginx/nginx.conf -s quit; while pgrep -x nginx; do sleep 1; done
      nodeSelector:
        beta.kubernetes.io/os: linux
      replicaCount: 2
      service:
        externalTrafficPolicy: Local
      terminationGracePeriodSeconds: 600
    defaultBackend:
      nodeSelector:
        beta.kubernetes.io/os: linux
    topologySpreadConstraints:
      - labelSelector: null
        matchLabels:
          app.kubernetes.io/instance: ingress-nginx-internal
        maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: ScheduleAnyway
      - labelSelector: null
        matchLabels:
          app.kubernetes.io/instance: ingress-nginx-internal
        maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule
    updateStrategy:
      rollingUpdate:
        maxUnavailable: 0
      type: RollingUpdate
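Not part of the template, but a quick sanity check that the ModSecurity options above actually reach the controller is to grep the rendered nginx configuration. This is a sketch under assumptions: the pod selection mirrors the earlier example, and /chroot/etc/nginx/nginx.conf is only a guess at where the chroot image keeps the rendered config.

```sh
# Same pod selection as in the version example above.
POD=$(kubectl -n kube-system get pods \
  -l app.kubernetes.io/name=ingress-nginx \
  -o jsonpath='{.items[0].metadata.name}')

# Count ModSecurity directives in the regular and the (assumed) chrooted
# config paths; a missing file or a count of 0 means the directives were
# not rendered there.
kubectl -n kube-system exec "$POD" -- \
  grep -ci modsecurity /etc/nginx/nginx.conf /chroot/etc/nginx/nginx.conf
```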
What happened:

I've just enabled the chroot feature. With it enabled, ModSecurity seems to have stopped working: I can't find any logs. I've checked /var/log and /chroot/var/log; modsec_audit.log is gone, and there are no files inside the audit folder either.
What you expected to happen:

ModSecurity should work as usual with the chroot feature enabled.
How to reproduce it:

1. Install ingress-nginx on a Kubernetes cluster using the Helm values provided above (with chroot: false).
2. Exec into the container; you'll see /var/log/modsec_audit.log.
3. Change the configuration to chroot: true.
4. Exec into the container again; /var/log/modsec_audit.log no longer exists, and neither does /chroot/var/log/modsec_audit.log (a quick check for both paths is sketched below).
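A quick way to compare both candidate log locations from outside the pod (a minimal sketch, not part of the original report; the namespace, label selector, and the /var/log/audit directory are assumptions):

```sh
# Pick one controller pod, then list the audit log and audit directory in both
# the regular and the chrooted locations; missing paths will be reported as
# "No such file or directory".
POD=$(kubectl -n kube-system get pods \
  -l app.kubernetes.io/name=ingress-nginx \
  -o jsonpath='{.items[0].metadata.name}')
kubectl -n kube-system exec "$POD" -- \
  ls -ld /var/log/modsec_audit.log /var/log/audit \
         /chroot/var/log/modsec_audit.log /chroot/var/log/audit
```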
@josecu08: This issue is currently awaiting triage.
If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
I can confirm the same happens to me.
Is there an ETA to get this fixed?
Same thing here.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Hi, this is still an important and unresolved issue. @josecu08 could you reopen it please?
/reopen
@josecu08: Reopened this issue.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.