
Modsecurity logs missing with chroot enabled

Open josecu08 opened this issue 2 years ago • 3 comments

NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.):

NGINX Ingress controller
Release:       v1.2.0
Build:         a2514768cd282c41f39ab06bda17efefc4bd233a
nginx version: nginx/1.19.10

Kubernetes version (use kubectl version): Server Version: v1.21.9

Environment:

  • Cloud provider or hardware configuration: Azure AKS
  • Basic cluster related info:
    VERSION OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
    v1.21.9 Ubuntu 18.04.6 LTS 5.4.0-1078-azure containerd://1.4.12+azure-3
    v1.21.9 Ubuntu 18.04.6 LTS 5.4.0-1078-azure containerd://1.4.12+azure-3
  • How was the ingress-nginx-controller installed:
    • If helm was used then please show output of helm ls -A | grep -i ingress

      NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
      ingress-nginx kube-system 18 2022-05-17 12:14:07.418379 +0200 CEST deployed ingress-nginx-4.1.1 1.2.0
    • If helm was used then please show output of helm -n <ingresscontrollernamepspace> get values <helmreleasename>

      Helm values

      controller:
        admissionWebhooks:
          patch:
            nodeSelector:
              beta.kubernetes.io/os: linux
        config:
          enable-modsecurity: true
          enable-owasp-modsecurity-crs: true
          proxy-buffer-size: 16k
          proxy-read-timeout: "100"
          ssl-dh-param: kube-system/nginx-ingress-dhparam
        image:
          chroot: true
        lifecycle:
          preStop:
            exec:
              command:
              - /bin/sh
              - -c
              - sleep 5; /usr/local/openresty/nginx/sbin/nginx -c /etc/nginx/nginx.conf
                -s quit; while pgrep -x nginx; do sleep 1; done
        nodeSelector:
          beta.kubernetes.io/os: linux
        replicaCount: 2
        service:
          externalTrafficPolicy: Local
        terminationGracePeriodSeconds: 600
      defaultBackend:
        nodeSelector:
          beta.kubernetes.io/os: linux
      topologySpreadConstraints:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/instance: ingress-nginx-internal
        maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: ScheduleAnyway
      - labelSelector:
          matchLabels:
            app.kubernetes.io/instance: ingress-nginx-internal
        maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule
      updateStrategy:
        rollingUpdate:
          maxUnavailable: 0
        type: RollingUpdate
      

What happened:

I’ve just enabled the chroot feature. However, with it enabled, ModSecurity seems to have stopped working: I can’t find any logs. I’ve checked /var/log and /chroot/var/log; modsec_audit.log is gone, and there are no files inside the audit directory either.
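For anyone trying to confirm the same symptom from inside the controller pod, a quick check over both candidate paths can be sketched as below (`check_modsec_log` is just an illustrative helper, not part of the controller image; inside the cluster you would run it via something like `kubectl exec -it <controller-pod> -n kube-system -- sh`):

```shell
# Illustrative helper: report whether the ModSecurity audit log exists at
# the regular path and at the chroot-prefixed path.
check_modsec_log() {
  for p in /var/log/modsec_audit.log /chroot/var/log/modsec_audit.log; do
    if [ -f "$p" ]; then
      echo "found: $p"
    else
      echo "missing: $p"
    fi
  done
}

check_modsec_log
```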

What you expected to happen:

ModSecurity should keep working as usual with the chroot feature enabled.

How to reproduce it:

1. Install ingress-nginx on a Kubernetes cluster using the Helm values provided above, with chroot: false.
2. Exec into the controller container: /var/log/modsec_audit.log is present.
3. Change the configuration to chroot: true.
4. Exec into the container again: /var/log/modsec_audit.log no longer exists, and it does not exist at /chroot/var/log/modsec_audit.log either.
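One thing that may be worth trying (untested here against the chrooted image, so treat it purely as a sketch) is pointing ModSecurity’s audit log at an explicit path via the `modsecurity-snippet` ConfigMap option; `SecAuditLog` and `SecAuditEngine` are standard ModSecurity directives, but whether the chosen path is writable from inside the chroot is exactly the open question in this issue:

      controller:
        config:
          enable-modsecurity: true
          enable-owasp-modsecurity-crs: true
          modsecurity-snippet: |
            SecAuditEngine RelevantOnly
            SecAuditLog /var/log/modsec_audit.log

If logging resumes with an explicit `SecAuditLog`, that would narrow the problem down to the default audit-log location not resolving under the chroot.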

josecu08 avatar May 17 '22 12:05 josecu08

@josecu08: This issue is currently awaiting triage.

If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar May 17 '22 12:05 k8s-ci-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Aug 15 '22 12:08 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Sep 14 '22 13:09 k8s-triage-robot

I can confirm the same happens to me.

Is there an ETA for getting this fixed?

bluemalkin avatar Oct 14 '22 01:10 bluemalkin

Same thing here.

luislhl avatar Oct 21 '22 21:10 luislhl

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Nov 20 '22 21:11 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Nov 20 '22 21:11 k8s-ci-robot

Hi, this is still an important and unresolved issue. @josecu08 could you reopen it please?

artazar avatar Nov 22 '22 02:11 artazar

/reopen

josecu08 avatar Nov 22 '22 08:11 josecu08

@josecu08: Reopened this issue.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Nov 22 '22 08:11 k8s-ci-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Dec 22 '22 08:12 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Dec 22 '22 08:12 k8s-ci-robot