exec: Lua support was not enabled in v1.0.0 ingress-nginx version
NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version):
-------------------------------------------------------------------------------
NGINX Ingress controller
Release: v1.0.0
Build: 041eb167c7bfccb1d1653f194924b0c5fd885e10
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.20.1
-------------------------------------------------------------------------------
Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.0", GitCommit:"c2b5237ccd9c0f1d600d3072634ca66cefdf272f", GitTreeState:"clean", BuildDate:"2021-08-04T18:03:20Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.12", GitCommit:"4bf2e32bb2b9fdeea19ff7cdc1fb51fb295ec407", GitTreeState:"clean", BuildDate:"2021-10-27T17:07:18Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (1.22) and server (1.20) exceeds the supported minor version skew of +/-1
Environment:
- Cloud provider or hardware configuration:
- OS (e.g. from /etc/os-release):
NAME="Ubuntu"
VERSION="20.04.3 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.3 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
- Kernel (e.g. uname -a): 5.11.0-40-generic
- Install tools:
  - How/where was the cluster created (kubeadm/kops/minikube/kind etc.): AWS EKS
- Basic cluster related info:
  - kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.0", GitCommit:"c2b5237ccd9c0f1d600d3072634ca66cefdf272f", GitTreeState:"clean", BuildDate:"2021-08-04T18:03:20Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.12", GitCommit:"4bf2e32bb2b9fdeea19ff7cdc1fb51fb295ec407", GitTreeState:"clean", BuildDate:"2021-10-27T17:07:18Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (1.22) and server (1.20) exceeds the supported minor version skew of +/-1
  - kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
qa1-k8s-actionnodes-1 Ready <none> 5d23h v1.20.12 10.135.30.33 <none> Ubuntu 20.04.3 LTS 5.4.0-88-generic containerd://1.5.5
qa1-k8s-actionnodes-2 Ready <none> 5d23h v1.20.12 10.135.30.34 <none> Ubuntu 20.04.3 LTS 5.4.0-88-generic containerd://1.5.5
qa1-k8s-master-1 Ready control-plane,master 17d v1.20.12 10.135.20.31 <none> Ubuntu 20.04.3 LTS 5.4.0-88-generic containerd://1.5.5
qa1-k8s-master-2 Ready control-plane,master 17d v1.20.12 10.135.20.240 <none> Ubuntu 20.04.3 LTS 5.4.0-88-generic containerd://1.5.5
qa1-k8s-master-3 Ready control-plane,master 17d v1.20.12 10.135.20.41 <none> Ubuntu 20.04.3 LTS 5.4.0-88-generic containerd://1.5.5
qa1-k8s-regular-1 Ready <none> 6d12h v1.20.12 10.135.20.181 <none> Ubuntu 20.04.3 LTS 5.4.0-88-generic containerd://1.5.5
qa1-k8s-regular-2 Ready <none> 6d12h v1.20.12 10.135.20.139 <none> Ubuntu 20.04.3 LTS 5.4.0-88-generic containerd://1.5.5
qa1-k8s-regular-3 Ready <none> 6d12h v1.20.12 10.135.20.213 <none> Ubuntu 20.04.3 LTS 5.4.0-88-generic containerd://1.5.5
qa1-k8s-regular-4 Ready <none> 6d12h v1.20.12 10.135.20.236 <none> Ubuntu 20.04.3 LTS 5.4.0-88-generic containerd://1.5.5
qa1-k8s-regular-5 Ready <none> 6d12h v1.20.12 10.135.20.46 <none> Ubuntu 20.04.3 LTS 5.4.0-88-generic containerd://1.5.5
qa1-k8s-regular-6 Ready <none> 6d12h v1.20.12 10.135.20.215 <none> Ubuntu 20.04.3 LTS 5.4.0-88-generic containerd://1.5.5
qa1-k8s-regular-7 Ready <none> 6d12h v1.20.12 10.135.20.77 <none> Ubuntu 20.04.3 LTS 5.4.0-88-generic containerd://1.5.5
qa1-k8s-regular-8 Ready <none> 6d12h v1.20.12 10.135.20.252 <none> Ubuntu 20.04.3 LTS 5.4.0-88-generic containerd://1.5.5
How was the ingress-nginx-controller installed:
- If helm was used then please show output of helm ls -A | grep -i ingress:
  external ingress-nginx 3 2021-11-09 21:25:38.992412139 +0000 UTC deployed ingress-nginx-4.0.1 1.0.0
- If helm was used then please show output of helm -n <ingresscontrollernamespace> get values <helmreleasename>:
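The helm ls output above shows the release is named external and lives in the ingress-nginx namespace, so the concrete form of that command would presumably be:

```bash
# Concrete form of the placeholder command above, assuming release "external" in namespace "ingress-nginx"
helm -n ingress-nginx get values external
```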
USER-SUPPLIED VALUES:
controller:
admissionWebhooks:
annotations: {}
certificate: /usr/local/certificates/cert
enabled: true
failurePolicy: Fail
key: /usr/local/certificates/key
namespaceSelector: {}
objectSelector: {}
patch:
enabled: true
image:
pullPolicy: IfNotPresent
nodeSelector: {}
podAnnotations: {}
priorityClassName: ""
runAsUser: 2000
tolerations: []
port: 8443
service:
annotations: {}
externalIPs: []
loadBalancerSourceRanges: []
servicePort: 443
type: ClusterIP
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app.kubernetes.io/name
operator: In
values:
- ingress-nginx
- key: app.kubernetes.io/instance
operator: In
values:
- ingress-nginx
- key: app.kubernetes.io/component
operator: In
values:
- controller
topologyKey: kubernetes.io/hostname
annotations: {}
autoscaling:
enabled: false
maxReplicas: 0
minReplicas: 0
targetCPUUtilizationPercentage: 50
targetMemoryUtilizationPercentage: 50
autoscalingTemplate: []
config:
enable-modsecurity: true
main-snippet: |
env SMTP_USER;
env SMTP_PASSWORD;
env SMTP_EMAIL;
env SMTP_HOST;
env SMTP_PORT;
env SMTP_RCPT;
env CLUSTER_NAME;
env INGRESS_TYPE;
modsecurity-snippet: |
Include /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf
SecDebugLog /var/log/modsec_debug.log
SecDebugLogLevel 3
SecRule HIGHEST_SEVERITY "@le 2" "chain, log, id:23, pass, phase:2"
SecRule REMOTE_ADDR "!@ipMatch 127.0.0.1" "setenv:SERVER=%{SERVER_ADDR}, setenv:REMOTEIP=%{REMOTE_ADDR}, setenv:REQUEST_URI=%{REQUEST_URI},setenv:ARGS=%{ARGS}, setenv:UNIQUEID=%{UNIQUE_ID}, setenv:REQUEST_LINE=%{REQUEST_LINE}, setenv:REQUEST_PROTOCOL=%{REQUEST_PROTOCOL}, setenv:TIME_DAY=%{TIME_DAY}, setenv:TIME=%{TIME}, setenv:TIME_MONTH=%{TIME_MON}, setenv:REQUEST_HEADERS=%{REQUEST_HEADERS}, setenv:REQUEST_BODY=%{REQUEST_BODY}, setenv:SEVERITY=%{HIGHEST_SEVERITY}, ctl:auditLogParts=+BEK, exec:/etc/nginx/modsec_alert.lua"
proxy-body-size: 10m
use-http2: false
use-proxy-protocol: true
configMapNamespace: ""
customTemplate:
configMapKey: ""
configMapName: ""
dnsPolicy: ClusterFirst
electionID: external-controller-leader
enableMimalloc: true
extraArgs: {}
extraContainers: []
extraEnvs: []
extraInitContainers: []
extraVolumeMounts: []
extraVolumes: []
healthCheckPath: /healthz
hostNetwork: true
hostPort:
enabled: false
ports:
http: 80
https: 443
image:
digest: ""
image: ingress-nginx/controller
registry: k8s.gcr.io
tag: v1.0.0
ingressClass: nginx-demo-external
ingressClassByName: true
ingressClassResource:
controllerValue: k8s.io/nginx-external
enabled: true
name: nginx-external
keda:
apiVersion: keda.sh/v1alpha1
behavior: {}
cooldownPeriod: 300
enabled: false
maxReplicas: 11
minReplicas: 1
pollingInterval: 30
restoreToOriginalReplicaCount: false
scaledObject:
annotations: {}
triggers: []
kind: Deployment
labels: {}
lifecycle:
preStop:
exec:
command:
- /wait-shutdown
livenessProbe:
failureThreshold: 5
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
maxmindLicenseKey: ""
metrics:
enabled: true
port: 10254
prometheusRule:
additionalLabels: {}
enabled: false
rules: []
service:
annotations:
prometheus.io/port: "10254"
prometheus.io/scrape: "true"
externalIPs: []
loadBalancerSourceRanges: []
servicePort: 10254
type: ClusterIP
serviceMonitor:
additionalLabels: {}
enabled: false
metricRelabelings: []
namespace: ""
namespaceSelector: {}
scrapeInterval: 30s
targetLabels: []
minAvailable: 1
minReadySeconds: 0
name: controller
nodeSelector:
kubernetes.io/os: linux
podAnnotations: {}
podLabels: {}
podSecurityContext: {}
priorityClassName: ""
publishService:
enabled: true
pathOverride: ""
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
replicaCount: 1
reportNodeInternalIp: false
resources:
limits:
cpu: 300m
memory: 300Mi
requests:
cpu: 200m
memory: 200Mi
scope:
enabled: false
namespace: ""
service:
annotations:
loadbalancer.openstack.org/class: external
loadbalancer.openstack.org/proxy-protocol: "true"
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
service.beta.kubernetes.io/aws-load-balancer-internal: "false"
service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
service.beta.kubernetes.io/openstack-internal-load-balancer: "false"
enableHttp: true
enableHttps: true
enabled: true
externalIPs: []
externalTrafficPolicy: Local
internal:
annotations: {}
enabled: false
loadBalancerSourceRanges: []
labels: {}
loadBalancerSourceRanges: []
ports:
http: 80
https: 443
targetPorts:
http: http
https: https
type: LoadBalancer
sysctls: {}
tcp:
annotations: {}
configMapNamespace: ""
terminationGracePeriodSeconds: 300
tolerations: []
topologySpreadConstraints: []
udp:
annotations: {}
configMapNamespace: ""
updateStrategy:
rollingUpdate:
maxUnavailable: 1
type: RollingUpdate
defaultBackend:
affinity: {}
autoscaling:
enabled: false
maxReplicas: 2
minReplicas: 1
targetCPUUtilizationPercentage: 50
targetMemoryUtilizationPercentage: 50
enabled: true
extraArgs: {}
extraEnvs: []
extraVolumeMounts: []
extraVolumes: []
image:
allowPrivilegeEscalation: false
digest: ""
pullPolicy: IfNotPresent
readOnlyRootFilesystem: true
repository: my/applications/ingress-default-backend
tag: "0.4"
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
minAvailable: 1
name: defaultbackend
nodeSelector: {}
podAnnotations:
prometheus.io/path: /metrics
prometheus.io/port: "8080"
prometheus.io/scrape: "true"
podLabels: {}
podSecurityContext: {}
port: 8080
priorityClassName: ""
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 8080
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
replicaCount: 1
resources:
limits:
cpu: 100m
memory: 60Mi
requests:
cpu: 50m
memory: 60Mi
service:
annotations: {}
externalIPs: []
loadBalancerSourceRanges: []
servicePort: 80
type: ClusterIP
serviceAccount:
automountServiceAccountToken: true
create: true
name: ""
tolerations: []
dhParam: LS0tLS1CRUdJTiBESCBQQVJBTU.........U0RGtMUURGY1B0SWROOE5KNTVRMEdDZTlBcEtMcTQzN0JYL3hDbjBKa1dzQ0FRST0KLS0tLS1FTkQgREggUEFSQU1FVEVSUy0tLS0tCg==
imagePullSecrets: []
podSecurityPolicy:
enabled: false
rbac:
create: true
scope: false
revisionHistoryLimit: 10
serviceAccount:
automountServiceAccountToken: true
create: true
name: ""
tcp: {}
udp: {}
- If helm was not used, then copy/paste the complete precise command used to install the controller, along with the flags and options used
- If you have more than one instance of the ingress-nginx-controller installed in the same cluster, please provide details for all the instances

Current State of the controller:
- kubectl describe ingressclasses
- kubectl -n <ingresscontrollernamespace> get all -A -o wide
- kubectl -n <ingresscontrollernamespace> describe po <ingresscontrollerpodname>
- kubectl -n <ingresscontrollernamespace> describe svc <ingresscontrollerservicename>

Current state of ingress object, if applicable:
- kubectl -n <appnamespace> get all,ing -o wide
- kubectl -n <appnamespace> describe ing <ingressname>
- If applicable, your complete and exact curl/grpcurl command (redacted if required) and the response to the curl/grpcurl command with the -v flag

Others:
- Any other related information, such as:
  - copy/paste of the snippet (if applicable)
  - kubectl describe ... of any custom configmap(s) created and in use
  - Any other related information that may help
What happened:
We previously had ingress-nginx v0.34.1 with ModSecurity 2.x and the below SecRule:
SecRule HIGHEST_SEVERITY "@le 2" "chain, log, id:23, pass, phase:2"
SecRule REMOTE_ADDR "!@ipMatch 127.0.0.1" "setenv:SERVER=%{SERVER_ADDR}, setenv:REMOTEIP=%{REMOTE_ADDR}, setenv:REQUEST_URI=%{REQUEST_URI},setenv:ARGS=%{ARGS}, setenv:UNIQUEID=%{UNIQUE_ID}, setenv:REQUEST_LINE=%{REQUEST_LINE}, setenv:REQUEST_PROTOCOL=%{REQUEST_PROTOCOL}, setenv:TIME_DAY=%{TIME_DAY}, setenv:TIME=%{TIME}, setenv:TIME_MONTH=%{TIME_MON}, setenv:REQUEST_HEADERS=%{REQUEST_HEADERS}, setenv:REQUEST_BODY=%{REQUEST_BODY}, setenv:SEVERITY=%{HIGHEST_SEVERITY}, ctl:auditLogParts=+BEK, exec:/etc/nginx/modsec_alert.lua"';
We were trying to execute a Lua script from the SecRule. On v0.34.1 the Lua script execution was working fine, but when we upgraded the nginx ingress controller image to v1.0.0, which ships ModSecurity 3.3.2, it gives the following error:
2021/11/15 08:00:23 [emerg] 60#60: "modsecurity_rules" directive Rules error. File: <<reference missing or not informed>>. Line: 6. Column: 41. exec: Lua support was not enabled. in /tmp/nginx.conf:137
nginx: [emerg] "modsecurity_rules" directive Rules error. File: <<reference missing or not informed>>. Line: 6. Column: 41. exec: Lua support was not enabled. in /tmp/nginx.conf:137
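The error suggests that the libmodsecurity bundled in the v1.0.0 controller image was built without Lua support, so the exec action cannot run the script. A minimal sketch of how one might confirm this from a running controller pod is below; the namespace, label selector, library path, and the availability of find/ldd inside the image are all assumptions and may need adjusting for your install:

```bash
# Sketch: check whether the bundled libmodsecurity was linked against a Lua library.
# Namespace, label selector and library path below are assumptions, not verified values.
POD=$(kubectl -n ingress-nginx get pods \
  -l app.kubernetes.io/component=controller -o name | head -n1)

# Locate the library inside the controller image (the path can differ between releases).
kubectl -n ingress-nginx exec "$POD" -- sh -c 'find / -name "libmodsecurity.so*" 2>/dev/null'

# If no lua library appears among the dynamic dependencies, ModSecurity has no Lua support
# and any SecRule using the exec action will fail exactly as shown above.
kubectl -n ingress-nginx exec "$POD" -- sh -c \
  'ldd /usr/local/lib/libmodsecurity.so.3 2>/dev/null | grep -i lua || echo "no Lua linkage found"'
```

If that check comes back empty, the rule itself is not at fault; the fix would have to come from an image whose libmodsecurity is compiled with Lua enabled.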
What you expected to happen:
We expect the nginx reload to work fine when we add our custom SecRule, which was working fine on ingress-nginx v0.34.1.
Our SecRule:
SecRule HIGHEST_SEVERITY "@le 2" "chain, log, id:23, pass, phase:2"
SecRule REMOTE_ADDR "!@ipMatch 127.0.0.1" "setenv:SERVER=%{SERVER_ADDR}, setenv:REMOTEIP=%{REMOTE_ADDR}, setenv:REQUEST_URI=%{REQUEST_URI},setenv:ARGS=%{ARGS}, setenv:UNIQUEID=%{UNIQUE_ID}, setenv:REQUEST_LINE=%{REQUEST_LINE}, setenv:REQUEST_PROTOCOL=%{REQUEST_PROTOCOL}, setenv:TIME_DAY=%{TIME_DAY}, setenv:TIME=%{TIME}, setenv:TIME_MONTH=%{TIME_MON}, setenv:REQUEST_HEADERS=%{REQUEST_HEADERS}, setenv:REQUEST_BODY=%{REQUEST_BODY}, setenv:SEVERITY=%{HIGHEST_SEVERITY}, ctl:auditLogParts=+BEK, exec:/etc/nginx/modsec_alert.lua"';
How to reproduce it:
Anything else we need to know:
nginx.conf (the file we are applying on reload):
# Configuration checksum: 7326337200184773476
# setup custom paths that do not require root access
pid /tmp/nginx.pid;
load_module /etc/nginx/modules/ngx_http_modsecurity_module.so;
daemon off;
worker_processes 1;
worker_rlimit_nofile 1047552;
worker_shutdown_timeout 240s ;
env SMTP_USER;
env SMTP_PASSWORD;
env SMTP_EMAIL;
env SMTP_HOST;
env SMTP_PORT;
env SMTP_RCPT;
env CLUSTER_NAME;
env INGRESS_TYPE;
events {
multi_accept on;
worker_connections 16384;
use epoll;
}
http {
lua_package_path "/etc/nginx/lua/?.lua;;";
lua_shared_dict balancer_ewma 10M;
lua_shared_dict balancer_ewma_last_touched_at 10M;
lua_shared_dict balancer_ewma_locks 1M;
lua_shared_dict certificate_data 20M;
lua_shared_dict certificate_servers 5M;
lua_shared_dict configuration_data 20M;
lua_shared_dict global_throttle_cache 10M;
lua_shared_dict ocsp_response_cache 5M;
init_by_lua_block {
collectgarbage("collect")
-- init modules
local ok, res
ok, res = pcall(require, "lua_ingress")
if not ok then
error("require failed: " .. tostring(res))
else
lua_ingress = res
lua_ingress.set_config({
use_forwarded_headers = false,
use_proxy_protocol = true,
is_ssl_passthrough_enabled = false,
http_redirect_code = 308,
listen_ports = { ssl_proxy = "442", https = "443" },
hsts = true,
hsts_max_age = 15724800,
hsts_include_subdomains = true,
hsts_preload = false,
global_throttle = {
memcached = {
host = "", port = 11211, connect_timeout = 50, max_idle_timeout = 10000, pool_size = 50,
},
status_code = 429,
}
})
end
ok, res = pcall(require, "configuration")
if not ok then
error("require failed: " .. tostring(res))
else
configuration = res
configuration.prohibited_localhost_port = '10246'
end
ok, res = pcall(require, "balancer")
if not ok then
error("require failed: " .. tostring(res))
else
balancer = res
end
ok, res = pcall(require, "monitor")
if not ok then
error("require failed: " .. tostring(res))
else
monitor = res
end
ok, res = pcall(require, "certificate")
if not ok then
error("require failed: " .. tostring(res))
else
certificate = res
certificate.is_ocsp_stapling_enabled = false
end
ok, res = pcall(require, "plugins")
if not ok then
error("require failed: " .. tostring(res))
else
plugins = res
end
-- load all plugins that'll be used here
plugins.init({ })
}
init_worker_by_lua_block {
lua_ingress.init_worker()
balancer.init_worker()
monitor.init_worker(10000)
plugins.run()
}
real_ip_header proxy_protocol;
real_ip_recursive on;
set_real_ip_from 0.0.0.0/0;
modsecurity on;
modsecurity_rules '
Include /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf
SecDebugLog /var/log/modsec_debug.log
SecDebugLogLevel 3
SecRule HIGHEST_SEVERITY "@le 2" "chain, log, id:23, pass, phase:2"
SecRule REMOTE_ADDR "!@ipMatch 127.0.0.1" "setenv:SERVER=%{SERVER_ADDR}, setenv:REMOTEIP=%{REMOTE_ADDR}, setenv:REQUEST_URI=%{REQUEST_URI},setenv:ARGS=%{ARGS}, setenv:UNIQUEID=%{UNIQUE_ID}, setenv:REQUEST_LINE=%{REQUEST_LINE}, setenv:REQUEST_PROTOCOL=%{REQUEST_PROTOCOL}, setenv:TIME_DAY=%{TIME_DAY}, setenv:TIME=%{TIME}, setenv:TIME_MONTH=%{TIME_MON}, setenv:REQUEST_HEADERS=%{REQUEST_HEADERS}, setenv:REQUEST_BODY=%{REQUEST_BODY}, setenv:SEVERITY=%{HIGHEST_SEVERITY}, ctl:auditLogParts=+BEK, exec:/etc/nginx/modsec_alert.lua"';
modsecurity_rules_file /etc/nginx/modsecurity/modsecurity.conf;
geoip_country /etc/nginx/geoip/GeoIP.dat;
geoip_city /etc/nginx/geoip/GeoLiteCity.dat;
geoip_org /etc/nginx/geoip/GeoIPASNum.dat;
geoip_proxy_recursive on;
aio threads;
aio_write on;
tcp_nopush on;
tcp_nodelay on;
log_subrequest on;
reset_timedout_connection on;
keepalive_timeout 75s;
keepalive_requests 100;
client_body_temp_path /tmp/client-body;
fastcgi_temp_path /tmp/fastcgi-temp;
proxy_temp_path /tmp/proxy-temp;
ajp_temp_path /tmp/ajp-temp;
client_header_buffer_size 1k;
client_header_timeout 60s;
large_client_header_buffers 4 8k;
client_body_buffer_size 8k;
client_body_timeout 60s;
http2_max_field_size 4k;
http2_max_header_size 16k;
http2_max_requests 1000;
http2_max_concurrent_streams 128;
types_hash_max_size 2048;
server_names_hash_max_size 1024;
server_names_hash_bucket_size 32;
map_hash_bucket_size 64;
proxy_headers_hash_max_size 512;
proxy_headers_hash_bucket_size 64;
variables_hash_bucket_size 256;
variables_hash_max_size 2048;
underscores_in_headers off;
ignore_invalid_headers on;
limit_req_status 503;
limit_conn_status 503;
include /etc/nginx/mime.types;
default_type text/html;
# Custom headers for response
server_tokens off;
more_clear_headers Server;
# disable warnings
uninitialized_variable_warn off;
# Additional available variables:
# $namespace
# $ingress_name
# $service_name
# $service_port
log_format upstreaminfo '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_length $request_time [$proxy_upstream_name] [$proxy_alternative_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status $req_id';
map $request_uri $loggable {
default 1;
}
access_log /var/log/nginx/access.log upstreaminfo if=$loggable;
error_log /var/log/nginx/error.log notice;
resolver 10.128.10.10 valid=30s;
# See https://www.nginx.com/blog/websocket-nginx
map $http_upgrade $connection_upgrade {
default upgrade;
# See http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive
'' '';
}
# Reverse proxies can detect if a client provides a X-Request-ID header, and pass it on to the backend server.
# If no such header is provided, it can provide a random value.
map $http_x_request_id $req_id {
default $http_x_request_id;
"" $request_id;
}
# Create a variable that contains the literal $ character.
# This works because the geo module will not resolve variables.
geo $literal_dollar {
default "$";
}
server_name_in_redirect off;
port_in_redirect off;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_early_data off;
# turn on session caching to drastically improve performance
ssl_session_cache builtin:1000 shared:SSL:10m;
ssl_session_timeout 10m;
# allow configuring ssl session tickets
ssl_session_tickets off;
# slightly reduce the time-to-first-byte
ssl_buffer_size 4k;
# allow configuring custom ssl ciphers
ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384';
ssl_prefer_server_ciphers on;
# allow custom DH file http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_dhparam
ssl_dhparam /etc/ingress-controller/ssl/ingress-nginx-demo-external-ingress-nginx-controller.pem;
ssl_ecdh_curve auto;
# PEM sha: 6e7ad99a88f20feeb28e02aa67d95e78eb9dced5
ssl_certificate /etc/ingress-controller/ssl/default-fake-certificate.pem;
ssl_certificate_key /etc/ingress-controller/ssl/default-fake-certificate.pem;
proxy_ssl_session_reuse on;
upstream upstream_balancer {
### Attention!!!
#
# We no longer create "upstream" section for every backend.
# Backends are handled dynamically using Lua. If you would like to debug
# and see what backends ingress-nginx has in its memory you can
# install our kubectl plugin https://kubernetes.github.io/ingress-nginx/kubectl-plugin.
# Once you have the plugin you can use "kubectl ingress-nginx backends" command to
# inspect current backends.
#
###
server 0.0.0.1; # placeholder
balancer_by_lua_block {
balancer.balance()
}
keepalive 320;
keepalive_timeout 60s;
keepalive_requests 10000;
}
# Cache for internal auth checks
proxy_cache_path /tmp/nginx-cache-auth levels=1:2 keys_zone=auth_cache:10m max_size=128m inactive=30m use_temp_path=off;
# Global filters
## start server _
server {
server_name _ ;
listen 80 proxy_protocol default_server reuseport backlog=4096 ;
listen [::]:80 proxy_protocol default_server reuseport backlog=4096 ;
listen 443 proxy_protocol default_server reuseport backlog=4096 ssl ;
listen [::]:443 proxy_protocol default_server reuseport backlog=4096 ssl ;
set $proxy_upstream_name "-";
ssl_certificate_by_lua_block {
certificate.call()
}
location / {
set $namespace "";
set $ingress_name "";
set $service_name "";
set $service_port "";
set $location_path "";
set $global_rate_limit_exceeding n;
rewrite_by_lua_block {
lua_ingress.rewrite({
force_ssl_redirect = false,
ssl_redirect = false,
force_no_ssl_redirect = false,
preserve_trailing_slash = false,
use_port_in_redirects = false,
global_throttle = { namespace = "", limit = 0, window_size = 0, key = { }, ignored_cidrs = { } },
})
balancer.rewrite()
plugins.run()
}
# be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any
# will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)`
# other authentication method such as basic auth or external auth useless - all requests will be allowed.
#access_by_lua_block {
#}
header_filter_by_lua_block {
lua_ingress.header()
plugins.run()
}
body_filter_by_lua_block {
plugins.run()
}
log_by_lua_block {
balancer.log()
monitor.call()
plugins.run()
}
access_log off;
port_in_redirect off;
set $balancer_ewma_score -1;
set $proxy_upstream_name "upstream-default-backend";
set $proxy_host $proxy_upstream_name;
set $pass_access_scheme $scheme;
set $pass_server_port $proxy_protocol_server_port;
set $best_http_host $http_host;
set $pass_port $pass_server_port;
set $proxy_alternative_upstream_name "";
client_max_body_size 10m;
proxy_set_header Host $best_http_host;
# Pass the extracted client certificate to the backend
# Allow websocket connections
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header X-Request-ID $req_id;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Host $best_http_host;
proxy_set_header X-Forwarded-Port $pass_port;
proxy_set_header X-Forwarded-Proto $pass_access_scheme;
proxy_set_header X-Forwarded-Scheme $pass_access_scheme;
proxy_set_header X-Scheme $pass_access_scheme;
# Pass the original X-Forwarded-For
proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
# mitigate HTTPoxy Vulnerability
# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
proxy_set_header Proxy "";
# Custom headers to proxied server
proxy_connect_timeout 5s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
proxy_buffering off;
proxy_buffer_size 4k;
proxy_buffers 4 4k;
proxy_max_temp_file_size 1024m;
proxy_request_buffering on;
proxy_http_version 1.1;
proxy_cookie_domain off;
proxy_cookie_path off;
# In case of errors try the next upstream server before returning an error
proxy_next_upstream error timeout;
proxy_next_upstream_timeout 0;
proxy_next_upstream_tries 3;
proxy_pass http://upstream_balancer;
proxy_redirect off;
}
# health checks in cloud providers require the use of port 80
location /healthz {
access_log off;
return 200;
}
# this is required to avoid error if nginx is being monitored
# with an external software (like sysdig)
location /nginx_status {
allow 127.0.0.1;
allow ::1;
deny all;
access_log off;
stub_status on;
}
}
## end server _
# backend for when default-backend-service is not configured or it does not have endpoints
server {
listen 8181 default_server reuseport backlog=4096;
listen [::]:8181 default_server reuseport backlog=4096;
set $proxy_upstream_name "internal";
access_log off;
location / {
return 404;
}
}
# default server, used for NGINX healthcheck and access to nginx stats
server {
listen 127.0.0.1:10246;
set $proxy_upstream_name "internal";
keepalive_timeout 0;
gzip off;
access_log off;
location /healthz {
return 200;
}
location /is-dynamic-lb-initialized {
content_by_lua_block {
local configuration = require("configuration")
local backend_data = configuration.get_backends_data()
if not backend_data then
ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
return
end
ngx.say("OK")
ngx.exit(ngx.HTTP_OK)
}
}
location /nginx_status {
stub_status on;
}
location /configuration {
client_max_body_size 21M;
client_body_buffer_size 21M;
proxy_buffering off;
content_by_lua_block {
configuration.call()
}
}
location / {
content_by_lua_block {
ngx.exit(ngx.HTTP_NOT_FOUND)
}
}
}
}
stream {
lua_package_path "/etc/nginx/lua/?.lua;/etc/nginx/lua/vendor/?.lua;;";
lua_shared_dict tcp_udp_configuration_data 5M;
init_by_lua_block {
collectgarbage("collect")
-- init modules
local ok, res
ok, res = pcall(require, "configuration")
if not ok then
error("require failed: " .. tostring(res))
else
configuration = res
end
ok, res = pcall(require, "tcp_udp_configuration")
if not ok then
error("require failed: " .. tostring(res))
else
tcp_udp_configuration = res
tcp_udp_configuration.prohibited_localhost_port = '10246'
end
ok, res = pcall(require, "tcp_udp_balancer")
if not ok then
error("require failed: " .. tostring(res))
else
tcp_udp_balancer = res
end
}
init_worker_by_lua_block {
tcp_udp_balancer.init_worker()
}
lua_add_variable $proxy_upstream_name;
log_format log_stream '[$remote_addr] [$time_local] $protocol $status $bytes_sent $bytes_received $session_time';
access_log /var/log/nginx/access.log log_stream ;
error_log /var/log/nginx/error.log notice;
upstream upstream_balancer {
server 0.0.0.1:1234; # placeholder
balancer_by_lua_block {
tcp_udp_balancer.balance()
}
}
server {
listen 127.0.0.1:10247;
access_log off;
content_by_lua_block {
tcp_udp_configuration.call()
}
}
}
/kind bug
@tanuj-soni2: This issue is currently awaiting triage.
If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Can anyone please look into this issue, or do we have any kind of update on it?
What algorithm did you run for the Lua balancer? round_robin or ewma?
/remove-kind bug
Sir, can you help: is the Lua balancer working with EWMA or round_robin?
@rthamrin I don't really understand why it couldn't work with either of the two. But we use the default value, which is round-robin: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#load-balance
Why did you say both of them did not work?
@rthamrin there might have been a misunderstanding here. We do not use Lua for anything other than the WAF script. Regarding the load-balancer type, we use the default, which is round-robin. Hope that answers your question.
Thank you, brother; it is such a short answer but it means a lot to me. Let me try to elaborate so I can make myself understood here. This is why I assumed the ingress controller runs Lua / ewma.
First, a lot of links refer to ewma as the load-balancer method to implement (link).
Second, when we exec into our ingress-controller pods, it shows:
> http {
> lua_package_path "/etc/nginx/lua/?.lua;;";
>
> lua_shared_dict balancer_ewma 10M;
> lua_shared_dict balancer_ewma_last_touched_at 10M;
> lua_shared_dict balancer_ewma_locks 1M;
> lua_shared_dict certificate_data 20M;
> lua_shared_dict certificate_servers 5M;
> lua_shared_dict configuration_data 20M;
> lua_shared_dict global_throttle_cache 10M;
> lua_shared_dict ocsp_response_cache 5M;
>
> init_by_lua_block {
> collectgarbage("collect")
>
> -- init modules
> local ok, res
>
> ok, res = pcall(require, "lua_ingress")
> if not ok then
> error("require failed: " .. tostring(res))
> else
> lua_ingress = res
> lua_ingress.set_config({
Third, so basically I do not need to add the annotation nginx.ingress.kubernetes.io/load-balance: round_robin to my Ingress resource, because it already uses round_robin from the beginning:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: nginx-ingress
namespace: ingress
annotations:
nginx.ingress.kubernetes.io/load-balance: round_robin
spec:
ingressClassName: nginx
rules:
- host: service.com
Fourth, if I want to use ewma I just add nginx.ingress.kubernetes.io/load-balance: ewma (see the sketch below).
Sorry for the silly question; I am kinda new to this thing and just want to do my project and understand the concept.
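To illustrate that fourth point, here is a minimal sketch of the same hypothetical Ingress with the ewma annotation. The http/paths/backend section and the example-service name are made-up placeholders; the only relevant change is the annotation value:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: ingress
  annotations:
    # switch this Ingress from the default round_robin to ewma
    nginx.ingress.kubernetes.io/load-balance: ewma
spec:
  ingressClassName: nginx
  rules:
    - host: service.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service   # hypothetical backend service
                port:
                  number: 80
```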
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
Can we have any workaround or any kind of update on this?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
> This bot triages issues according to the following rules:
> - After 90d of inactivity, lifecycle/stale is applied
> - After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
> - After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
> You can:
> - Reopen this issue with /reopen
> - Mark this issue as fresh with /remove-lifecycle rotten
> - Offer to help out with Issue Triage
> Please send feedback to sig-contributor-experience at kubernetes/community.
> /close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.