ingress-nginx
lua udp socket read timed out, context: ngx.timer on 1.7.1
I am using nginx ingress controller 1.7.1. Below is the YAML I am using to create the Ingress resources.
---
kind: Service
apiVersion: v1
metadata:
  name: staging-static-assets
  namespace: staging
spec:
  type: ExternalName
  externalName: "x-cdn.s3-website.amazonaws.com"
---
kind: Service
apiVersion: v1
metadata:
  name: staging-maintenance-static-assets
  namespace: staging
spec:
  type: ExternalName
  externalName: "y-cdn.s3-website.amazonaws.com"
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: drive-ingress
  namespace: staging
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "120"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "120"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "120"
    nginx.ingress.kubernetes.io/proxy-body-size: 20m
spec:
  rules:
    - host: "*.staging.drivetrain.ai"
      http: &http_rules
        paths:
          - path: /drive
            pathType: Prefix
            backend:
              service:
                name: drive-svc
                port:
                  number: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cdn-ingress
  namespace: staging
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/upstream-vhost: x-cdn.s3-website.amazonaws.com
spec:
  rules:
    - host: "*.staging.drivetrain.ai"
      http: &asset_http_rules
        paths:
          - path: /login/callback
            pathType: Prefix
            backend:
              service:
                name: staging-static-assets
                port:
                  number: 80
          - path: /
            pathType: Prefix
            backend:
              service:
                name: staging-static-assets
                port:
                  number: 80
    - host: "2.staging.drivetrain.ai"
      http: *asset_http_rules
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: maintenance-cdn-ingress
  namespace: staging
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/upstream-vhost: y-cdn.s3-website.amazonaws.com
spec:
  rules:
    - host: "*.staging.drivetrain.ai"
      http:
        paths:
          - path: /maintenance
            pathType: Prefix
            backend:
              service:
                name: staging-maintenance-static-assets
                port:
                  number: 80
I am getting this lua udp timeout error every minute, and there is no detail about why it is happening. I need help to solve this issue.
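Since the error is logged from an ngx.timer context rather than from a request, it seems to come from the controller's periodic Lua DNS re-resolution of the ExternalName services (an assumption, based on the later comments in this thread). A rough way to check whether the timeouts line up with resolution of those names (sketch only; the namespace and pod name are placeholders, and nslookup is assumed to be present in the controller image):

# list the controller pods, then exec into one of them
kubectl -n ingress-nginx get pods
kubectl -n ingress-nginx exec ingress-nginx-controller-xxxxx -- \
  nslookup x-cdn.s3-website.amazonaws.com
# see which nameserver and search domains the controller pod actually uses
kubectl -n ingress-nginx exec ingress-nginx-controller-xxxxx -- \
  cat /etc/resolv.conf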
This issue is currently awaiting triage.
If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
/remove-kind bug
Please answer the questions asked in the new-issue template, because the information posted is not enough to understand or reproduce the problem.
This is stale, but we won't close it automatically. Just bear in mind that the maintainers may be busy with other tasks and will get to your issue as soon as possible. If you have any questions, or want to ask for this to be prioritized, please reach out in #ingress-nginx-dev on Kubernetes Slack.
Any update on this?
We seem to be running into the same issue, but on the newer v1.8.1 code. We have 4 pods in this DaemonSet; 2 aren't busy and usually just show errors:
2024/02/03 18:56:54 [error] 28#28: *130137 lua udp socket read timed out, context: ngx.timer
2024/02/03 18:57:06 [error] 28#28: *130261 lua udp socket read timed out, context: ngx.timer
If you check the CoreDNS (or whatever DNS you use) logs, what do they say about the resolution of the external name? In my case, it kept appending the search domains from the pod's resolv.conf to the actual external name, so the query became externalname.com.<search_domain>. It didn't affect traffic, because the name was eventually resolved once the DNS server had gone through all possible combinations.
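To make the search-domain expansion concrete, here is an illustrative resolv.conf (the nameserver IP and search list are made up) and the sequence of lookups a resolver honouring ndots:5 would issue for one of the external names; in the DNS logs this shows up as a burst of NXDOMAIN queries before the bare name finally resolves:

nameserver 10.96.0.10
search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local
options ndots:5

# x-cdn.s3-website.amazonaws.com has fewer dots than ndots, so the search
# suffixes are tried first:
#   x-cdn.s3-website.amazonaws.com.ingress-nginx.svc.cluster.local  -> NXDOMAIN
#   x-cdn.s3-website.amazonaws.com.svc.cluster.local                -> NXDOMAIN
#   x-cdn.s3-website.amazonaws.com.cluster.local                    -> NXDOMAIN
#   x-cdn.s3-website.amazonaws.com                                  -> answer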
In the end, I managed to resolve it by specifying the DNS options in the nginx deployment manifest:
template:
  spec:
    dnsPolicy: None
    dnsConfig:
      nameservers:
        - your_name_server
      searches:
        - svc.cluster.local
        - cluster.local
        - ingress-nginx.svc.cluster.local
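If the controller is installed with the official Helm chart, the same settings can most likely be applied through values rather than patching the Deployment by hand; the snippet below assumes the chart exposes dnsPolicy/dnsConfig under the controller key, and the nameserver IP is a placeholder for your cluster DNS service:

controller:
  dnsPolicy: None
  dnsConfig:
    nameservers:
      - 10.96.0.10
    searches:
      - svc.cluster.local
      - cluster.local
      - ingress-nginx.svc.cluster.local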
Hi there, we're using an Azure Kubernetes Service cluster with an NGINX ingress controller. NGINX is configured to serve both an internal service (pod) and an external service (Azure App Service).
We're experiencing the same issue, with no clue about a resolution. The ingress logs and the CoreDNS logs do not show anything relevant.
The issue goes away if we simply remove the ExternalName Service. Of course that is not acceptable.
We're going crazy here and need help on where to look for more details.
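One way to get more detail is to temporarily turn on CoreDNS query logging, so you can see whether and how the ExternalName host is being resolved when the timer errors fire. A minimal sketch, assuming a stock Corefile in the coredns ConfigMap in kube-system (on AKS the Corefile is managed, so the equivalent is adding the log plugin through the coredns-custom ConfigMap); the only change from the defaults is the log line:

.:53 {
    errors
    log    # temporary: logs every query, remove once you have the data
    health
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
}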