Websocket Error: "Invalid HTTP upgrade", code: 404
Describe the bug
We have deployed an ALB for our Spring Boot application, which consists of both REST and WebSocket services, hosted in an EKS cluster. We have added health check, SSL redirect, and similar annotations in the ingress.yaml file. The listener is HTTPS: 443 and the SSL certificate is from ACM.
When we hit the ALB endpoint, the REST service works fine, but we receive the following error from the WebSocket service:
Websocket Optional(wss://xxx.xx.xx.com:443/xx/xx/xx/xx) disconnected with error Optional(Starscream.WSError(type: Starscream.ErrorType.upgradeError, message: "Invalid HTTP upgrade", code: 404))
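For reference, a minimal Ingress along the lines described might look like the sketch below; the hostname, service name, port, and certificate ARN are placeholders, not the reporter's actual values.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: springboot-app                # hypothetical name
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    # HTTPS:443 listener with redirect from HTTP, as described above
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:111122223333:certificate/EXAMPLE
    alb.ingress.kubernetes.io/healthcheck-path: /healthz
spec:
  rules:
    - host: app.example.com           # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: springboot-svc  # placeholder service
                port:
                  number: 8080
```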
- app.kubernetes.io/instance: aws-load-balancer-controller
- app.kubernetes.io/name: aws-load-balancer-controller
- app.kubernetes.io/version: v2.3.0
- helm.sh/chart: aws-load-balancer-controller-1.3.2
- Kubernetes version: 1.19
- Platform version: eks.7
@rampn443, could you check whether the comments from a prior issue (https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/1090#issuecomment-561842212) help? If not, would you be able to share the ingress spec?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
@rampn443, closing the issue since we did not hear back from you. For WebSocket, you need to ensure:
- your ingress has a rule for the path of the initial upgrade request to the websocket
- you configure a larger idle timeout (the default is 60 seconds), or set up a heartbeat at the application layer (see the sketch after this list)
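As a hedged illustration of those two points, an Ingress along these lines raises the ALB idle timeout and routes the upgrade path; the name, /ws path, service, and port are assumptions for illustration, not taken from the reporter's spec.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: websocket-app                 # hypothetical name
  annotations:
    kubernetes.io/ingress.class: alb
    # Raise the ALB idle timeout (default 60s) so quiet WebSocket
    # connections are not dropped between application messages.
    alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=3600
spec:
  rules:
    - http:
        paths:
          # Rule covering the path of the initial HTTP upgrade request,
          # so the handshake reaches the WebSocket backend instead of 404ing.
          - path: /ws                 # hypothetical WebSocket path
            pathType: Prefix
            backend:
              service:
                name: websocket-svc   # hypothetical service
                port:
                  number: 8080
```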