Issue with portforward on Mac
A user at KubeCon just reported to me that the port forward feature has never worked for him on macOS (he's upgraded to the latest version and still has problems). Headlamp was installed via Homebrew.
Just tested this on an M1 Max running macOS Sonoma, and I can confirm that port forwarding doesn't appear to work. Looking at the dev tools, the POST request to http://localhost:4466/portforward returns a 500 with the message "key not found".
Payload of the call looks like this:
{
  "cluster": "test-cluster",
  "namespace": "default",
  "pod": "api-6d8d794679-tzm6s",
  "service": "",
  "targetPort": "80",
  "serviceNamespace": "default",
  "id": "",
  "address": "localhost"
}
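If it helps anyone reproduce this outside the UI, the same request can be sent straight to the local backend. This is just a sketch based on the dev tools trace above (the port and payload values are copied from it; adjust them for your own cluster and pod):

```typescript
// Sketch: reproduce the failing /portforward call outside the Headlamp UI.
// Port 4466 and the payload fields come from the dev tools trace above.
const payload = {
  cluster: 'test-cluster',
  namespace: 'default',
  pod: 'api-6d8d794679-tzm6s',
  service: '',
  targetPort: '80',
  serviceNamespace: 'default',
  id: '',
  address: 'localhost',
};

fetch('http://localhost:4466/portforward', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(payload),
}).then(async res => {
  // On affected setups this logs a 500 with "key not found"
  // (or "Unauthorized" in the later reports).
  console.log(res.status, await res.text());
});
```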
Hi @andyjhall, thank you for reporting this. Can you please post the steps to reproduce it? I tried a brew install --cask headlamp and ran Headlamp on my M1, but I am not able to reproduce the above.
Originally I had installed manually, but even using brew the same issue persists. Are there any other debug logs which would help?
Umm @andyjhall, what does it show in the UI when you press the port forward button?
Same issue on my M3 Pro running macOS Sonoma. The request never gets a response.
No idea if this was fixed as part of v0.24.0, but after updating it is now working for me.
Hi @andyjhall thanks for confirming it's fixed.
There was a separate MacOS related fix... that may have accidentally fixed this.
@torshind does the latest release also fix it for you?
Nothing changed for me
Still broken, any updates?
This issue is happening because the frontend is not sending the bearer token to the backend when doing a port forward.
There does seem to be some kind of auth error...
08:06:10.220 › server process stderr: {"level":"error","source":"/Users/runner/work/headlamp/headlamp/backend/pkg/portforward/handler.go","line":226,"error":"error upgrading connection: Unauthorized","time":"2024-06-26T08:06:10-07:00","message":"forwarding ports"}
I just noticed that port forwarding started working again. Not sure what is happening, but it seems to be a transient bug.
I'm noticing the same issue on Linux (Ubuntu 24.04) when using the port forwarding button.
server process stderr: {"level":"error","source":"/home/runner/work/headlamp/headlamp/backend/pkg/portforward/handler.go","line":226,"error":"error upgrading connection: Unauthorized","time":"2024-07-17T10:09:44+02:00","message":"forwarding ports"}
Hello, port forwarding is not working on Windows either, on 0.24.1.
First error:
[2024-07-17 10:41:30.557] [error] server process stderr: {"level":"error","cluster":"****","source":"D:/a/headlamp/headlamp/headlamp/backend/pkg/portforward/handler.go","line":145,"error":"key not found","time":"2024-07-17T10:41:30+02:00","message":"getting kubeconfig context"}
Then this one:
[2024-07-17 10:43:11.147] [error] server process stderr: {"level":"info","context":"","clusterURL":"https://**.hcp.francecentral.azmk8s.io:443","source":"D:/a/headlamp/headlamp/headlamp/backend/pkg/kubeconfig/kubeconfig.go","line":172,"time":"2024-07-17T10:43:11+02:00","message":"Proxy setup"}
I finally discovered that the issue was not related to the OS but to the target cluster.
When authentication is done without a token (AWS SSO in my case), the request made to /portforward sets an Authorization: Bearer undefined header instead of omitting it.
I created a PR to fix this: it checks whether a token is provided and only sets the Authorization header when it is:
https://github.com/headlamp-k8s/headlamp/pull/2172
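The gist of the change, roughly (a simplified sketch of the approach described above, not the exact code in the PR; the function name is made up for illustration):

```typescript
// Simplified sketch of the approach in the PR: only add the Authorization
// header when a token actually exists, so token-less auth (e.g. AWS SSO)
// no longer sends "Bearer undefined". Names here are illustrative.
function buildPortForwardHeaders(token?: string | null): Record<string, string> {
  const headers: Record<string, string> = {
    'Content-Type': 'application/json',
  };
  if (token) {
    headers['Authorization'] = `Bearer ${token}`;
  }
  return headers;
}

// Usage: pass whatever token the cluster auth provides (possibly undefined).
// fetch('http://localhost:4466/portforward', {
//   method: 'POST',
//   headers: buildPortForwardHeaders(clusterToken),
//   body: JSON.stringify(payload),
// });
```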
I wonder if this is fixed for other folks now?
We have a new 0.25.0 Headlamp release out with the fix from @LudovicTOURMAN.
Let's close it since it looks like Ludovic fixed the issue for good. Great stuff!
Apparently this may still be happening, and we have a new PR for it. Reopening so the new PR tracks it.