403 UAEX ext_authz_error
When accessing localhost:8080 after port-forwarding with `kubectl port-forward svc/istio-ingressgateway -n istio-system 8080:80`, I get:

```
"GET / HTTP/1.1" 403 UAEX ext_authz_error - "-" 0 0 0 - ... *some browser info* ... error envoy envoy_bug envoy bug failure: !state_.local_complete_ || status == FilterHeadersStatus::StopIteration. Details: Filters should return FilterHeadersStatus::StopIteration after sending a local reply.
```
All pods are running.
Installed following this tutorial
Kubeflow 1.4, Kubernetes 1.21.9
I have the same problem: HTTP ERROR 403. `kubectl logs pod/authservice-0 -n istio-system` shows:

```
time="2022-03-08T04:47:29Z" level=error msg="Failed to exchange authorization code with token: context canceled" ip=10.1.12.75 request="/login/oidc?code=vda6a7hh3boctggjljz47cdxp&state=MTY0NjcxNDgzMHxFd3dBRUVvM04yRllOM1JNUmtvNE5IRlpWVFk9fOsgmd3fWWYHG84r-BYlUA8jfLDx-S96ulgtziq8l715"
```
I'm also running into the same issue. @romanzdk @markqiu, have you figured this out? I followed the same installation mechanism as you: https://github.com/kubeflow/manifests#installation
I haven't.
Hi @romanzdk, thanks for replying quickly. I was able to figure out and fix the issue in my case. For me, the authservice-0 pod in the istio-system namespace was in the Pending state, which was a hint that something was not right.
I dug a little further and discovered that many of my pods did not have access to any Persistent Volume storage. Just describe any pods stuck in Pending and you'll see similar messages indicating problems with persistent storage.
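A minimal sketch of that check, assuming the default pod and namespace names from this thread (the event text in the comment is the typical scheduler message, not verbatim output from my cluster):

```shell
# List pods stuck in Pending across all namespaces.
kubectl get pods -A --field-selector=status.phase=Pending

# Describe the authservice pod; in the Events section look for messages like
# "pod has unbound immediate PersistentVolumeClaims".
kubectl -n istio-system describe pod authservice-0

# Check whether its PVC is Bound or still Pending.
kubectl -n istio-system get pvc
```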
I reinstalled the local-path provisioner (v0.0.19) in my cluster and everything works like a charm for me.
Local Path Storage that I am using: https://github.com/rancher/local-path-provisioner/releases/tag/v0.0.19
```shell
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.19/deploy/local-path-storage.yaml
kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```
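After patching, it is worth confirming that local-path is now the default StorageClass and that the previously stuck claims bind. A sketch (the "(default)" marker is standard `kubectl get storageclass` output):

```shell
# The default class is shown with "(default)" next to its name.
kubectl get storageclass

# Verify the authservice claim (and any others) move from Pending to Bound.
kubectl get pvc -A
```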
I also saw `"GET / HTTP/1.1" 403 UAEX ext_authz_error` in my logs. In my case `po/authservice-0` looked healthy, but further inspection showed it hadn't logged anything in hours. Restarting `po/authservice-0` fixed the issue here:

```shell
kubectl -n istio-system delete po -l app=authservice ; watch kubectl -n istio-system get po -l app=authservice
```
We got this unhelpful error message too. We were able to find the actual cause by increasing the istio ingress-gateway log level to trace (using istioctl). The istio-ingressgateway logs then showed that the mTLS certificate between the ingress-gateway and the ext_authz service had expired. (The reason for the expiry is another story and irrelevant to the ext_authz_error.)
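A sketch of how to raise the gateway's Envoy log level with `istioctl proxy-config log` (the label selector assumes the default `app=istio-ingressgateway` labelling from a standard Istio install; adjust if yours differs):

```shell
# Find the ingress-gateway pod name.
GW_POD=$(kubectl -n istio-system get pod -l app=istio-ingressgateway \
  -o jsonpath='{.items[0].metadata.name}')

# Raise all Envoy loggers on that pod to trace...
istioctl proxy-config log "$GW_POD.istio-system" --level trace

# ...or, more narrowly, only the ext_authz filter's logger:
# istioctl proxy-config log "$GW_POD.istio-system" --level ext_authz:trace

# Then tail the gateway logs for the underlying failure (e.g. an expired cert).
kubectl -n istio-system logs "$GW_POD" -f
```

Remember to set the level back afterwards, since trace logging is very verbose.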
Facing the same issue on Kubeflow v1.6.0, Kubernetes v1.22. I tried deleting pod/authservice-0; it worked after the pod was redeployed, but I couldn't get a namespace. Then after a while I couldn't log in again. Is there any other solution?
/close Please Upgrade to a vanilla Kubeflow 1.8
@juliusvonkohout: Closing this issue.
In response to this:
/close Please Upgrade to a vanilla Kubeflow 1.8