[BUG] Random behavior when logging into the cluster
Running the tests we can see this behavior across all platforms; it is pretty random and cannot be reproduced consistently.

When trying to log in, the command reports that the server certificate is self-signed, so we need to add extra params to be able to log in. However, if we retry the same command after some time, the login succeeds.

This does not always happen: most of the time the command from `crc console --credentials` is able to log in directly.
```
PS C:\Users\crcqe> oc login -u kubeadmin -p XXXXXX https://api.crc.testing:6443
The server uses a certificate signed by an unknown authority.
You can bypass the certificate check, but any data you send to the server could be intercepted by others.
Use insecure connections? (y/n): n

error: The server uses a certificate signed by unknown authority. You may need to use the --certificate-authority flag to provide the path to a certificate file for the certificate authority, or --insecure-skip-tls-verify to bypass the certificate check and use insecure connections.

PS C:\Users\crcqe> oc login -u kubeadmin -p XXXXXX https://api.crc.testing:6443
Login successful.

You have access to 65 projects, the list has been suppressed. You can list all projects with 'oc projects'

Using project "default".
```
Actually I'd expect the `oc login` connections to always complain about certificates signed by unknown authorities; I'm not sure why sometimes it's not complaining. You can dump the certificate chain with `openssl s_client` if you want to see whether there are certificate changes between the working and non-working cases.
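A minimal sketch of that comparison (host and port are the ones from the report; the snapshot filename is an assumption), to be run once in a working and once in a non-working state so the two chains can be diffed:

```shell
# Sketch: snapshot the certificate chain presented by the API server.
host=api.crc.testing
port=6443
snap="chain-$(date +%s).pem"
# -showcerts dumps every certificate in the chain, not just the leaf;
# the connection may fail when the cluster is unreachable, hence the guard.
if echo | openssl s_client -showcerts -connect "$host:$port" >"$snap" 2>/dev/null; then
  # quick visual fingerprint of the leaf certificate for comparison
  openssl x509 -in "$snap" -noout -subject -issuer -fingerprint
else
  echo "cluster not reachable; run this from a machine that can resolve $host"
fi
```

Diffing two such snapshots (`diff chain-A.pem chain-B.pem`) would show immediately whether the server is presenting different certificates in the two cases.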
> Actually I'd expect the `oc login` connections to always complain about certificates signed by unknown authorities
Well, that is not the case. In e2e we use the output from `crc console --credentials` to log in; that output does not contain the flag for unknown authorities, and it works most of the time without any issue.

It is hard for me to check the info with openssl because the issue is not reproducible reliably, it is pretty random, and recently it has been seen more often during e2e.
After spending some time on it, this is what happens when we start the OCP cluster using `crc start`:

- We create the self-signed crt/key.
- We update the kubeconfig with it in the user auth field, and then make the same change on the OpenShift cluster by updating the `openshift-config` namespace's configmap `admin-kubeconfig-client-ca`.
- We don't change the root CA, and it should be the same for each cluster.
- Now if `Adding crc-admin and crc-developer contexts to kubeconfig...` succeeds, that means in the kubeconfig we are updating the root CA as part of the cluster's `certificate-authority-data`, and a token is created against the crc-admin/crc-developer context.
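The two update targets above can be inspected side by side; a hedged sketch, where the kubeconfig user entry name and the configmap key `ca-bundle.crt` are assumptions (check `oc config view` for the exact names in your kubeconfig):

```shell
# Sketch: cross-check the two places crc puts the certificate material.
user="kubeadmin/api-crc-testing:6443"   # assumed user entry name
cm=admin-kubeconfig-client-ca           # configmap updated by crc start
if command -v oc >/dev/null 2>&1; then
  # client certificate recorded in the kubeconfig user auth field
  oc config view --raw \
    -o "jsonpath={.users[?(@.name=='$user')].user.client-certificate-data}" \
    | base64 -d | openssl x509 -noout -subject -issuer
  # client CA pushed into the cluster (key name is an assumption)
  oc get configmap "$cm" -n openshift-config \
    -o "jsonpath={.data.ca-bundle\.crt}" | openssl x509 -noout -subject
else
  echo "oc not found in PATH; run this where the kubeconfig lives"
fi
```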
Since we have `certificate-authority-data` in the kubeconfig file, the issue only occurs:

- if by any chance the `KUBECONFIG` env var is changed after the cluster is started to point at a file which doesn't have the cluster details
```
$ touch /tmp/config
$ export KUBECONFIG=/tmp/config
$ oc login -u kubeadmin -p gcow5-d5mEo-9GxhD-CvKBM https://api.crc.testing:6443
The server uses a certificate signed by an unknown authority.
You can bypass the certificate check, but any data you send to the server could be intercepted by others.
Use insecure connections? (y/n):
```
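A sketch of two ways to recover in that situation, assuming crc populated the default `~/.kube/config` (the password is the one from the report; substitute your own), with the `--certificate-authority` flag being the one the error message itself suggests:

```shell
# Sketch: recovering from a KUBECONFIG that lacks the cluster details.
apiserver=https://api.crc.testing:6443
if command -v oc >/dev/null 2>&1; then
  # option 1: point KUBECONFIG back at the file crc populated
  export KUBECONFIG="$HOME/.kube/config"
  oc login -u kubeadmin -p gcow5-d5mEo-9GxhD-CvKBM "$apiserver"

  # option 2: keep the bare kubeconfig and pass the CA bundle explicitly
  # (ca.crt extracted from a good kubeconfig with `oc config view --raw`)
  oc login --certificate-authority=ca.crt -u kubeadmin \
    -p gcow5-d5mEo-9GxhD-CvKBM "$apiserver"
else
  echo "oc not found in PATH"
fi
```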
Or in case the `certificate-authority-data` mismatches the server cert (here they match, so verification succeeds):
```
$ oc config view --raw -o jsonpath="{.clusters[?(@.name=='api-crc-testing:6443')].cluster.certificate-authority-data}" | base64 -d - > ca.crt
$ openssl s_client -connect api.crc.testing:6443 | openssl x509 -out server.crt
$ openssl verify -CAfile ca.crt server.crt
server.crt: OK
```
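To see what the matching and mismatching cases print without needing a cluster, the same verify flow can be replayed locally with a throwaway CA (all names here are illustrative):

```shell
# Local illustration of the verify step in the working vs. broken case.
workdir=$(mktemp -d)
cd "$workdir"
# create a throwaway CA and a server certificate signed by it
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -subj "/CN=demo-ca" -days 1 2>/dev/null
openssl req -newkey rsa:2048 -nodes -keyout server.key -out server.csr \
  -subj "/CN=api.crc.testing" 2>/dev/null
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out server.crt -days 1 2>/dev/null
# matching CA: verification succeeds
openssl verify -CAfile ca.crt server.crt          # server.crt: OK
# mismatched CA (an unrelated self-signed cert): verification fails
openssl req -x509 -newkey rsa:2048 -nodes -keyout other.key \
  -out other-ca.crt -subj "/CN=other-ca" -days 1 2>/dev/null
openssl verify -CAfile other-ca.crt server.crt || true
```

The failing run reports an "unable to get local issuer certificate"-style error, which is the local analogue of the mismatch scenario described above.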
> Since we have `certificate-authority-data` in kubeconfig file
Oh interesting, I did not expect the kubeconfig context data to be used at all when `oc login` is used. It seems the CA data is picked up from there regardless?
@cfergeau I didn't find that info in the man page or in the help section, but https://github.com/openshift/oc/blob/master/pkg/cli/login/loginoptions.go#L236-L270 suggests that the kubeconfig context data is used as part of `oc login`.