Build macOS `oc` binary with CGO_ENABLED=1 so that it honours /etc/resolver?
Version
$ oc version
oc v4.0.0-0.171.0
kubernetes v1.12.4+a532756e37
Steps To Reproduce
- Create a cluster locally whose DNS names are not publicly resolvable (you can use crc for it).
- Don't append the DNS details to /etc/resolv.conf, but use /etc/resolver/<whatever_dns> instead; in our case we use /etc/resolver/testing.
- Try to open the web console in the browser: it works, because the browser honours the resolver file (see the quick check after these steps).
- Try to use the oc CLI to log in to the cluster: it doesn't work until you also add that DNS server to /etc/resolv.conf.
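As a quick check that macOS itself picks up the per-domain resolver file (independent of oc), the commands below query the system resolver directly; they assume the /etc/resolver/testing file and the api.crc.testing hostname used later in this issue.
# Show the resolver configuration macOS actually uses; the file from
# /etc/resolver/testing should appear as a scoped resolver for "testing".
$ scutil --dns
# Resolve through the system resolver (which honours /etc/resolver),
# not through Go's built-in resolver.
$ dscacheutil -q host -a name api.crc.testing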
Current Result
The user needs to add the DNS details to /etc/resolv.conf, whereas most other apps on the Mac honour the resolver file and work without it.
Expected Result
The OpenShift client binary should also honour the files in the resolver directory.
Additional Information
https://github.com/golang/go/issues/12524, especially https://github.com/golang/go/issues/12524#issuecomment-499525771. Enabling CGO would seem to allow DNS lookups to happen using /etc/resolver.
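A minimal sketch of such a build, run on the Mac itself and assuming a checkout of the openshift/oc source tree (the exact package path and make targets may differ per release):
# Assumed build invocation; the project's own make targets add ldflags for
# version information, but the relevant part is CGO_ENABLED=1 so that the
# cgo (system) resolver is compiled in.
$ CGO_ENABLED=1 go build -o ~/go/bin/oc ./cmd/oc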
After building oc with CGO_ENABLED=1, I can confirm it will use the /etc/resolver/ content for DNS resolution:
$ cat /etc/resolv.conf
search cdg.redhat.com redhat.com win.redhat.com
nameserver 10.x...
nameserver 10.y...
$ cat /etc/resolver/testing
port 53
nameserver 192.168.64.8
$ oc config view
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: DATA+OMITTED
server: https://api.crc.testing:6443
name: crc
$ ~/go/bin/oc get nodes
NAME STATUS ROLES AGE VERSION
crc-fsjcf-master-0 Ready master,worker 11d v1.13.4+9252851b0
The binary from https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/macosx/oc.tar.gz
fails to resolve the DNS name:
$ oc get nodes
Unable to connect to the server: dial tcp: lookup api.crc.testing on 10.38.5.26:53: no such host
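As an aside, Go's netdns GODEBUG setting can show which resolver a given binary picked, which makes it easy to compare the mirror build with a locally built CGO_ENABLED=1 binary:
# Prints "go package net: ..." lines on stderr describing whether the pure-Go
# or the cgo (system) resolver is used for the lookups.
$ GODEBUG=netdns=2 oc get nodes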
https://github.com/kubernetes/kubernetes/issues/23130 is the upstream issue on the Kubernetes side.
This seems to impact odo as well - does that need to be raised as a separate issue in their tracker?
Note this affects openshift-installer and other binaries as well.
@derekwaynecarr how should we handle this? So far we haven't gotten anywhere with this, and it affects us on macOS. The workaround we have with /etc/hosts is a rat's nest, as it is not guaranteed to work due to permissions being modified outside of our control.
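For context, a sketch of the kind of /etc/hosts workaround being referred to, assuming the CRC VM answers on 192.168.64.8 (the nameserver address shown in /etc/resolver/testing above); real clusters need more hostnames than this:
# Hypothetical workaround entry pinning the API hostname to the CRC VM address;
# it needs root, and the file can be modified outside our control, hence "rat's nest".
$ echo "192.168.64.8 api.crc.testing" | sudo tee -a /etc/hosts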
Does kubectl have this same problem too?
I just tried it, and yes, it behaves the same: it needs an entry in /etc/hosts and does not make use of /etc/resolver/.
@cfergeau Should this be raised upstream? Seems like odo should follow what kubectl does.
Curious what @soltysh thinks
There is no impact to the software delivered, but rather to the build process, as Apple-branded hardware needs to be involved; this can be solved by using CircleCI.
Curious what @soltysh thinks
If we want that fixed, this has to be fixed upstream first, imo.
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale
/remove-lifecycle stale
/lifecycle frozen
This issue is solved in https://github.com/openshift/oc/issues/315
/close
@praveenkumar: Closing this issue.
In response to this:
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.