Services inaccessible on port 32131 when using userspace networking
General information
I've got a deployment and service that work fine with CRC/OpenShift Local on Ubuntu 24.04, but the same setup on macOS results in an inaccessible service. I noticed that CRC modified my /etc/hosts file to point api.crc.testing to 127.0.0.1, which it does not do on Ubuntu; maybe this is related.
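For illustration, the entry CRC adds on macOS looks roughly like this (the exact hostname list is an assumption and varies by CRC version):
127.0.0.1 api.crc.testing console-openshift-console.apps-crc.testing oauth-openshift.apps-crc.testing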
Operating System
macOS
Hypervisor
KVM
Did you run crc setup before crc start?
yes
Running on
Laptop
Steps to reproduce
Install crc, run eval $(crc oc-env), then apply a deployment and service (full sequence sketched below).
Try to access the port listed in oc get services with any network client, and notice the error message.
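For concreteness, the full sequence looks like this (a sketch; example.yaml is the manifest shown under Additional context, and 32131 is the NodePort that oc get services reported in my case):
$ crc setup
$ crc start
$ eval $(crc oc-env)
$ oc apply -f example.yaml
$ curl -v api.crc.testing:32131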
CRC version
CRC version: 2.46.0+8f40e8
OpenShift version: 4.17.10
MicroShift version: 4.17.10
CRC status
CRC VM: Running
OpenShift: Running (v4.17.10)
RAM Usage: 29.51GB of 68.11GB
Disk Usage: 6.842GB of 10.95GB (Inside the CRC VM)
Cache Usage: 78.31GB
Cache Directory: /Users/user/.crc/cache
CRC config
- consent-telemetry : yes
- disk-size : 64
Host Operating System
ProductName: macOS
ProductVersion: 15.2
BuildVersion: 24C101
Expected behavior
Services are accessible via the port listed in oc get services, as they are on Linux hosts
Actual behavior
connect to 127.0.0.1 port <port listed in oc get services> failed: Connection refused
CRC Logs
Additional context
No response
For demonstration, this is the official kubernetes.io example Service, modified to use type LoadBalancer:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app.kubernetes.io/name: proxy
spec:
  containers:
  - name: nginx
    image: nginx:stable
    ports:
    - containerPort: 80
      name: http-web-svc
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: proxy
  ports:
  - name: name-of-service-port
    protocol: TCP
    port: 80
    targetPort: http-web-svc
I saved this to example.yaml and applied it with oc apply -f example.yaml.
Then after a minute or so, oc get pods shows:
nginx 1/1 Running 0 3m11s
and oc get services shows:
nginx-service LoadBalancer 10.217.5.127 <pending> 80:32131/TCP
then curl -v api.crc.testing:32131 prints out:
* Host api.crc.testing:32131 was resolved.
* IPv6: (none)
* IPv4: 127.0.0.1
* Trying 127.0.0.1:32131...
* connect to 127.0.0.1 port 32131 from 127.0.0.1 port 51181 failed: Connection refused
* Failed to connect to api.crc.testing port 32131 after 2 ms: Could not connect to server
* closing connection #0
curl: (7) Failed to connect to api.crc.testing port 32131 after 2 ms: Could not connect to server
This happens only on macOS. On Ubuntu, that last command succeeds, though notably with a different IP than 127.0.0.1.
macOS uses userspace networking mode, which binds to localhost. Ubuntu does not do this yet, but will with the next release. This is to allow use while connected to a route-all VPN, for work-from-home users.
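A quick way to see this from the macOS host (a sketch; it assumes 443 is among the forwarded ports, per the list linked below):
$ nc -vz 127.0.0.1 443
$ nc -vz 127.0.0.1 32131
The first should connect, since crc forwards it; the second is an arbitrary NodePort that nothing forwards, so it is refused.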
At the moment, we only forward application traffic via the web port. I haven't touched this part in a while, so I will have to ask @anjannath or @praveenkumar.
It is not recommended, but the behavior can be changed. The current setting is visible with:
$ crc config view
- network-mode : user
Change this value to switch networking modes.
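For example (a sketch only; system mode is the Linux default and may not be available on macOS, so treat the value as an assumption for your platform):
$ crc config set network-mode system
$ crc cleanup
$ crc setup
$ crc start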
The only ports we expose/rebind are:
https://github.com/crc-org/crc/blob/ab8cf0bebe03dfca59600973979b0ffe6cb51991/pkg/crc/machine/vsock.go#L77-L85
Reaching the node IP might not be possible in an actual cluster either.
Instead of the LoadBalancer service type, can you use the ClusterIP type and then create a Route on that service? The Route should be accessible from the host.
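A minimal sketch of that suggestion, reusing the Service from above (the Route name is illustrative):
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: proxy
  ports:
  - name: name-of-service-port
    protocol: TCP
    port: 80
    targetPort: http-web-svc
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: nginx-route
spec:
  to:
    kind: Service
    name: nginx-service
  port:
    targetPort: name-of-service-port
Alternatively, oc expose service nginx-service generates an equivalent Route, reachable at a host under apps-crc.testing.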
This worked, thanks.
Is there no way to get LoadBalancer to work as-is in OpenShift Local then?
OpenShift Local doesn't run any external load balancer, so it will not work unless you manually deploy something like https://docs.openshift.com/container-platform/4.17/hosted_control_planes/hcp-manage/hcp-manage-non-bm.html#hcp-bm-ingress_hcp-manage-non-bm. Please close the issue if it is resolved for you.
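For reference, that approach amounts to installing MetalLB and giving it an address pool; a minimal sketch, assuming MetalLB is already installed and that the illustrative address range is routable from the host:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: crc-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.130.20-192.168.130.30
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: crc-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - crc-pool
With such a pool in place, the LoadBalancer service above would get an external IP assigned instead of staying <pending>.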