[BUG] I can't use remote access
I installed crc and it's working. I tried to set up remote access using this guide: https://access.redhat.com/documentation/en-us/red_hat_openshift_local/2.5/html/getting_started_guide/networking_gsg#setting-up-remote-server_gsg.

haproxy does not start, and I don't know how to set up remote access. Help me, please. I don't know what to do. My haproxy.cfg:
```
global
    log /dev/log local0

defaults
    balance roundrobin
    log global
    maxconn 100
    mode tcp
    timeout connect 5s
    timeout client 500s
    timeout server 500s

listen apps
    bind 172.16.6.66:80
    server crcvm api.crc.testing:80

listen apps_ssl
    bind 172.16.6.66:443
    server crcvm api.crc.testing:443

listen api
    bind 172.16.6.66:6443
    server crcvm api.crc.testing:6443
```
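A quick way to rule out syntax problems, assuming the file is installed at the default /etc/haproxy/haproxy.cfg, is haproxy's built-in config check:

```
# Parse the configuration and exit; reports "Configuration file is valid"
# or the first error, without actually starting the proxy.
sudo haproxy -c -f /etc/haproxy/haproxy.cfg
```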
Log of the failed run:

```
$ journalctl -xe
Jul 29 02:09:54 dev-msa.neoflex.ru su[19230]: pam_unix(su:session): session opened for user root by root(uid=1000)
Jul 29 02:09:56 dev-msa.neoflex.ru sudo[19251]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/systemctl start haproxy.service
Jul 29 02:09:56 dev-msa.neoflex.ru sudo[19251]: pam_systemd(sudo:session): Cannot create session: Already running in a session or user slice
Jul 29 02:09:56 dev-msa.neoflex.ru sudo[19251]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 29 02:09:56 dev-msa.neoflex.ru systemd[1]: Starting HAProxy Load Balancer...
-- Subject: Unit haproxy.service has begun start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit haproxy.service has begun starting up.
Jul 29 02:09:56 dev-msa.neoflex.ru haproxy[19257]: [ALERT] 209/020956 (19257) : Starting proxy apps: cannot bind socket [172.16.6.66:80]
Jul 29 02:09:56 dev-msa.neoflex.ru haproxy[19257]: [ALERT] 209/020956 (19257) : Starting proxy apps_ssl: cannot bind socket [172.16.6.66:443]
Jul 29 02:09:56 dev-msa.neoflex.ru haproxy[19257]: Proxy api started.
Jul 29 02:09:57 dev-msa.neoflex.ru systemd[1]: haproxy.service: Main process exited, code=exited, status=1/FAILURE
Jul 29 02:09:57 dev-msa.neoflex.ru sudo[19251]: pam_unix(sudo:session): session closed for user root
Jul 29 02:09:57 dev-msa.neoflex.ru systemd[1]: haproxy.service: Failed with result 'exit-code'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit haproxy.service has entered the 'failed' state with result 'exit-code'.
Jul 29 02:09:57 dev-msa.neoflex.ru systemd[1]: Failed to start HAProxy Load Balancer.
-- Subject: Unit haproxy.service has failed
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit haproxy.service has failed.
--
-- The result is failed.
```
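The two ALERT lines are the actual failure: haproxy exits because it cannot bind 172.16.6.66:80 and 172.16.6.66:443, which means another process already owns those ports (the netstat output further down shows the crc daemon itself listening on 80 and 443 in vsock mode). A standard way to check which process holds them:

```
# List listening TCP sockets on ports 80/443 together with the owning process
sudo ss -tlnp | grep -E ':(80|443)\s'
```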
### General information

- OS: Linux
- Hypervisor: KVM
- Did you run `crc setup` before starting it: Yes
- Running CRC on: Laptop
### CRC version

```
CRC version: 2.6.0+d606e64
OpenShift version: 4.10.22
Podman version: 4.1.0
```
### CRC status

```
DEBU CRC version: 2.6.0+d606e64
DEBU OpenShift version: 4.10.22
DEBU Podman version: 4.1.0
DEBU Running 'crc status'
DEBU Checking file: /home/dev/.crc/machines/crc/.crc-exist
DEBU Checking file: /home/dev/.crc/machines/crc/.crc-exist
DEBU Found binary path at /home/dev/.crc/bin/crc-driver-libvirt
DEBU Launching plugin server for driver libvirt
DEBU Plugin server listening at address 127.0.0.1:36285
DEBU () Calling .GetVersion
DEBU Using API Version 1
DEBU () Calling .SetConfigRaw
DEBU () Calling .GetMachineName
DEBU (crc) Calling .GetBundleName
DEBU (crc) Calling .GetState
DEBU (crc) DBG | time="2022-07-29T01:57:11+03:00" level=debug msg="Getting current state..."
DEBU (crc) DBG | time="2022-07-29T01:57:11+03:00" level=debug msg="Fetching VM..."
DEBU Running SSH command: df -B1 --output=size,used,target /sysroot | tail -1
DEBU Using ssh private keys: [/home/dev/.crc/machines/crc/id_ecdsa /home/dev/.crc/cache/crc_libvirt_4.10.22_amd64/id_ecdsa_crc]
DEBU SSH command results: err: <nil>, output: 32737570816 15663001600 /sysroot
DEBU Making call to close driver server
DEBU (crc) Calling .Close
DEBU Successfully made call to close driver server
DEBU Making call to close connection to plugin binary
CRC VM:          Running
OpenShift:       Running (v4.10.22)
Podman:
Disk Usage:      15.66GB of 32.74GB (Inside the CRC VM)
Cache Usage:     17.09GB
Cache Directory: /home/dev/.crc/cache
```
### CRC config

```
$ crc config view
- consent-telemetry                  : no
- host-network-access                : true
- network-mode                       : vsock
- skip-check-daemon-systemd-sockets  : true
```
### Host Operating System

```
NAME="CentOS Linux"
VERSION="8"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="8"
PLATFORM_ID="platform:el8"
PRETTY_NAME="CentOS Linux 8"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:8"
HOME_URL="https://centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-8"
CENTOS_MANTISBT_PROJECT_VERSION="8"
```
### Steps to reproduce

- crc cleanup
- crc delete
- crc setup
- crc daemon
- crc start --log-level debug -p /home/dev/pull-secret.txt -n 8.8.8.8 -c 8 -m 16000

Start log: https://gist.github.com/Kalinins93/d7faa1eee1a350b3244204fcd7184540
### Logs

Before gathering the logs, try the following to see if it fixes your issue:

```
$ crc delete -f
$ crc cleanup
$ crc setup
$ crc start --log-level debug
```

Output of `crc start --log-level debug`: https://gist.github.com/Kalinins93/228899d702c00fcb3301ee8875b7f782
If you are using `network-mode=vsock`, you cannot use haproxy; see https://github.com/code-ready/crc/issues/2667#issuecomment-915257243 for an alternative which should work:

```
firewall-cmd --zone=public --add-port=2222/tcp
firewall-cmd --zone=public --add-service=https
firewall-cmd --zone=public --add-service=http
```
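Note that plain firewall-cmd changes only affect the runtime configuration and are lost on reload or reboot; to keep them, the usual pattern is to repeat each rule with --permanent and then reload:

```
# Persist the rules, then reload so the permanent set becomes active
sudo firewall-cmd --permanent --zone=public --add-port=2222/tcp
sudo firewall-cmd --permanent --zone=public --add-service=https
sudo firewall-cmd --permanent --zone=public --add-service=http
sudo firewall-cmd --reload
```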
It doesn't work. This site can't be reached.
@Kalinins93 what is the actual error you are getting, and what do you actually want to try out?
I turned off haproxy and added the rules:

```
firewall-cmd --zone=public --add-port=2222/tcp
firewall-cmd --zone=public --add-service=https
firewall-cmd --zone=public --add-service=http
```

I'm trying to open the web console at https://host-machinewithcrc:6443/console and getting the error "This site can't be reached". If I try to connect with `crc console`, I also get an error.
@Kalinins93 The 6443 port is not opened in firewalld; you should do that for the API endpoint. The console listens on http, so it should work.
Excuse me, I didn't mention it, but I had already added that rule. What else is needed to resolve my problem? What can I do?
@Kalinins93 So I got some time today to try out remote access to crc when using user-mode networking. The following steps work for me to access the console.

On the machine where crc is running, if the firewall service is running, make sure the following services are opened. In my case the machine IP is 192.168.122.138.
```
<remote-host> $ sudo firewall-cmd --zone=public --add-service=https
<remote-host> $ sudo firewall-cmd --zone=public --add-service=http
```
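To confirm the services really are open in the active zone, firewalld can list it:

```
# Show all ports and services currently allowed in the public zone
sudo firewall-cmd --zone=public --list-all
```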
On the client machine from which you want to access the cluster, just add the following entry to /etc/hosts:

```
192.168.122.138 api.crc.testing canary-openshift-ingress-canary.apps-crc.testing console-openshift-console.apps-crc.testing default-route-openshift-image-registry.apps-crc.testing downloads-openshift-console.apps-crc.testing oauth-openshift.apps-crc.testing
```
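A quick sanity check that the client now resolves these names to the crc host (on Linux; on Windows the equivalent file is C:\Windows\System32\drivers\etc\hosts):

```
# Both should print 192.168.122.138, taken from the /etc/hosts entry above
getent hosts api.crc.testing
getent hosts console-openshift-console.apps-crc.testing
```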
Now open the browser, go to http://console-openshift-console.apps-crc.testing, and provide a username/password to access the console.
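If the browser still reports "This site can't be reached", testing from the client's command line first helps separate routing from name-resolution problems; any HTTP status code means the traffic reaches the cluster:

```
# Follow the http->https redirect and print only the final status code;
# -k skips verification of the cluster's self-signed certificate.
curl -ksL -o /dev/null -w '%{http_code}\n' http://console-openshift-console.apps-crc.testing
```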
To use the API server and interact with the cluster using the `oc` command, we need an extra step on the remote machine, because there the 6443 port is only bound to 127.0.0.1, which means it is only accessible from that machine itself.
```
<remote-host> $ sudo netstat -ntpl | grep crc
tcp    0   0 127.0.0.1:6443   0.0.0.0:*   LISTEN   3338/crc
tcp    0   0 127.0.0.1:2222   0.0.0.0:*   LISTEN   3338/crc
tcp6   0   0 :::80            :::*        LISTEN   3338/crc
tcp6   0   0 :::443           :::*        LISTEN   3338/crc

<remote-host> $ curl -X POST -d '{"local":"127.0.0.1:6443"}' --unix-socket ~/.crc/crc-http.sock http:/unix/network/services/forwarder/unexpose
<remote-host> $ curl -X POST -d '{"local":":6443","remote":"192.168.127.2:6443"}' --unix-socket ~/.crc/crc-http.sock http:/unix/network/services/forwarder/expose

<remote-host> $ sudo netstat -ntpl | grep crc
tcp    0   0 127.0.0.1:2222   0.0.0.0:*   LISTEN   3338/crc
tcp6   0   0 :::6443          :::*        LISTEN   3338/crc
tcp6   0   0 :::80            :::*        LISTEN   3338/crc
tcp6   0   0 :::443           :::*        LISTEN   3338/crc

<remote-host> $ sudo firewall-cmd --zone=public --add-port=6443/tcp
```
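The two curl calls talk to the crc daemon's HTTP API over its unix socket: unexpose removes the loopback-only forward for 6443, and expose recreates it bound to all interfaces, pointing at 192.168.127.2 (the VM's address on the user-mode network). These forwards live in the daemon, so they would need to be re-applied after a daemon restart. Assuming crc mounts the upstream gvisor-tap-vsock forwarder API under the same prefix as expose/unexpose, the current forwards can also be listed:

```
# List the daemon's active port forwards (endpoint assumed from gvisor-tap-vsock)
curl --unix-socket ~/.crc/crc-http.sock http:/unix/network/services/forwarder/all
```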
Now on the client machine you can interact with the apiserver using the `oc` command:

```
$ oc login -u developer api.crc.testing:6443
The server uses a certificate signed by an unknown authority.
You can bypass the certificate check, but any data you send to the server could be intercepted by others.
Use insecure connections? (y/n): y

Authentication required for https://api.crc.testing:6443 (openshift)
Username: developer
Password:
Login successful.

You don't have any projects. You can try to create a new project, by running

    oc new-project <projectname>
```
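If the passwords are needed, crc can print the preset developer and kubeadmin logins on the host where the cluster runs:

```
# Prints the developer and kubeadmin credentials for this cluster
crc console --credentials
```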
My problem was with the client hosts file on Windows. After much trying I broke the system and couldn't set up and start crc anymore, so I installed a new Linux server and installed crc fresh. Thanks a lot!!! I deleted crc and reinstalled with network-mode system. It all works with haproxy, but I can't get access to a REST service on port 8080. How does crc forward that traffic?
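For what it's worth, the haproxy setup above only forwards ports 80, 443, and 6443 to the VM, so a service listening on 8080 inside the cluster is not reachable automatically. Two common ways to get at it, sketched with a hypothetical service named my-rest-service (substitute your own service and project):

```
# Option 1: temporary tunnel from your machine straight to the service
oc port-forward svc/my-rest-service 8080:8080

# Option 2: expose the service through the OpenShift router, which is
# already reachable on ports 80/443 via haproxy
oc expose svc/my-rest-service --hostname=my-rest-service.apps-crc.testing
```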