srsRAN_Project
UEs Connected but can't reach the core
Issue Description
Hello, I connected 3 UEs to the gNB. Now I'm trying to ping the core from the UEs, but I can't.
Setup Details
I have the O-RAN SC RIC and the 5G Core running on one machine and the UEs + gNB are running on another machine.
The PRACH detector will not meet the performance requirements with the configuration {Format 0, ZCZ 0, SCS 1.25kHz, Rx ports 1}.
Lower PHY in executor blocking mode.

--== srsRAN gNB (commit 1483bda30) ==--

Connecting to AMF on 10.53.1.2:38412
Available radio types: zmq.
Connecting to NearRT-RIC on 10.0.2.10:36421
Cell pci=1, bw=20 MHz, 1T1R, dl_arfcn=368500 (n3), dl_freq=1842.5 MHz, dl_ssb_arfcn=368410, ul_freq=1747.5 MHz

==== gNodeB started ===
Type <t> to view trace
The gNB can connect to the core and the RIC correctly, and all the UEs can connect to the gNB:
Opening 1 channels in RF device=zmq with args=tx_port=tcp://127.0.0.1:2101,rx_port=tcp://127.0.0.1:2100,base_srate=23.04e6
Supported RF device list: zmq file
CHx base_srate=23.04e6
Current sample rate is 1.92 MHz with a base rate of 23.04 MHz (x12 decimation)
CH0 rx_port=tcp://127.0.0.1:2100
CH0 tx_port=tcp://127.0.0.1:2101
Current sample rate is 23.04 MHz with a base rate of 23.04 MHz (x1 decimation)
Current sample rate is 23.04 MHz with a base rate of 23.04 MHz (x1 decimation)
Waiting PHY to initialize ... done!
Attaching UE...
Random Access Transmission: prach_occasion=0, preamble_index=55, ra-rnti=0x39, tti=174
Random Access Complete. c-rnti=0x4602, ta=0
RRC Connected
PDU Session Establishment successful. IP: 10.45.1.2
RRC NR reconfiguration successful.
These are the routes of the UEs:

$ sudo ip netns exec ue1 ip route show
default via 10.45.1.1 dev tun_srsue
10.45.1.0/24 dev tun_srsue proto kernel scope link src 10.45.1.2
And these are the interfaces available on the machine with the gNB + UEs:

marco@darlene-G5-KC:~/srsRAN_Project/build/apps/gnb$ ifconfig
br-b84c28fd1d11: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
        inet 10.0.2.1 netmask 255.255.255.0 broadcast 10.0.2.255
        ether 02:42:d0:92:33:65 txqueuelen 0 (Ethernet)
        RX packets 0 bytes 0 (0.0 B)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 0 bytes 0 (0.0 B)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
        inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
        ether 02:42:bb:11:00:0a txqueuelen 0 (Ethernet)
        RX packets 0 bytes 0 (0.0 B)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 0 bytes 0 (0.0 B)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

enp8s0f1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 10.42.0.130 netmask 255.255.255.0 broadcast 10.42.0.255
        inet6 fe80::6721:e0e2:8fc7:ae4c prefixlen 64 scopeid 0x20<link>
        ether 80:fa:5b:96:c5:1d txqueuelen 1000 (Ethernet)
        RX packets 1473 bytes 280509 (280.5 KB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 6332 bytes 539234 (539.2 KB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

enx3a65b2cbdb60: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
        ether 3a:65:b2:cb:db:60 txqueuelen 1000 (Ethernet)
        RX packets 0 bytes 0 (0.0 B)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 0 bytes 0 (0.0 B)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
        inet 127.0.0.1 netmask 255.0.0.0
        inet6 ::1 prefixlen 128 scopeid 0x10<host>
        loop txqueuelen 1000 (Local Loopback)
        RX packets 29082748 bytes 357893898753 (357.8 GB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 29082748 bytes 357893898753 (357.8 GB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

wlp7s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 10.43.78.214 netmask 255.255.240.0 broadcast 10.43.79.255
        inet6 fe80::3654:27a4:b4e9:554f prefixlen 64 scopeid 0x20<link>
        ether 08:f8:bc:67:05:74 txqueuelen 1000 (Ethernet)
        RX packets 15188 bytes 16668777 (16.6 MB)
        RX errors 0 dropped 361 overruns 0 frame 0
        TX packets 10411 bytes 2069441 (2.0 MB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Shouldn't a tun_srsue interface appear here?
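(A hedged side note, assuming the UEs run in Linux network namespaces, which the `ip netns exec ue1` commands above suggest: a TUN device created inside a namespace is only visible inside that namespace, not in the default namespace's ifconfig output. It can be listed like this:)

```shell
# List the existing network namespaces
sudo ip netns list

# Show the interfaces inside ue1's namespace; tun_srsue should appear
# here rather than in the default namespace's ifconfig output.
sudo ip netns exec ue1 ip addr show tun_srsue
```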
EDIT: I added the ogstun interface on the machine with the gNB + UEs:

ogstun: flags=4241<UP,POINTOPOINT,NOARP,MULTICAST> mtu 1500
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 txqueuelen 500 (UNSPEC)
        RX packets 0 bytes 0 (0.0 B)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 0 bytes 0 (0.0 B)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
And I have the following route table on UE1:

$ sudo ip netns exec ue1 route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.45.1.1       0.0.0.0         UG    0      0        0 tun_srsue
10.45.1.0       0.0.0.0         255.255.255.0   U     0      0        0 tun_srsue
But I still can't reach the core:

$ sudo ip netns exec ue1 ping 10.45.1.1
PING 10.45.1.1 (10.45.1.1) 56(84) bytes of data.
^C
--- 10.45.1.1 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2043ms
Should I add a rule so that packets hitting the ogstun interface are forwarded to the interface connected to the computer running the core and the RIC?
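(For reference, a minimal sketch of what such forwarding + NAT would look like on the gNB + UEs machine. The interface name enp8s0f1 is taken from the ifconfig output above, where it sits on the 10.42.0.0/24 network toward the core; the rest is a generic Linux recipe, not srsRAN-specific advice:)

```shell
# Let the kernel route packets between interfaces
sudo sysctl -w net.ipv4.ip_forward=1

# NAT packets leaving toward the core-side network, so replies come back
# to this machine (enp8s0f1 is the 10.42.0.0/24 interface from ifconfig)
sudo iptables -t nat -A POSTROUTING -o enp8s0f1 -j MASQUERADE
```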
Expected Behavior
When running sudo ip netns exec ue1 ping 10.45.1.1, the UE should be able to reach the core network.
Actual Behaviour
marco@darlene-G5-KC:~/Downloads$ sudo ip netns exec ue1 ping 10.45.1.1
PING 10.45.1.1 (10.45.1.1) 56(84) bytes of data.
^C
--- 10.45.1.1 ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 3072ms
Are you running the dockerized version of Open5GS or one installed manually?
Hello Piotr, I'm running the dockerized version available in srsRAN_Project/docker on the NUC. But I don't know if the UE ping is hitting the core.
could you show the routing tables on the PC running the gnb and on the open5gs container?
I'm executing tcpdump to capture the ICMP packets on the core machine (the NUC). When I start the ping from UE1 I capture the following packets:

13:03:32.452547 br-26141c6f72a5 Out IP marco-NUC7i7BNH > 10.53.1.2: ICMP 10.42.0.130 udp port 2152 unreachable, length 136
13:03:32.452549 veth1eeab61 Out IP marco-NUC7i7BNH > 10.53.1.2: ICMP 10.42.0.130 udp port 2152 unreachable, length 136
13:03:33.478944 br-26141c6f72a5 Out IP marco-NUC7i7BNH > 10.53.1.2: ICMP 10.42.0.130 udp port 2152 unreachable, length 136
13:03:33.478947 veth1eeab61 Out IP marco-NUC7i7BNH > 10.53.1.2: ICMP 10.42.0.130 udp port 2152 unreachable, length 136
13:03:34.473379 lo In IP localhost > localhost: ICMP localhost udp port 8805 unreachable, length 66
13:03:34.519852 br-26141c6f72a5 Out IP marco-NUC7i7BNH > 10.53.1.2: ICMP 10.42.0.130 udp port 2152 unreachable, length 136
13:03:34.519854 veth1eeab61 Out IP marco-NUC7i7BNH > 10.53.1.2: ICMP 10.42.0.130 udp port 2152 unreachable, length 136
Core computer:
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.3.10.1 0.0.0.0 UG 600 0 0 wlp58s0
10.0.2.0 0.0.0.0 255.255.255.0 U 0 0 0 br-0ae98bbb2186
10.3.10.0 0.0.0.0 255.255.255.0 U 600 0 0 wlp58s0
10.42.0.0 0.0.0.0 255.255.255.0 U 100 0 0 eno1
10.45.0.0 0.0.0.0 255.255.0.0 U 0 0 0 ogstun
10.53.1.0 0.0.0.0 255.255.255.0 U 0 0 0 br-26141c6f72a5
169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 eno1
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
172.19.1.0 0.0.0.0 255.255.255.0 U 0 0 0 br-60b9fad393f0
gNB + UEs computer:
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.42.0.1 0.0.0.0 UG 100 0 0 enp8s0f1
10.42.0.0 0.0.0.0 255.255.255.0 U 100 0 0 enp8s0f1
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
ok, so you do not need to add ogstun manually on the core network PC. Please remove it.
Instead, add the following route on the PC running open5gs container:
sudo ip ro add 10.45.0.0/16 via 10.53.1.2
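(Whether the new route took effect can be checked with `ip route get`, which only asks the kernel for the chosen next hop without sending traffic; 10.45.1.2 is the UE address from the attach log above:)

```shell
# Ask the kernel which route it would use for the UE address;
# after the "ip ro add" above it should report "via 10.53.1.2".
ip route get 10.45.1.2
```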
Then, I am not sure how the gnb PC is able to reach the AMF if it has no route to 10.53.1.2 (does it work?). You need to add this route and probably enable IP forwarding on the gnb PC.
I have now the following routes on the core computer:
marco@marco-NUC7i7BNH:~/Desktop$ ip route show
default via 10.3.10.1 dev wlp58s0 proto dhcp metric 600
10.0.2.0/24 dev br-0ae98bbb2186 proto kernel scope link src 10.0.2.1
10.3.10.0/24 dev wlp58s0 proto kernel scope link src 10.3.10.118 metric 600
10.42.0.0/24 dev eno1 proto kernel scope link src 10.42.0.1 metric 100
10.45.0.0/16 via 10.53.1.2 dev br-26141c6f72a5
10.53.1.0/24 dev br-26141c6f72a5 proto kernel scope link src 10.53.1.1
169.254.0.0/16 dev eno1 scope link metric 1000
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
172.19.1.0/24 dev br-60b9fad393f0 proto kernel scope link src 172.19.1.1
But then, when I try to ping 10.45.1.1 using UE1 I got the following output:
$ sudo tcpdump -i any icmp
13:22:39.289176 br-26141c6f72a5 Out IP marco-NUC7i7BNH > 10.53.1.2: ICMP 10.42.0.130 udp port 2152 unreachable, length 136
13:22:39.289179 vethbcb83d5 Out IP marco-NUC7i7BNH > 10.53.1.2: ICMP 10.42.0.130 udp port 2152 unreachable, length 136
13:22:40.355509 br-26141c6f72a5 Out IP marco-NUC7i7BNH > 10.53.1.2: ICMP 10.42.0.130 udp port 2152 unreachable, length 136
13:22:40.355511 vethbcb83d5 Out IP marco-NUC7i7BNH > 10.53.1.2: ICMP 10.42.0.130 udp port 2152 unreachable, length 136
13:22:41.303954 br-26141c6f72a5 Out IP marco-NUC7i7BNH > 10.53.1.2: ICMP 10.42.0.130 udp port 2152 unreachable, length 136
About the gNB computer: since I get the following output when I start the gNB:
$ sudo ./gnb -c gnb_zmq.yaml
The PRACH detector will not meet the performance requirements with the configuration {Format 0, ZCZ 0, SCS 1.25kHz, Rx ports 1}.
Lower PHY in executor blocking mode.
--== srsRAN gNB (commit 1483bda30) ==--
Connecting to AMF on 10.53.1.2:38412
Available radio types: zmq.
Connecting to NearRT-RIC on 10.0.2.10:36421
Cell pci=1, bw=20 MHz, 1T1R, dl_arfcn=368500 (n3), dl_freq=1842.5 MHz, dl_ssb_arfcn=368410, ul_freq=1747.5 MHz
==== gNodeB started ===
Type <t> to view trace
Doesn't this mean I can already reach the AMF, or do I still need to specify a route to it?
hmm, did you enable IP forwarding on the gnb PC?
sudo sysctl -w net.ipv4.ip_forward=1
sudo iptables -t nat -A POSTROUTING -o <IFNAME> -j MASQUERADE
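(A couple of hedged follow-up checks for these two commands; generic Linux, not srsRAN-specific, and `<IFNAME>` stays a placeholder as above:)

```shell
# Confirm forwarding is enabled; prints "net.ipv4.ip_forward = 1" once set
sysctl net.ipv4.ip_forward

# Confirm the MASQUERADE rule landed in the NAT table, with packet counters
# so you can see whether it is actually matching traffic
sudo iptables -t nat -L POSTROUTING -n -v
```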
I have done this on the gNB computer:
marco@darlene-G5-KC:~/srsRAN_4G/build/srsue/src$ sudo sysctl -w net.ipv4.ip_forward=1
net.ipv4.ip_forward = 1
marco@darlene-G5-KC:~/srsRAN_4G/build/srsue/src$ sudo iptables -t nat -A POSTROUTING -o wlp7s0 -j MASQUERADE
And now I have the following route table:
marco@darlene-G5-KC:~/srsRAN_4G/build/srsue/src$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.42.0.1 0.0.0.0 UG 100 0 0 enp8s0f1
0.0.0.0 10.3.10.1 0.0.0.0 UG 600 0 0 wlp7s0
10.3.10.0 0.0.0.0 255.255.255.0 U 600 0 0 wlp7s0
10.42.0.0 0.0.0.0 255.255.255.0 U 100 0 0 enp8s0f1
169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 wlp7s0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
The gNB and the UEs can connect, but the ping still doesn't work:
marco@darlene-G5-KC:~/srsRAN_4G/build/srsue/src$ sudo ip netns exec ue1 ping 10.45.1.1
PING 10.45.1.1 (10.45.1.1) 56(84) bytes of data.
^C
--- 10.45.1.1 ping statistics ---
10 packets transmitted, 0 received, 100% packet loss, time 9218ms
On the core computer, while running tcpdump, I can still see that I'm receiving something from the gNB computer when I start the ping on UE1:
13:39:27.960045 br-26141c6f72a5 Out IP marco-NUC7i7BNH > 10.53.1.2: ICMP 10.42.0.130 udp port 2152 unreachable, length 136
13:39:27.960047 vethbcb83d5 Out IP marco-NUC7i7BNH > 10.53.1.2: ICMP 10.42.0.130 udp port 2152 unreachable, length 136
13:39:28.988822 br-26141c6f72a5 Out IP marco-NUC7i7BNH > 10.53.1.2: ICMP 10.42.0.130 udp port 2152 unreachable, length 136
13:39:28.988824 vethbcb83d5 Out IP marco-NUC7i7BNH > 10.53.1.2: ICMP 10.42.0.130 udp port 2152 unreachable, length 136
13:39:30.028386 br-26141c6f72a5 Out IP marco-NUC7i7BNH > 10.53.1.2: ICMP 10.42.0.130 udp port 2152 unreachable, length 136
13:39:30.028388 vethbcb83d5 Out IP marco-NUC7i7BNH > 10.53.1.2: ICMP 10.42.0.130 udp port 2152 unreachable, length 136
But I still don't get the ICMP reply on UE1.
which PC is 10.42.0.130?
could you share your gnb and UE config?
could you check if you can ping 10.53.1.2 from the gnb pc?
marco@darlene-G5-KC:~/srsRAN_4G/build/srsue/src$ ping 10.53.1.2
PING 10.53.1.2 (10.53.1.2) 56(84) bytes of data.
64 bytes from 10.53.1.2: icmp_seq=1 ttl=63 time=0.596 ms
64 bytes from 10.53.1.2: icmp_seq=2 ttl=63 time=0.376 ms
64 bytes from 10.53.1.2: icmp_seq=3 ttl=63 time=0.797 ms
^C
--- 10.53.1.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2028ms
rtt min/avg/max/mdev = 0.376/0.589/0.797/0.171 ms
and if you can ping the gnb from the open5gs container?
docker exec -it open5gs_5gc bash
ping 10.53.1.1
Yes, I was also able to do it:
root@ec86af9c1326:/open5gs# ping 10.53.1.1
PING 10.53.1.1 (10.53.1.1) 56(84) bytes of data.
64 bytes from 10.53.1.1: icmp_seq=1 ttl=64 time=0.079 ms
64 bytes from 10.53.1.1: icmp_seq=2 ttl=64 time=0.033 ms
64 bytes from 10.53.1.1: icmp_seq=3 ttl=64 time=0.055 ms
64 bytes from 10.53.1.1: icmp_seq=4 ttl=64 time=0.041 ms
64 bytes from 10.53.1.1: icmp_seq=5 ttl=64 time=0.034 ms
^C
--- 10.53.1.1 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4103ms
rtt min/avg/max/mdev = 0.033/0.048/0.079/0.017 ms
ah, my mistake, you need to check if you can ping the gnb (10.42.0.130) from open5gs container.
Ah ok, this one I'm not able to ping:
root@ec86af9c1326:/open5gs# ping 10.42.0.130
PING 10.42.0.130 (10.42.0.130) 56(84) bytes of data.
From 10.53.1.1 icmp_seq=1 Destination Port Unreachable
From 10.53.1.1 icmp_seq=2 Destination Port Unreachable
From 10.53.1.1 icmp_seq=3 Destination Port Unreachable
^C
--- 10.42.0.130 ping statistics ---
3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2046ms
could you print the routing table inside the open5gs container?
you need to add a route to gnb over eth0 interface.
Sorry, it didn't work:
root@ec86af9c1326:/open5gs# ping 10.42.0.130
PING 10.42.0.130 (10.42.0.130) 56(84) bytes of data.
From 10.53.1.1 icmp_seq=1 Destination Port Unreachable
From 10.53.1.1 icmp_seq=2 Destination Port Unreachable
From 10.53.1.1 icmp_seq=3 Destination Port Unreachable
^C
--- 10.42.0.130 ping statistics ---
3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2046ms
I also tried adding the single IP (10.42.0.130) instead of the subnet, and also tried adding a route over the ogstun interface. Neither worked.
could you show the routes you tried to add? and the routing table?
root@ec86af9c1326:/open5gs# ip route add 10.42.0.130 dev eth0
root@ec86af9c1326:/open5gs# ping 10.42.0.130
PING 10.42.0.130 (10.42.0.130) 56(84) bytes of data.
From 10.53.1.2 icmp_seq=1 Destination Host Unreachable
From 10.53.1.2 icmp_seq=2 Destination Host Unreachable
From 10.53.1.2 icmp_seq=3 Destination Host Unreachable
^C
--- 10.42.0.130 ping statistics ---
4 packets transmitted, 0 received, +3 errors, 100% packet loss, time 3076ms
pipe 4
root@ec86af9c1326:/open5gs# ip route delete 10.42.0.130
root@ec86af9c1326:/open5gs# ip route add 10.42.0.130 dev ogstun
root@ec86af9c1326:/open5gs# ping 10.42.0.130
PING 10.42.0.130 (10.42.0.130) 56(84) bytes of data.
^C
--- 10.42.0.130 ping statistics ---
8 packets transmitted, 0 received, 100% packet loss, time 7156ms
root@ec86af9c1326:/open5gs# ip route delete 10.42.0.130
root@ec86af9c1326:/open5gs# ip route add 10.42.0.0/24 dev eth0
root@ec86af9c1326:/open5gs# ping 10.42.0.130
PING 10.42.0.130 (10.42.0.130) 56(84) bytes of data.
From 10.53.1.2 icmp_seq=1 Destination Host Unreachable
From 10.53.1.2 icmp_seq=2 Destination Host Unreachable
From 10.53.1.2 icmp_seq=3 Destination Host Unreachable
^C
--- 10.42.0.130 ping statistics ---
4 packets transmitted, 0 received, +3 errors, 100% packet loss, time 3079ms
pipe 4
root@ec86af9c1326:/open5gs# ip route delete 10.42.0.0/24
root@ec86af9c1326:/open5gs# ip route add 10.42.0.0/24 dev ogstun
root@ec86af9c1326:/open5gs# ping 10.42.0.130
PING 10.42.0.130 (10.42.0.130) 56(84) bytes of data.
^C
--- 10.42.0.130 ping statistics ---
6 packets transmitted, 0 received, 100% packet loss, time 5119ms
I was adding and removing this route, so the routing table was the same as the one I sent you, plus this new temporary test route.
could you try sudo ip ro add 10.42.0.0/24 via 10.53.1.1
on the gnb PC, please add sudo ip ro add 10.53.1.0/24 via 10.42.0.1 (but since 10.42.0.1 is the default gw, maybe this one is not needed)
I added the route in the gNB PC just to be sure:
marco@darlene-G5-KC:~/srsRAN_4G/build/srsue/src$ sudo ip ro add 10.53.1.0/24 via 10.42.0.1
[sudo] password for marco:
marco@darlene-G5-KC:~/srsRAN_4G/build/srsue/src$ ip ro
default via 10.42.0.1 dev enp8s0f1 proto dhcp metric 100
default via 10.3.10.1 dev wlp7s0 proto dhcp metric 600
10.3.10.0/24 dev wlp7s0 proto kernel scope link src 10.3.10.114 metric 600
10.42.0.0/24 dev enp8s0f1 proto kernel scope link src 10.42.0.130 metric 100
10.53.1.0/24 via 10.42.0.1 dev enp8s0f1
169.254.0.0/16 dev wlp7s0 scope link metric 1000
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
Then I tried to make the change in the Open5GS container:
root@ec86af9c1326:/open5gs# ip route delete 10.42.0.0/24
root@ec86af9c1326:/open5gs# ip ro add 10.42.0.0/24 via 10.53.1.1
root@ec86af9c1326:/open5gs# ping 10.42.0.130
PING 10.42.0.130 (10.42.0.130) 56(84) bytes of data.
From 10.53.1.1 icmp_seq=1 Destination Port Unreachable
From 10.53.1.1 icmp_seq=2 Destination Port Unreachable
From 10.53.1.1 icmp_seq=3 Destination Port Unreachable
^C
--- 10.42.0.130 ping statistics ---
3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2028ms
root@ec86af9c1326:/open5gs# ping 10.42.0.130
PING 10.42.0.130 (10.42.0.130) 56(84) bytes of data.
From 10.53.1.1 icmp_seq=1 Destination Port Unreachable
From 10.53.1.1 icmp_seq=2 Destination Port Unreachable
From 10.53.1.1 icmp_seq=3 Destination Port Unreachable
^C
--- 10.42.0.130 ping statistics ---
3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2030ms
root@ec86af9c1326:/open5gs# ip ro
default via 10.53.1.1 dev eth0
10.42.0.0/24 via 10.53.1.1 dev eth0
did you enable the IP forwarding on the pc running open5gs?
Yes. By doing this:

sudo sysctl -w net.ipv4.ip_forward=1
sudo iptables -t nat -A POSTROUTING -s 10.45.0.0/16 ! -o ogstun -j MASQUERADE
hmm, but there is no ogstun interface on this PC, right?
Yes, you're right. Probably the command was just ignored. I can show you that there is no ogstun interface on the computer where the core is running:
marco@marco-NUC7i7BNH:~/Desktop$ ifconfig
br-0ae98bbb2186: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.0.2.1 netmask 255.255.255.0 broadcast 10.0.2.255
inet6 fe80::42:1fff:fed2:a166 prefixlen 64 scopeid 0x20<link>
ether 02:42:1f:d2:a1:66 txqueuelen 0 (Ethernet)
RX packets 668 bytes 47528 (47.5 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 729 bytes 81325 (81.3 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
br-26141c6f72a5: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.53.1.1 netmask 255.255.255.0 broadcast 10.53.1.255
inet6 fe80::42:73ff:fed5:8e7f prefixlen 64 scopeid 0x20<link>
ether 02:42:73:d5:8e:7f txqueuelen 0 (Ethernet)
RX packets 2051 bytes 195104 (195.1 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 3607 bytes 348353 (348.3 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
br-60b9fad393f0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.19.1.1 netmask 255.255.255.0 broadcast 172.19.1.255
inet6 fe80::42:3cff:fe75:58d5 prefixlen 64 scopeid 0x20<link>
ether 02:42:3c:75:58:d5 txqueuelen 0 (Ethernet)
RX packets 23 bytes 3420 (3.4 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 80 bytes 15289 (15.2 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
ether 02:42:e3:11:05:38 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eno1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.42.0.1 netmask 255.255.255.0 broadcast 10.42.0.255
inet6 fe80::f2e9:74dd:fd:a8b8 prefixlen 64 scopeid 0x20<link>
ether 94:c6:91:a8:6d:4a txqueuelen 1000 (Ethernet)
RX packets 18200 bytes 2528860 (2.5 MB)
RX errors 0 dropped 10 overruns 0 frame 0
TX packets 17274 bytes 7237553 (7.2 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device interrupt 16 memory 0xdc300000-dc320000
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 256095 bytes 35787497 (35.7 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 256095 bytes 35787497 (35.7 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
veth134695a: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::70e9:31ff:fe15:fd1d prefixlen 64 scopeid 0x20<link>
ether 72:e9:31:15:fd:1d txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 63 bytes 6266 (6.2 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
veth2a94813: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::2c89:54ff:feb5:50b0 prefixlen 64 scopeid 0x20<link>
ether 2e:89:54:b5:50:b0 txqueuelen 0 (Ethernet)
RX packets 43196 bytes 9394806 (9.3 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 43276 bytes 6464275 (6.4 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
veth5fed9de: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::2494:72ff:fea4:3c9 prefixlen 64 scopeid 0x20<link>
ether 26:94:72:a4:03:c9 txqueuelen 0 (Ethernet)
RX packets 33 bytes 2179 (2.1 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 89 bytes 8354 (8.3 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
veth7edc6e3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::44b2:ff:fece:9fd1 prefixlen 64 scopeid 0x20<link>
ether 46:b2:00:ce:9f:d1 txqueuelen 0 (Ethernet)
RX packets 43932 bytes 7055226 (7.0 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 84089 bytes 9997522 (9.9 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
vethbba5402: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::6cc2:60ff:febe:2559 prefixlen 64 scopeid 0x20<link>
ether 6e:c2:60:be:25:59 txqueuelen 0 (Ethernet)
RX packets 1524 bytes 98484 (98.4 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1059 bytes 74327 (74.3 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
vethbcb83d5: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::50d1:bbff:feb4:1ff0 prefixlen 64 scopeid 0x20<link>
ether 52:d1:bb:b4:1f:f0 txqueuelen 0 (Ethernet)
RX packets 1392 bytes 146188 (146.1 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2686 bytes 224528 (224.5 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
vethc6eefe6: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::8c5b:fbff:fe3f:c8f4 prefixlen 64 scopeid 0x20<link>
ether 8e:5b:fb:3f:c8:f4 txqueuelen 0 (Ethernet)
RX packets 196 bytes 13772 (13.7 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 280 bytes 23722 (23.7 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
vethd5ed35e: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::f8c5:d2ff:fe03:c4fa prefixlen 64 scopeid 0x20<link>
ether fa:c5:d2:03:c4:fa txqueuelen 0 (Ethernet)
RX packets 125381 bytes 16306674 (16.3 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 85855 bytes 16356985 (16.3 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
vethf3f140a: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::f082:2bff:fe19:e724 prefixlen 64 scopeid 0x20<link>
ether f2:82:2b:19:e7:24 txqueuelen 0 (Ethernet)
RX packets 23 bytes 3742 (3.7 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 121 bytes 19769 (19.7 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
wlp58s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.3.10.118 netmask 255.255.255.0 broadcast 10.3.10.255
inet6 fe80::2aca:75c7:8117:658e prefixlen 64 scopeid 0x20<link>
ether 48:a4:72:d9:b6:ab txqueuelen 1000 (Ethernet)
RX packets 245264 bytes 224032903 (224.0 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 108072 bytes 48929782 (48.9 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
I also noticed, when capturing packets on the core computer while trying to ping the gNB machine:
root@ec86af9c1326:/open5gs# ping 10.42.0.130
PING 10.42.0.130 (10.42.0.130) 56(84) bytes of data.
From 10.53.1.1 icmp_seq=1 Destination Port Unreachable
From 10.53.1.1 icmp_seq=2 Destination Port Unreachable
From 10.53.1.1 icmp_seq=3 Destination Port Unreachable
From 10.53.1.1 icmp_seq=4 Destination Port Unreachable
From 10.53.1.1 icmp_seq=5 Destination Port Unreachable
From 10.53.1.1 icmp_seq=6 Destination Port Unreachable
From 10.53.1.1 icmp_seq=7 Destination Port Unreachable
From 10.53.1.1 icmp_seq=8 Destination Port Unreachable
From 10.53.1.1 icmp_seq=9 Destination Port Unreachable
From 10.53.1.1 icmp_seq=10 Destination Port Unreachable
18:00:28.797128 lo In IP localhost > localhost: ICMP localhost udp port 8805 unreachable, length 66
18:00:29.261438 vethbcb83d5 P IP 10.53.1.2 > 10.42.0.130: ICMP echo request, id 28, seq 4, length 64
18:00:29.261438 br-26141c6f72a5 In IP 10.53.1.2 > 10.42.0.130: ICMP echo request, id 28, seq 4, length 64
18:00:29.261476 br-26141c6f72a5 Out IP marco-NUC7i7BNH > 10.53.1.2: ICMP 10.42.0.130 protocol 1 port 45424 unreachable, length 92
18:00:29.261479 vethbcb83d5 Out IP marco-NUC7i7BNH > 10.53.1.2: ICMP 10.42.0.130 protocol 1 port 45424 unreachable, length 92
18:00:29.345951 lo In IP localhost > localhost: ICMP localhost udp port 8805 unreachable, length 66
18:00:30.285318 vethbcb83d5 P IP 10.53.1.2 > 10.42.0.130: ICMP echo request, id 28, seq 5, length 64
18:00:30.285318 br-26141c6f72a5 In IP 10.53.1.2 > 10.42.0.130: ICMP echo request, id 28, seq 5, length 64
18:00:30.285351 br-26141c6f72a5 Out IP marco-NUC7i7BNH > 10.53.1.2: ICMP 10.42.0.130 protocol 1 port 30226 unreachable, length 92
18:00:30.285353 vethbcb83d5 Out IP marco-NUC7i7BNH > 10.53.1.2: ICMP 10.42.0.130 protocol 1 port 30226 unreachable, length 92
18:00:31.299876 lo In IP localhost > localhost: ICMP localhost udp port 8805 unreachable, length 66
18:00:31.309534 vethbcb83d5 P IP 10.53.1.2 > 10.42.0.130: ICMP echo request, id 28, seq 6, length 64
18:00:31.309534 br-26141c6f72a5 In IP 10.53.1.2 > 10.42.0.130: ICMP echo request, id 28, seq 6, length 64
18:00:31.309578 br-26141c6f72a5 Out IP marco-NUC7i7BNH > 10.53.1.2: ICMP 10.42.0.130 protocol 1 port 58034 unreachable, length 92
18:00:31.309581 vethbcb83d5 Out IP marco-NUC7i7BNH > 10.53.1.2: ICMP 10.42.0.130 protocol 1 port 58034 unreachable, length 92
18:00:31.848625 lo In IP localhost > localhost: ICMP localhost udp port 8805 unreachable, length 66
18:00:32.333303 vethbcb83d5 P IP 10.53.1.2 > 10.42.0.130: ICMP echo request, id 28, seq 7, length 64
18:00:32.333303 br-26141c6f72a5 In IP 10.53.1.2 > 10.42.0.130: ICMP echo request, id 28, seq 7, length 64
18:00:32.333336 br-26141c6f72a5 Out IP marco-NUC7i7BNH > 10.53.1.2: ICMP 10.42.0.130 protocol 1 port 63316 unreachable, length 92
18:00:32.333338 vethbcb83d5 Out IP marco-NUC7i7BNH > 10.53.1.2: ICMP 10.42.0.130 protocol 1 port 63316 unreachable, length 92
that the packets are being sent to 10.42.0.130 by 10.53.1.2, and there shouldn't be any problem with this, since the gNB computer can ping 10.53.1.2.
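(One note on the captures above: UDP port 2152 is the standard GTP-U user-plane tunnel port, so the repeated "udp port 2152 unreachable" ICMP errors suggest the GTP-U packets addressed to the gNB at 10.42.0.130:2152 are being rejected somewhere on the path back from the core. A diagnostic sketch, with commands not taken from the thread:)

```shell
# On the core machine: watch GTP-U (UDP 2152) traffic in both directions
sudo tcpdump -i any -n udp port 2152

# On the gNB machine: check whether anything is listening on UDP 2152
# (the gNB's GTP-U socket should show up here while it is running)
sudo ss -ulpn | grep 2152
```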