os-frr: bgp routes not routable even though they are advertised
Important notices

Before you add a new report, we kindly ask you to acknowledge the following:

- [x] I have read the contributing guidelines at https://github.com/opnsense/plugins/blob/master/CONTRIBUTING.md
- [x] I have searched the existing issues, open and closed, and I'm convinced that mine is new.
- [x] The title contains the plugin to which this issue belongs
Describe the bug

I have routes that are being received by the plugin successfully, but they aren't routable from the rest of the network. One thing that seems potentially wrong is that the Internal flag is not set on the routes (as seen in the screenshot); I'm not sure how to correct that.

All neighbors are set with Next-Hop-Self and everything else is default.
```
traceroute 10.42.0.61
traceroute to 10.42.0.61 (10.42.0.61), 64 hops max, 40 byte packets
 1  * traceroute: sendto: No route to host
```
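For reference, the corresponding BGP and RIB state can be inspected from an OPNsense shell through FRR's vtysh; the prefix below is the one from the traceroute, and the output markers described in the comments are FRR's usual conventions:

```sh
# Is the prefix in the BGP table, and which path was selected ('>')?
vtysh -c 'show ip bgp 10.42.0.61'

# Did FRR hand the route to the kernel? In FRR's output, '*' marks
# FIB-installed routes and '>' the selected nexthop.
vtysh -c 'show ip route 10.42.0.61'

# Are the BGP sessions actually established?
vtysh -c 'show ip bgp summary'
```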
To Reproduce

Steps to reproduce the behavior:

- Enable and set up BGP neighbors
- Get routes from neighbors
- Try to route to announced routes
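As a sketch, the FRR side of these steps would correspond to a configuration roughly like the following (the ASNs and neighbor address are placeholders, not values from this report; the report states only that Next-Hop-Self is enabled):

```
router bgp 65000
 neighbor 10.42.0.10 remote-as 65010
 !
 address-family ipv4 unicast
  neighbor 10.42.0.10 next-hop-self
 exit-address-family
```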
Expected behavior
- Access announced routes
Environment

Software version used and hardware type if relevant, e.g.:

OPNsense 25.1.4_1, FRR: 1.44
https://docs.frrouting.org/en/latest/bgp.html#route-selection
Maybe you already have a locally attached route or a static route in the routing table that prevents the BGP-received route from being installed, because it matches with higher priority.
Is there a way to check that?
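One way to check, assuming shell access to the firewall (FreeBSD's netstat plus FRR's vtysh; the prefix is the one from the report):

```sh
# Kernel FIB: is there a connected/static entry covering 10.42.0.61
# that would outrank the BGP route?
netstat -rn -f inet | grep '^10\.42'

# FRR's view: all candidate routes for the prefix, including any
# that lost selection to a connected or static route.
vtysh -c 'show ip route 10.42.0.61'
```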
Looking at the routing table, I don't see anything suspicious; the route in question is the same one being reported by BGP.
Yeah, I'm not sure how to proceed or test further. I think this may be related to the recent update. Any ideas on how to debug further? It's being advertised, and it's in the routing table. If I hit the nodes directly (the next-hop IP), it works. If I hit the announced IP from one of the nodes that's running Cilium, it works. If I hit it from the LAN, it never seems to reach the node; traceroute says the host is down.
I had this working previously; the only difference I can tell is that the Internal flag is no longer marked in the IPv4 routing table, and I'm not sure what that corresponds to.
I'm not sure here either. To see if there's a difference, we would need the vtysh running-config from the state when it worked and from the state when it stopped working.
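The running configuration can be captured like this for a before/after comparison (the file names are arbitrary):

```sh
# Dump the current FRR configuration to a file...
vtysh -c 'show running-config' > frr-running-now.conf

# ...then, once a dump from a working state exists, compare the two.
diff -u frr-running-working.conf frr-running-now.conf
```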
Maybe somebody in the OPNsense forum can help, there are other BGP users around who might know more about troubleshooting this.
I think it's something with L2 vs. L3. I can access it through the VPN subnet. Disabling packet filtering had no effect.
Maybe try giving your LAN a different network than 10.42.0.0/24, so there's no overlap with what you want to do in BGP.
This issue has been automatically timed out (after 180 days of inactivity).
For more information about the policies for this repository, please read https://github.com/opnsense/plugins/blob/master/CONTRIBUTING.md.
If someone wants to step up and work on this issue, just let us know, so we can reopen the issue and assign an owner to it.