Feature Request: support for custom devices/operating systems
Is your feature request related to a problem? Please describe.
This is a feature request for running a full NetBird client on operating systems that are not explicitly supported (e.g. MikroTik's RouterOS).
The original issue https://github.com/netbirdio/netbird/issues/496 veered off-topic: it started with supporting limited-capability devices and moved towards running NetBird on custom devices or operating systems, which by no means addresses the original purpose of running on low-powered/low-spec devices in a limited capacity to reduce resource consumption.
Additional context: see https://github.com/netbirdio/netbird/issues/496 for prior discussion.
https://github.com/netbirdio/netbird/issues/496#issuecomment-2933922673 — the latest relevant post from @excavador outlines three implementation options for RouterOS:
A) embed a RouterOS configuration library within NetBird
B) provide configuration for, and call out to, external programs that set up the operating system where needed (firewall, WireGuard, etc.)
C) implement a (most likely Go) plugin system achieving the same as option B
Hello!
Thank you so much for your reply!
I took a look at the NetBird source code, and it's clear how you build the abstractions over WireGuard/DNS/firewall configuration. I understand how to implement a MikroTik API client to perform the same actions on the MikroTik side. Prerequisite: the container on the MikroTik must run in host network mode (to expose the NAT traversal port directly).
So, @mlsmaycon, my question is the following:
Option A: would you like support for the RouterOS API directly in the NetBird client source code (maybe behind a Go build tag)? OR
Option B: would you prefer an "external" kind of configuration, where NetBird receives paths to custom scripts in its configuration? OR
Option C: would you prefer a "plugin" kind of configuration, where NetBird loads a Go plugin implementing a certain interface?
The difference is the following:
Option A: simplest for users, but it forces support for every other device, beyond RouterOS, to land directly in the code base.
Option B: ideal for non-developers and system administrators; they would be able to DIY an integration with their own router or very custom devices.
Option C: ideal for developers: implement your own plugin and load it, with no changes to the code base.
@mlsmaycon I could implement any of these approaches; my question to you is: which one does the NetBird team prefer?
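As background for the API-client idea above: RouterOS v7 exposes a REST API that mirrors the CLI menu paths, which is one way such a client could talk to the router. A minimal sketch; the router address and credentials are placeholders:

```shell
#!/bin/sh
# Minimal helpers for the RouterOS v7 REST API (available since RouterOS 7.1).
# ROUTER and the credentials are placeholders; -k skips certificate
# verification for a self-signed router certificate (avoid in production).
ROUTER="192.168.88.1"
ROS_USER="admin"
ROS_PASS="changeme"

ros_get() {  # GET a resource path, e.g.: ros_get interface/wireguard
  curl -ks -u "$ROS_USER:$ROS_PASS" "https://$ROUTER/rest/$1"
}

ros_put() {  # create a record from a JSON body on stdin
  curl -ks -u "$ROS_USER:$ROS_PASS" -X PUT \
       -H "content-type: application/json" -d @- "https://$ROUTER/rest/$1"
}
```

For example, `ros_get interface/wireguard` would list the WireGuard interfaces as JSON.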
@nazarewk Hello!
Could you please clarify the next steps?
For example, will the NetBird team discuss this internally, or something else? I do not mind putting my effort into the implementation; what I do mind is doing it "for nothing", i.e. if the implementation would not even be accepted by NetBird.
That's why I am asking for clarification of the issue, and my question is: how do we move forward from the current situation?
Generally, running a full NetBird client on a custom device won't help in terms of supporting low-spec devices, which most of the networking devices are. I am not trying to discourage you, but I am still not 100% sure this will solve your problem, as NetBird client can easily use upwards of 50 MB of RAM.
To give you an update: we do have some brief-but-ongoing discussions in our internal Slack. I am veering towards option B due to both involving the least maintenance and being the most attractive to sysops/IT departments who aren't the most proficient with coding. Part of the team likes the benefits outlined, but we haven't reached a consensus yet.
PS: The topic looks a lot like an extended variant of https://github.com/netbirdio/netbird/issues/3591 , which is another benefit and maybe could be tackled together.
> Generally, running a full NetBird client on a custom device won't help in terms of supporting low-spec devices, which most of the networking devices are. I am not trying to discourage you, but I am still not 100% sure this will solve your problem, as NetBird client can easily use upwards of 50 MB of RAM.
This feature only makes sense for high WireGuard throughput (my MikroTik can handle 1 Gbps natively, but due to container overhead it reaches only 200-300 Mbps), and such a device will have the 50 MB of RAM to spare.
> To give you an update: we do have some brief-but-ongoing discussions in our internal Slack. I am veering towards option B due to both involving the least maintenance and being the most attractive to sysops/IT departments who aren't the most proficient with coding. Part of the team likes the benefits outlined, but we haven't reached a consensus yet.
Clear
> PS: The topic looks a lot like an extended variant of https://github.com/netbirdio/netbird/issues/3591 , which is another benefit and maybe could be tackled together.
Yes, agree
@nazarewk more considerations for your team.
For "small hardware" this feature does not make sense (and any solution is okay), because most likely people will use NetBird only for management access; it does not matter whether you have 500 Kbps or 10 Kbps for a shell.
This feature is about optimizing heavy traffic, like site-to-site VPC connections (my case: a hybrid between self-hosted and Scaleway). With 200 Mbps or 1 Gbps of traffic it matters: you have a big device and you expect heavy traffic. Only in this case (big device + heavy traffic) is the difference between "WireGuard from the container" and "WireGuard from the device" significant, and you will definitely have enough RAM to keep the NetBird client up and running.
@nazarewk even if a device is NOT able to run NetBird, it does not matter! In that case I could put NetBird on some host INSIDE the network, and it becomes a device-configuration and scripting problem: how to forward NetBird's requests to the device.
The only real problem with "external" orchestration is dealing with the port for NAT traversal, but long story short, this is not a problem.
My 2 cents
@nazarewk I promised on our call to get back to you in about a week, so here I am.
I have some additional context
- my initial test was done on a MikroTik 5xxx-series device, which I temporarily borrowed and have since returned
- my current company device, a MikroTik RB4011iGS+RM, does not support containers, but does support WireGuard
So, my current state
- I am using an additional device inside the MikroTik network as a NetBird VPN routing peer
What I want
- instead of hosting the NetBird routing peer on this external device, use MikroTik's WireGuard
How we could do that
Option 1: build a native NetBird package for this device (arm32/armv7l architecture, plus the challenge of building a native application for MikroTik)
Option 2: build an "external configuration layer" where some external device with a connection to the MikroTik handles it (most likely this would be option B with external custom scripts)
@nazarewk thank you so much, NetBird team, for documenting how to deal with a Docker container on MikroTik, but that applies to the MikroTik 5xxx series, while the 4xxx series is also widely adopted and in general a very solid device with native WireGuard support, and I would like to use it as a NetBird routing peer.
@netbirddev how could we move forward in light of this extended context?
@nazarewk @netbirddev Hello guys!
I have updates.
Historical context
- I borrowed an RB5xxx and installed the NetBird client. It worked, and I found some speed limitations due to CPU throttling. I returned the router afterwards.
- For our needs we bought an RB4011. During configuration I did not see any container commands and concluded that the router does not support containers.
Recent updates
- I figured out that the RB4011 actually SUPPORTS containers; you just need to activate the feature! The borrowed RB5xxx was already configured, and I do not know whether containers are activated by default or the owner enabled them explicitly.
Long story short: NetBird works in a container as a network routing peer on the RB4011 🎉
I ran some benchmarks and hit a limit around 100 Mbps. After digging, I found the following
- the NetBird connection from/to the RB4011 is established in relay mode, not P2P mode
- NetBird installed on any host connected to the RB4011 by Ethernet cable is also in relay mode, not P2P
- if a device is connected to the home provider's network directly, NetBird establishes a P2P connection, not relay mode
So, my current problem
- I have not yet figured out how to configure the RB4011 (RouterOS v7) to make NAT traversal work :(
- without this configuration I am not able to properly benchmark the VPN connection, because I am limited by the relay server, not by NetBird/container limits on the RB4011 :(
In other words: without a working P2P mode for NetBird (working NAT traversal on the MikroTik RB4011), my benchmark bottleneck is the relay server, not the MikroTik :(
What we could do
- I could file a request with NetBird support asking for help configuring the MikroTik RB4011 for working NAT traversal (even WITHOUT NetBird on the MikroTik this is a critical point, to achieve better performance and avoid load on the NetBird relay servers)
- you could extend your excellent article "NetBird client on MikroTik router" with information on how to configure the MikroTik for working NAT traversal
Again, I checked the following.
Case A
- my laptop + mobile in USB tethering mode, i.e. a different internet connection (Odido Mobile)
- a server connected to the home ISP (Odido Fiber)
- connection established in P2P mode
darwin-aarch64-1.truvity.internal:
NetBird IP: 100.97.84.234
Public key: DOXmlovGWN4RahgXIVL47ekCrdoSbxIkajZRQmVY9Hw=
Status: Connected
-- detail --
Connection type: P2P
ICE candidate (Local/Remote): srflx/host
ICE candidate endpoints (Local/Remote): 143.177.126.54:40808/192.168.1.142:51820
Relay server address: rels://streamline-de-fra1-0.relay.netbird.io:443
Last connection update: 8 seconds ago
Last WireGuard handshake: 8 seconds ago
Transfer status (received/sent) 92 B/180 B
Quantum resistance: false
Networks: -
Latency: 0s
Case B
- my laptop + mobile in USB tethering mode, i.e. a different internet connection (Odido Mobile)
- a server connected to the MikroTik RB4011; the MikroTik RB4011 is connected to the home ISP (Odido Fiber)
- connection established in relay mode
➜ netbird status -d | grep darwin-aarch64 -A 15
darwin-aarch64-1.truvity.internal:
NetBird IP: 100.97.84.234
Public key: DOXmlovGWN4RahgXIVL47ekCrdoSbxIkajZRQmVY9Hw=
Status: Connected
-- detail --
Connection type: Relayed
ICE candidate (Local/Remote): -/-
ICE candidate endpoints (Local/Remote): -/-
Relay server address: rels://streamline-de-fra1-1.relay.netbird.io:443
Last connection update: 6 seconds ago
Last WireGuard handshake: 25 seconds ago
Transfer status (received/sent) 344 B/392 B
Quantum resistance: false
Networks: -
Latency: 336.893906ms
Case C
- my laptop connected to the home ISP (Odido Fiber)
- a server connected to the home ISP (Odido Fiber)
- connection established in P2P mode
➜ netbird status -d | grep darwin-aarch64 -A 15
darwin-aarch64-1.truvity.internal:
NetBird IP: 100.97.84.234
Public key: DOXmlovGWN4RahgXIVL47ekCrdoSbxIkajZRQmVY9Hw=
Status: Connected
-- detail --
Connection type: P2P
ICE candidate (Local/Remote): host/host
ICE candidate endpoints (Local/Remote): 192.168.1.145:51820/192.168.1.142:51820
Relay server address: rels://streamline-de-fra1-1.relay.netbird.io:443
Last connection update: 21 seconds ago
Last WireGuard handshake: 22 seconds ago
Transfer status (received/sent) 92 B/180 B
Quantum resistance: false
Networks: -
Latency: 151.84853ms
- Case A and Case C: both P2P mode
- Case C is within the same local network (expected): 192.168.1.145:51820/192.168.1.142:51820
- Case A is across different networks (expected): 143.177.126.54:40808/192.168.1.142:51820
- Case B is like case A, but between the host and the home ISP sits my MikroTik RB4011

Because of my MikroTik configuration I end up in relay mode: NAT traversal does not work. Because of that I cannot benchmark the RB4011's performance; in my benchmark I hit the relay server's limit, not the MikroTik's. I get around 100 Mbps with MikroTik CPU load at about 70%. When I connect in P2P mode I achieve significantly higher speed (around 200 Mbps with server load around 25%), but that is beside the point.
So, if we solve "how to configure the MikroTik RB4011 for working NetBird NAT traversal", then I will be able to benchmark the actual performance of the NetBird client running in a container on the MikroTik RB4011 router.
Actually, people pointed out to me that in case B I have two NATs
- one NAT from the MikroTik
- another NAT from the home ISP

Is it possible to have NAT traversal for NetBird in this case? How should the MikroTik be configured for that?
> Actually, people pointed out to me that in case B I have two NATs:
> - one NAT from the MikroTik
> - another NAT from the home ISP
>
> Is it possible to have NAT traversal for NetBird in this case? How should the MikroTik be configured for that?
Even three NATs
- NAT from the Odido ISP
- NAT from the Odido router (connected by fiber optic to the home ISP)
- NAT from the MikroTik

NetBird successfully performed NAT traversal across two of them
- NAT from the Odido ISP
- NAT from the Odido router

NetBird is not able to perform NAT traversal in the case of all three
- NAT from the Odido ISP
- NAT from the Odido router
- NAT from the MikroTik

So, we need to fix NAT traversal on the MikroTik side somehow, and then I will be able to benchmark NetBird running in a container on the MikroTik.
I managed to get a P2P connection from the NetBird container on the MikroTik 🎉 Benchmarks are coming.
> I managed to get a P2P connection from the NetBird container on the MikroTik 🎉 Benchmarks are coming.
How did you achieve it?
> I managed to get a P2P connection from the NetBird container on the MikroTik 🎉 Benchmarks are coming.
>
> How did you achieve it?
On my Odido router I have
- DHCP static leases
- a DMZ

I configured the Odido router as follows
- the MAC of the MikroTik's "ether1" port gets the static lease 192.168.1.11
- the Odido DHCP server range is changed to 192.168.1.32-192.168.1.254 to avoid conflicts with 192.168.1.11
- the IP address 192.168.1.11 is placed in the DMZ

With this move I achieved the following: by default, inbound traffic goes to my MikroTik's ether1, i.e. as if the MikroTik were connected to the internet directly. The home network still uses the Odido router's NAT, in which case the DMZ is ignored.

On the MikroTik side
- configure the NetBird container (in general very close to the NetBird article about MikroTik); the single exception is that I changed the WireGuard port (NB_WIREGUARD_PORT=51830)
- configure an additional firewall rule:

/ip/firewall/nat/add chain=dstnat protocol=udp dst-port=51830 action=dst-nat to-addresses=172.17.0.2 to-ports=51830 in-interface=ether1 comment="NetBird DST-NAT (mikrotik container)"

The combination of the custom port 51830, the firewall exception, and ether1 in the DMZ makes the difference.
/container/envs/add key=NB_SETUP_KEY name=netbird value=<setup key>
/container/envs/add key=NB_NAME name=netbird value=netbird-rb4011
/container/envs/add key=NB_HOSTNAME name=netbird value=netbird-rb4011
/container/envs/add key=NB_EXTRA_DNS_LABELS name=netbird value=netbird.rb4011.nl-haa.self
/container/envs/add key=NB_LOG_LEVEL name=netbird value=info
/container/envs/add key=NB_DISABLE_CUSTOM_ROUTING name=netbird value=true
/container/envs/add key=NB_USE_LEGACY_ROUTING name=netbird value=true
/container/envs/add key=NB_WIREGUARD_PORT name=netbird value=51830
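For completeness, this env list attaches to a container definition. The following is reconstructed from the `/container/print` output shown further below; the veth interface `netbird` and the mount `netbird-config-mount` must be created beforehand, as in the NetBird MikroTik article:

```
/container/add remote-image=netbirdio/netbird:latest interface=netbird \
    envlist=netbird root-dir=tmp1/netbird-root mounts=netbird-config-mount \
    hostname=netbird-rb4011 dns=8.8.8.8 logging=yes start-on-boot=yes
```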
netbird-rb4011.vpn.truvity.internal:
NetBird IP: 100.97.168.2
Public key: hgT7dDr3wuATvbF6PK9DT+Ki0wO4RN28a4GxAn4baWE=
Status: Connected
-- detail --
Connection type: P2P
ICE candidate (Local/Remote): srflx/srflx
ICE candidate endpoints (Local/Remote): 143.177.126.54:51820/143.177.126.54:51830
Relay server address: rels://streamline-de-fra1-2.relay.netbird.io:443
Last connection update: 27 seconds ago
Last WireGuard handshake: 28 seconds ago
Transfer status (received/sent) 156 B/180 B
Quantum resistance: false
Networks: -
Latency: 5.541051ms
[admin@rb4011] > /container/print
0 name="netbird" repo="registry-1.docker.io/netbirdio/netbird:latest" os="linux" arch="arm" interface=netbird envlist="netbird" root-dir=tmp1/netbird-root mounts=netbird-config-mount dns=8.8.8.8 hostname="netbird-rb4011" workdir="/"
logging=yes start-on-boot=yes status=running
[admin@rb4011] >
@braginini please ask questions if something is unclear.
Right now I am refactoring the MikroTik configuration towards the "target state"; after that I will redeploy it, test the full setup, and compare the native MikroTik WireGuard client vs NetBird.
@braginini additional update :)
It is normal to have NAT + masquerade for your WAN. But if you have a "normal" configuration with global NAT + masquerade on the outbound WAN (say, ether1) AND at the same time the "docker bridge NAT"

/ip/firewall/nat/add chain=srcnat action=masquerade src-address=172.17.0.0/24

as recommended in your manual, then you end up with double NAT and P2P will not work (I lost two hours during the redeploy figuring out what was wrong).
@braginini @netbirddev I ran a benchmark.
Setup
- Scaleway Elastic Metal server netbird-benchmark, Ubuntu 24.04 (WireGuard server)
- MikroTik RB4011, RouterOS v7 (WireGuard client)
- host nix-darwin-x86-64 inside the internal MikroTik network
To test I used iperf3
- on the server side: iperf3 -s
- on the client side: iperf3 -c <IP address of server> -t 60 -P 8
As a result, I saturated the WireGuard connection as much as possible.
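For repeatability, the client run can be scripted and reduced to the aggregate receiver summary line (`<server>` is a placeholder for the iperf3 server address):

```shell
#!/bin/sh
# Keep only the aggregate receiver rate from an iperf3 run's output.
sum_rate() { awk '/SUM.*receiver/ {print $6, $7}'; }

# Usage against a server running `iperf3 -s`:
#   iperf3 -c <server> -t 60 -P 8 | sum_rate
```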
| Case | Client location | Server location | WireGuard implementation | Speed | MikroTik CPU |
|---|---|---|---|---|---|
| A | cloud | local | MikroTik native | 460 Mbps | 46% |
| B | local | cloud | MikroTik native | 618 Mbps | 59% |
| C | cloud | local | NetBird | 110 Mbps | 78% |
| D | local | cloud | NetBird | 102 Mbps | 78% |
For cases C and D I verified that Scaleway <=> MikroTik is connected via P2P. For cases C and D the MikroTik was used as a routing peer, and I configured it to route traffic to/from the NetBird container.
netbird-rb4011.vpn.truvity.internal:
NetBird IP: 100.97.244.154
Public key: hgT7dDr3wuATvbF6PK9DT+Ki0wO4RN28a4GxAn4baWE=
Status: Connected
-- detail --
Connection type: P2P
ICE candidate (Local/Remote): srflx/prflx
ICE candidate endpoints (Local/Remote): 51.158.204.10:51820/143.177.126.54:51830
Relay server address: rels://streamline-de-fra1-1.relay.netbird.io:443
Last connection update: 23 seconds ago
Last WireGuard handshake: 23 seconds ago
Transfer status (received/sent) 668 B/2.3 KiB
Quantum resistance: false
Networks: 10.32.0.0/16
Latency: 6.259129ms
Case A
[SUM] 0.00-60.01 sec 3.21 GBytes 460 Mbits/sec receiver
[admin@rb4011] > /system/resource/monitor
cpu-used: 46%
cpu-used-per-cpu: 41%
30%
33%
81%
free-memory: 898424KiB
Case B
[SUM] 0.00-60.01 sec 4.32 GBytes 618 Mbits/sec receiver
[admin@rb4011] > /system/resource/monitor
cpu-used: 59%
cpu-used-per-cpu: 51%
47%
43%
98%
free-memory: 898572KiB
Case C
[SUM] 0.00-60.04 sec 788 MBytes 110 Mbits/sec receiver
cpu-used: 78%
cpu-used-per-cpu: 74%
81%
84%
74%
free-memory: 845516KiB
Case D
[SUM] 0.00-60.14 sec 729 MBytes 102 Mbits/sec receiver
[admin@rb4011] [resou]> /system/resource/monitor
cpu-used: 78%
cpu-used-per-cpu: 85%
77%
76%
77%
free-memory: 806964KiB
@nazarewk @braginini so, guys, I tend to think that option (B) is ideal; at the very least I would be able to use SSH + scripts calling the RouterOS CLI to configure the WireGuard server/client based on the (dynamic) configuration provided by the NetBird application.
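As an illustration of what those scripts could look like (the script shape, arguments, and invocation contract are hypothetical, not an existing NetBird interface), here is a dry-run sketch:

```shell
#!/bin/sh
# Hypothetical option-B hook: NetBird would invoke something like this with
# the desired peer state, and the script translates it into RouterOS CLI
# commands. RUN=echo keeps this a dry run that only prints the commands;
# a real setup would use RUN="ssh admin@<router>" instead.
set -eu
RUN="${RUN:-echo}"

peer_add() {  # $1 = public key, $2 = allowed address
  $RUN "/interface/wireguard/peers/add interface=netbird public-key=\"$1\" allowed-address=$2"
}

peer_remove() {  # $1 = public key
  $RUN "/interface/wireguard/peers/remove [find public-key=\"$1\"]"
}

# Dry run: prints the RouterOS command that would be sent to the router.
peer_add "hgT7dDr3wuATvbF6PK9DT+Ki0wO4RN28a4GxAn4baWE=" "100.97.168.2/32"
```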
@nazarewk @braginini @netbirddev
In general this speed limitation is not even a problem; it's inefficient, but 100 Mbps is enough for my case. But THIS IS a HUGE PROBLEM:
➜ ping -c 60 darwin-x86-64.default.nl-haa.self.int.truvity.internal
PING darwin-x86-64.default.nl-haa.self.int.truvity.internal (10.32.1.253) 56(84) bytes of data.
64 bytes from 10.32.1.253: icmp_seq=1 ttl=64 time=5.09 ms
64 bytes from 10.32.1.253: icmp_seq=2 ttl=64 time=6.64 ms
64 bytes from 10.32.1.253: icmp_seq=3 ttl=64 time=5.11 ms
64 bytes from 10.32.1.253: icmp_seq=4 ttl=64 time=5.90 ms
64 bytes from 10.32.1.253: icmp_seq=5 ttl=64 time=6.94 ms
64 bytes from 10.32.1.253: icmp_seq=6 ttl=64 time=6.77 ms
64 bytes from 10.32.1.253: icmp_seq=7 ttl=64 time=6.39 ms
64 bytes from 10.32.1.253: icmp_seq=8 ttl=64 time=15.8 ms
64 bytes from 10.32.1.253: icmp_seq=9 ttl=64 time=27.9 ms
64 bytes from 10.32.1.253: icmp_seq=10 ttl=64 time=81.4 ms
64 bytes from 10.32.1.253: icmp_seq=11 ttl=64 time=100 ms
64 bytes from 10.32.1.253: icmp_seq=12 ttl=64 time=98.9 ms
64 bytes from 10.32.1.253: icmp_seq=13 ttl=64 time=92.3 ms
64 bytes from 10.32.1.253: icmp_seq=14 ttl=64 time=97.4 ms
64 bytes from 10.32.1.253: icmp_seq=15 ttl=64 time=97.8 ms
64 bytes from 10.32.1.253: icmp_seq=16 ttl=64 time=92.8 ms
64 bytes from 10.32.1.253: icmp_seq=17 ttl=64 time=104 ms
64 bytes from 10.32.1.253: icmp_seq=18 ttl=64 time=92.2 ms
64 bytes from 10.32.1.253: icmp_seq=19 ttl=64 time=82.2 ms
64 bytes from 10.32.1.253: icmp_seq=20 ttl=64 time=112 ms
64 bytes from 10.32.1.253: icmp_seq=21 ttl=64 time=133 ms
64 bytes from 10.32.1.253: icmp_seq=22 ttl=64 time=90.7 ms
64 bytes from 10.32.1.253: icmp_seq=23 ttl=64 time=97.2 ms
64 bytes from 10.32.1.253: icmp_seq=24 ttl=64 time=101 ms
64 bytes from 10.32.1.253: icmp_seq=25 ttl=64 time=105 ms
64 bytes from 10.32.1.253: icmp_seq=26 ttl=64 time=99.9 ms
64 bytes from 10.32.1.253: icmp_seq=27 ttl=64 time=98.9 ms
64 bytes from 10.32.1.253: icmp_seq=28 ttl=64 time=93.3 ms
64 bytes from 10.32.1.253: icmp_seq=29 ttl=64 time=119 ms
64 bytes from 10.32.1.253: icmp_seq=30 ttl=64 time=91.6 ms
64 bytes from 10.32.1.253: icmp_seq=31 ttl=64 time=106 ms
64 bytes from 10.32.1.253: icmp_seq=32 ttl=64 time=102 ms
64 bytes from 10.32.1.253: icmp_seq=33 ttl=64 time=99.8 ms
64 bytes from 10.32.1.253: icmp_seq=34 ttl=64 time=103 ms
64 bytes from 10.32.1.253: icmp_seq=35 ttl=64 time=106 ms
64 bytes from 10.32.1.253: icmp_seq=36 ttl=64 time=111 ms
64 bytes from 10.32.1.253: icmp_seq=37 ttl=64 time=104 ms
64 bytes from 10.32.1.253: icmp_seq=38 ttl=64 time=90.2 ms
64 bytes from 10.32.1.253: icmp_seq=39 ttl=64 time=89.8 ms
64 bytes from 10.32.1.253: icmp_seq=40 ttl=64 time=8.04 ms
64 bytes from 10.32.1.253: icmp_seq=41 ttl=64 time=8.20 ms
64 bytes from 10.32.1.253: icmp_seq=42 ttl=64 time=7.79 ms
64 bytes from 10.32.1.253: icmp_seq=43 ttl=64 time=6.44 ms
64 bytes from 10.32.1.253: icmp_seq=44 ttl=64 time=5.61 ms
64 bytes from 10.32.1.253: icmp_seq=45 ttl=64 time=7.11 ms
64 bytes from 10.32.1.253: icmp_seq=46 ttl=64 time=5.06 ms
64 bytes from 10.32.1.253: icmp_seq=47 ttl=64 time=5.83 ms
During the benchmark, latency increased significantly. With native MikroTik WireGuard, ping is stable even under heavy load.
@braginini @nazarewk @netbirddev
Let me summarize what I want to achieve and why.
ISO 27001
- we are an ISO 27001 certified organization
- certain parts of ISO require flow-log capture, segregation of duties, etc.
- NetBird provides strong capabilities that simplify ISO 27001 implementation
- I prefer to manage the entire VPN for the organization with NetBird

We need NetBird for two major tasks
- VPN peering between several cloud providers (AWS, Scaleway) and a "home-lab" self-hosted build farm
- VPN access for employees (in light of segregation of duties) + OOB (out-of-band) access

The ideal setup to me
- I run NetBird as a Docker container on the MikroTik RB4011 router, in the routing-peer role
- I run NetBird on the "internet gateway" machines on the cloud side
- NetBird organizes a VPN mesh between the gateways in multiple clouds (P2P between all of them)
- NetBird provides OOB and direct access for engineers, devops, and admins

This scheme will not work while NetBird has these issues (despite the poor throughput, the latency is the real killer).
Without running NetBird on the RB4011 I have two options
A. Run the NetBird routing peer inside the self-hosted network, behind the MikroTik: complex routing on the Ubuntu side and two SPOFs (single points of failure).
B. Use the native WireGuard client on the MikroTik to organize the peering mesh (manually! Ansible!) and use NetBird only for OOB and people access (even more complicated routing!).

The goal of this GitHub issue is simple
- run NetBird as a container on the MikroTik RB4011
- somehow connect the MikroTik CLI or API to NetBird, so that NetBird directly handles every VPN mesh connection between the MikroTik (directly operating RouterOS /interface/wireguard + /interface/wireguard/peers), the other clouds, and people

This would be the ideal solution for me! (The most important parts: log capture/aggregation, SSO, and segregation of duties all on the NetBird side. Any other solution forces manual aggregation of flow logs plus documenting the whole network for ISO.)
@braginini @nazarewk @netbirddev
hi guys!
Do you have any updates or comments? The reason I am asking: I need to understand the course of action. In light of ISO 27001, I would really prefer to use these NetBird features
- Device controls with MDM & EDR integrations
- Connection traffic events logging
- Audit & traffic events streaming
and perform peering between networks / VPN with NetBird only.
Because of that, I need to understand which is better
- use a dedicated internal peer inside the LAN (an Ubuntu machine)
- fix the MikroTik container to work well enough
@braginini @nazarewk @netbirddev Hello! Do you have any updates/advice for me?
@excavador you can tag me on this issue.
I will check this and get back to you tomorrow.
> @excavador you can tag me on this issue.
> I will check this and get back to you tomorrow.
If you need any help or clarification, I am at your service!
> @excavador you can tag me on this issue.
> I will check this and get back to you tomorrow.
Hello! Any updates?
@braginini @nazarewk @netbirddev @mlsmaycon
I have additional insights! I think I know exactly why we see significant performance degradation with the NetBird container!
The problem is the "veth" interface: MikroTik is not able to use hardware offload for it.
What I have
- a bridge with VLAN filtering
- a VLAN for the NetBird network
- two hosts inside NetBird's VLAN: a Raspberry Pi 5 (with NetBird) and the NetBird container on the MikroTik side

Even when I use the Raspberry Pi 5 as the routing peer, I only get 100 Mbps, with MikroTik CPU usage around 25%! If I (a) stop the NetBird container and (b) remove the "veth" interface from the bridge, I get 1 Gbps!
So, the primary reason for the performance degradation of NetBird on the MikroTik side is the lack of hardware acceleration (hardware offloading) for veth! In my particular case there are even worse side effects: the performance of the entire bridge degrades (it is processed by the CPU instead of the switch hardware), and the MikroTik container itself suffers from the same problem.
If the NetBird container performed only MANAGEMENT operations for the native MikroTik WireGuard client, then everything on the MikroTik side would use native hardware acceleration.
Without that, a NetBird container that actually HANDLES the traffic will suffer from poor performance BECAUSE OF THE LACK OF hardware offload for veth!
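This can be inspected from the RouterOS console: bridge ports are flagged when they are hardware-offloaded, and the veth-backed port will lack the flag (a config-inspection sketch; flag display varies by switch chip and RouterOS version):

```
/interface/bridge/port/print
# Flags: H - HW-OFFLOAD; the port backed by the container veth prints
# without H, so its traffic (and, with vlan-filtering enabled, potentially
# the whole bridge) is handled by the CPU instead of the switch chip.
```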
@mlsmaycon any updates?