VxWireguard-Generator
feature request - add CLI option to vwgen to parse the master config and copy/display Node names, Public IPs, and VxLAN TEP IPs
With only 2-10 Nodes this may not be a big deal, but with 25-50+ it would be tedious to do this manually by reading through a very long "master" config file generated by VxWireguard-Generator.
What I think is needed is something like this:
$ vwgen vw-meshvpn list nodes
could output something like...
This is helpful in a number of ways.
In my own use case I integrate Free Range Routing (FRR) and BGP with VxWireguard-Generator Node configs.
Having the above would make it trivial to write a Bash script that populates an FRR "template" config file with the correct IP addresses for the WireGuard and VxLAN configuration of each Node.
Bump
I apologize, I didn't see the notification for this issue, otherwise I would have replied sooner.
This sounds like a good idea to me.
But I decommissioned my own WG network long ago for various reasons, so I've lost the motivation to add new features now. I would appreciate it if someone could help implement this feature.
@m13253 No problem..!
Just FYI, I built this:
https://github.com/bmullan/CIAB.Full-Mesh.VPN.Wireguard.FRR.BGP.VXLAN.Internet.Overlay.Architecture
Using FRR, BGP, Vx-wireguard, wireguard, LXD containers.
It worked great to create an Overlay network to interconnect LXD containers running on multiple Hosts on multiple Clouds.
Wow! That's cool!
I had it working to interconnect, at Layer 2, LXD "system" containers on multiple Nodes on 2 Clouds (Hetzner & Digital Ocean) AND my Home Server.
LXD supports Debian, Ubuntu, Fedora, CentOS, Alpine, etc., so my LXD containers at home, Hetzner & Digital Ocean could be running anything, and all were on the same "LAN" (private 10.X.X.X non-routable network).
Yet every LXD container had its own separate Internet access through its "Host" server's Internet connection.
Brian
Hi...
FYI on my comment on the WireGuard subreddit https://www.reddit.com/r/WireGuard/ in response to this post:
Need help configuring multicast over WireGuard https://www.reddit.com/r/WireGuard/comments/15qpk46/need_help_configuring_multicast_over_wireguard/
I told the thread how useful VxWireguard-Generator can be for solving the multicast problem over WireGuard.
Hopefully some new people will check it out!
I'm the retired ex-Cisco guy you communicated with around July 2022 about VxWireguard-Generator and also my use of it in one of my projects... CIAB Full Mesh VPN Internet Overlay https://github.com/bmullan/CIAB.Full-Mesh.VPN.Wireguard.FRR.BGP.VXLAN.Internet.Overlay.Architecture
Hope you are doing well! Any new Linux networking projects?
Take care... Brian Raleigh, NC