POC WireGuard + FRR Setup a.k.a dodgy meshy test network
It’s hackweek at SUSE! Probably one of my favourite times of the year, though I think they come up every 9 months or so.
Anyway, this hackweek I’ve been on a WireGuard journey. I started by reading the paper and all the docs, briefly looking into the code, sitting in the IRC channel, and joining the mailing list to get a feel for the community.
There is still 1 day left of hackweek, so I hope to spend more time in the code, and maybe, just maybe, see if I can fix a bug.. although they don’t seem to have a bug tracker like most projects, so let’s see how that goes.
The community seems pretty cool. The tech is, frankly, pretty amazing; even I, coming from a cloud storage background, understood most of the paper.
I had set up a tunnel, tcpdumped traffic, and used Wireshark to look closely at the packets as I read the paper, which was very informative. But I really wanted to get a feel for how this tech could work. They do have a wg-dynamic project which plans to use wg as a building block to do cooler things, like mesh networking. This sounds cool, so I wanted to sink my teeth in and see if I could build something similar, not with wg-dynamic, but out of existing OSS tech, and see where the gotchas are, beyond it obviously being less secure. It seemed like a good way to better understand the technology.
So on Wednesday, I decided to do just that. Today is Thursday and I’ve gotten to a point where I can say I partially succeeded. And before I delve in deeper and try and figure out my current stumbling block, I thought I’d write down where I am.. and how I got here.. to:
- Point the wireguard community at, in case they’re interested.
- So you all can follow along at home, because it’s pretty interesting, I think.
As the title suggests, the plan is/was to set up a bunch of tunnels and use FRR to run some routing protocols over those tunnels, auto-magically :)
UPDATE: The problem I describe in this post, routes becoming stale, only seems to happen when using RIPv2. When I changed it to OSPFv2 all the routes worked as expected!! I’ll write a follow-up post to explain the differences.. in fact I may rework the notes for it too :)
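For the curious, switching to OSPF in FRR is mostly a matter of turning on ospfd instead of ripd and swapping the router block. A rough sketch, using wireguard-5’s networks as an example (not the exact config from the follow-up post):
# Sketch only: enable ospfd and give FRR an OSPF config instead of the RIP one
sudo sed -i 's/^ospfd=no/ospfd=yes/' /etc/frr/daemons
sudo tee /etc/frr/frr.conf <<EOF
hostname $(hostname)
password frr
log file /var/log/frr/frr.log
router ospf
 redistribute connected
 network 10.0.4.0/24 area 0
 network 172.16.4.0/24 area 0
 network 172.16.5.0/24 area 0
EOF
sudo systemctl restart frr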
The problem at hand
A picture is worth 1000 words. The basic idea is to simulate a bunch of machines and networks connected over WireGuard (WG) tunnels. So I created 6 VMs, connected as you can see above.
I used Chris Smart’s ansible-virt-infra project, which is pretty awesome, to build up the VMs and networks as you see above. I’ll leave my build notes as an appendix to this post.
Once I had the infrastructure set up, I built all the tunnels as they are in the image, then went ahead and installed FRR on all the nodes with tunnels (nodes 1, 2, 4, and 5). To keep things simple, I started with the easiest routing protocol to configure, RIPv2.
Believe it or not, everything seemed to work.. well, mostly. I can jump on, say, node 5 (wireguard-5 if you’re playing along at home) and:
suse@wireguard-5:~> ip r
default via 172.16.0.1 dev eth0 proto dhcp
10.0.2.0/24 via 10.0.4.104 dev wg0 proto 189 metric 20
10.0.3.0/24 via 10.0.4.104 dev wg0 proto 189 metric 20
10.0.4.0/24 dev wg0 proto kernel scope link src 10.0.4.105
172.16.0.0/24 dev eth0 proto kernel scope link src 172.16.0.36
172.16.2.0/24 via 10.0.4.104 dev wg0 proto 189 metric 20
172.16.3.0/24 via 10.0.4.104 dev wg0 proto 189 metric 20
172.16.4.0/24 dev eth1 proto kernel scope link src 172.16.4.105
172.16.5.0/24 dev eth2 proto kernel scope link src 172.16.5.105
Looks good, right? We see routes for networks 172.16.{0,2,3,4,5}.0/24. Network 1 isn’t there, but hey, that’s quite far away, maybe it hasn’t made it yet. Which leads to the real issue.
If I go and run ip r again, soon all these routes will become stale and disappear. Running ip -ts monitor shows just that.
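If you want to watch it happen, the RIP-learnt routes are the ones tagged proto 189 above, so something as simple as this shows them draining away:
# Re-list the RIP-learnt routes every few seconds and watch them disappear
watch -n 5 "ip route show proto 189"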
So the question is, what’s happening to the RIP advertisements? And yes, they’re still being sent. So how come some made it to node 5, but never again?
The simple answer is, it was me. The long answer is, I’ve never used FRR before and it just didn’t seem to be working, so I started debugging the env. To debug, I had a tmux session open on the KVM host with a tab for each node running FRR. I’d go to each tab and run tcpdump to check whether the RIP traffic was making it through the tunnel. And almost instantly, I saw traffic, like:
suse@wireguard-5:~> sudo tcpdump -v -U -i wg0 port 520
tcpdump: listening on wg0, link-type RAW (Raw IP), capture size 262144 bytes
03:01:00.006408 IP (tos 0xc0, ttl 64, id 62964, offset 0, flags [DF], proto UDP (17), length 52)
10.0.4.105.router > 10.0.4.255.router:
RIPv2, Request, length: 24, routes: 1 or less
AFI 0, 0.0.0.0/0 , tag 0x0000, metric: 16, next-hop: self
03:01:00.007005 IP (tos 0xc0, ttl 64, id 41698, offset 0, flags [DF], proto UDP (17), length 172)
10.0.4.104.router > 10.0.4.105.router:
RIPv2, Response, length: 144, routes: 7 or less
AFI IPv4, 0.0.0.0/0 , tag 0x0000, metric: 1, next-hop: self
AFI IPv4, 10.0.2.0/24, tag 0x0000, metric: 2, next-hop: self
AFI IPv4, 10.0.3.0/24, tag 0x0000, metric: 1, next-hop: self
AFI IPv4, 172.16.0.0/24, tag 0x0000, metric: 1, next-hop: self
AFI IPv4, 172.16.2.0/24, tag 0x0000, metric: 2, next-hop: self
AFI IPv4, 172.16.3.0/24, tag 0x0000, metric: 1, next-hop: self
AFI IPv4, 172.16.4.0/24, tag 0x0000, metric: 1, next-hop: self
At first I thought it was good timing. I jumped to another host, and when I tcpdumped, the RIP packets turned up instantaneously. This happened again and again.. and yes, it took me longer than I’d like to admit before it dawned on me.
Why are the routes going stale? It seems as though the packets are getting queued/stuck in the WG interface until I poke it with tcpdump!
The RIPv2 Request packets are sent as broadcasts, not directly to the other end of the tunnel. To stop them being dropped, I had to widen my WG peer allowed-ips from a /32 to a /24.
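On wireguard-5 that change looked something like this (same key and subnet as in the build notes below; the original /32 would have been the peer’s own tunnel address):
# Originally only the peer's tunnel IP was allowed, so broadcasts to 10.0.4.255 were dropped:
#   sudo wg set wg0 peer aPA197sLN3F05bgePpeS2uZFlhRRLY8yVWnzBAUcD3A= allowed-ips 10.0.4.104/32
# Widening to the whole tunnel subnet lets the RIPv2 broadcast through:
sudo wg set wg0 peer aPA197sLN3F05bgePpeS2uZFlhRRLY8yVWnzBAUcD3A= allowed-ips 10.0.4.0/24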
So now I wonder if it being a broadcast, or just the fact that it’s only 52 bytes, means it gets queued up and not sent through the tunnel until I come along with a hammer and tcpdump the interface.
Maybe one way I could test this is to speed up the RIP broadcasts and hopefully fill a buffer, or see if I can put WG, or rather the kernel module, into debugging mode.
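I haven’t tried it yet, but the WG kernel module logs through dynamic debug, so (assuming the kernel was built with dynamic debug support) something like this should spit its messages into the kernel log:
# Turn on WireGuard's pr_debug output, then follow the kernel log
echo 'module wireguard +p' | sudo tee /sys/kernel/debug/dynamic_debug/control
sudo journalctl -k -f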
Build notes
As promised, here is the current form of my build notes; they reference the topology image above.
BTW I’m using openSUSE Leap 15.1 for all the nodes.
Build the env
I used ansible-virt-infra created by csmart to build the env. I created my own inventory file, which you can drop into the inventory/ folder; I called it wireguard.yml:
---
wireguard:
  hosts:
    wireguard-1:
      virt_infra_networks:
        - name: "net-mgmt"
        - name: "net-blue"
        - name: "net-green"
    wireguard-2:
      virt_infra_networks:
        - name: "net-mgmt"
        - name: "net-blue"
        - name: "net-white"
    wireguard-3:
      virt_infra_networks:
        - name: "net-mgmt"
        - name: "net-white"
    wireguard-4:
      virt_infra_networks:
        - name: "net-mgmt"
        - name: "net-orange"
        - name: "net-green"
    wireguard-5:
      virt_infra_networks:
        - name: "net-mgmt"
        - name: "net-orange"
        - name: "net-yellow"
    wireguard-6:
      virt_infra_networks:
        - name: "net-mgmt"
        - name: "net-yellow"
  vars:
    virt_infra_distro: opensuse
    virt_infra_distro_image: openSUSE-Leap-15.1-JeOS.x86_64-15.1.0-OpenStack-Cloud-Current.qcow2
    virt_infra_distro_image_url: https://download.opensuse.org/distribution/leap/15.1/jeos/openSUSE-Leap-15.1-JeOS.x86_64-15.1.0-OpenStack-Cloud-Current.qcow2
    virt_infra_variant: opensuse15.1
Next we need to make sure the networks have been defined. We do this in the kvmhost inventory file; here’s a diff:
diff --git a/inventory/kvmhost.yml b/inventory/kvmhost.yml
index b1f029e..6d2485b 100644
--- a/inventory/kvmhost.yml
+++ b/inventory/kvmhost.yml
@@ -40,6 +40,36 @@ kvmhost:
           subnet: "255.255.255.0"
           dhcp_start: "10.255.255.2"
           dhcp_end: "10.255.255.254"
+        - name: "net-mgmt"
+          ip_address: "172.16.0.1"
+          subnet: "255.255.255.0"
+          dhcp_start: "172.16.0.2"
+          dhcp_end: "172.16.0.99"
+        - name: "net-white"
+          ip_address: "172.16.1.1"
+          subnet: "255.255.255.0"
+          dhcp_start: "172.16.1.2"
+          dhcp_end: "172.16.1.99"
+        - name: "net-blue"
+          ip_address: "172.16.2.1"
+          subnet: "255.255.255.0"
+          dhcp_start: "172.16.2.2"
+          dhcp_end: "172.16.2.99"
+        - name: "net-green"
+          ip_address: "172.16.3.1"
+          subnet: "255.255.255.0"
+          dhcp_start: "172.16.3.2"
+          dhcp_end: "172.16.3.99"
+        - name: "net-orange"
+          ip_address: "172.16.4.1"
+          subnet: "255.255.255.0"
+          dhcp_start: "172.16.4.2"
+          dhcp_end: "172.16.4.99"
+        - name: "net-yellow"
+          ip_address: "172.16.5.1"
+          subnet: "255.255.255.0"
+          dhcp_start: "172.16.5.2"
+          dhcp_end: "172.16.5.99"
       virt_infra_host_deps:
         - qemu-img
         - osinfo-query
Now all we need to do is run the playbook:
ansible-playbook --limit kvmhost,wireguard ./virt-infra.yml
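When that finishes, a quick sanity check that ansible can reach all six freshly built nodes doesn’t hurt:
ansible wireguard -m ping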
Setting up the IPs and tunnels
The above infrastructure tool uses cloud-init to set up the network, so only the first NIC is up. You can confirm this with:
ansible wireguard -m shell -a "sudo ip a"
That’s OK because we want to use the numbers on our diagram anyway :)
Before we get to that, let’s update all the nodes and make sure WireGuard is set up.
ansible wireguard -m shell -a "sudo zypper update -y"
If a reboot is required, reboot the nodes:
ansible wireguard -m shell -a "sudo reboot"
Add the WireGuard repo to the nodes and install it. I look forward to kernel 5.6, where WireGuard will be included in mainline:
ansible wireguard -m shell -a "sudo zypper addrepo -f obs://network:vpn:wireguard wireguard"
ansible wireguard -m shell -a "sudo zypper --gpg-auto-import-keys install -y wireguard-kmp-default wireguard-tools"
Load the kernel module:
ansible wireguard -m shell -a "sudo modprobe wireguard"
Let’s create wg0 on all wireguard nodes:
ansible wireguard-1,wireguard-2,wireguard-4,wireguard-5 -m shell -a "sudo ip link add dev wg0 type wireguard"
And add wg1 to those nodes that have 2:
ansible wireguard-1,wireguard-4 -m shell -a "sudo ip link add dev wg1 type wireguard"
Now while we’re at it, let’s create all the WireGuard keys (because we can use ansible):
ansible wireguard-1,wireguard-2,wireguard-4,wireguard-5 -m shell -a "sudo mkdir -p /etc/wireguard"
ansible wireguard-1,wireguard-2,wireguard-4,wireguard-5 -m shell -a "wg genkey | sudo tee /etc/wireguard/wg0-privatekey | wg pubkey | sudo tee /etc/wireguard/wg0-publickey"
ansible wireguard-1,wireguard-4 -m shell -a "wg genkey | sudo tee /etc/wireguard/wg1-privatekey | wg pubkey | sudo tee /etc/wireguard/wg1-publickey"
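The per-host peer setup below needs each node’s public key, so it’s handy to dump them all now rather than cat-ing them one at a time later:
ansible wireguard-1,wireguard-2,wireguard-4,wireguard-5 -m shell -a "sudo cat /etc/wireguard/wg0-publickey"
ansible wireguard-1,wireguard-4 -m shell -a "sudo cat /etc/wireguard/wg1-publickey"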
Let’s make sure we enable forwarding on the nodes that will pass traffic and run the routing software (1, 2, 4 and 5):
ansible wireguard-1,wireguard-2,wireguard-4,wireguard-5 -m shell -a "sudo sysctl net.ipv4.conf.all.forwarding=1"
ansible wireguard-1,wireguard-2,wireguard-4,wireguard-5 -m shell -a "sudo sysctl net.ipv6.conf.all.forwarding=1"
While we’re at it, we might as well add the network repo so we can install FRR, and then install it on those nodes:
ansible wireguard-1,wireguard-2,wireguard-4,wireguard-5 -m shell -a "sudo zypper ar https://download.opensuse.org/repositories/network/openSUSE_Leap_15.1/ network"
ansible wireguard-1,wireguard-2,wireguard-4,wireguard-5 -m shell -a "sudo zypper --gpg-auto-import-keys install -y frr libyang-extentions"
We’ll be using RIPv2, as we’re just using IPv4:
ansible wireguard-1,wireguard-2,wireguard-4,wireguard-5 -m shell -a "sudo sed -i 's/^ripd=no/ripd=yes/' /etc/frr/daemons"
And with that, now we just need to do the per-server things like adding IPs and configuring all the keys, peers, etc. We’ll do this a host at a time.
NOTE: As this is a POC we’re just using ip commands; obviously in a real env you’d want to use systemd-networkd or something to make these stick.
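For what it’s worth, a persistent version of, say, wireguard-1’s wg0 with systemd-networkd might look roughly like the sketch below. I haven’t tested this in the env, and it needs a systemd new enough to know about WireGuard netdevs (v237+):
# /etc/systemd/network/30-wg0.netdev (sketch only, not used in this POC)
[NetDev]
Name=wg0
Kind=wireguard

[WireGuard]
# Newer systemd also accepts PrivateKeyFile=/etc/wireguard/wg0-privatekey
PrivateKey=<contents of /etc/wireguard/wg0-privatekey>
ListenPort=51821

[WireGuardPeer]
PublicKey=P1tHKnaw7d2GJUSwXZfcayrrLMaCBHqcHsaM3eITm0s=
AllowedIPs=10.0.2.0/24
Endpoint=172.16.2.102:51822

# /etc/systemd/network/30-wg0.network (assigns the tunnel address)
[Match]
Name=wg0

[Network]
Address=10.0.2.101/24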
wireguard-1
Firstly using:
sudo virsh dumpxml wireguard-1 |less
We can see that eth1 is net-blue and eth2 is net-green, so:
ssh wireguard-1
First IPs:
sudo ip address add dev eth1 172.16.2.101/24
sudo ip address add dev eth2 172.16.3.101/24
sudo ip address add dev wg0 10.0.2.101/24
sudo ip address add dev wg1 10.0.3.101/24
Load up the tunnels:
sudo wg set wg0 listen-port 51821 private-key /etc/wireguard/wg0-privatekey
# Node2 (2.102) public key is: P1tHKnaw7d2GJUSwXZfcayrrLMaCBHqcHsaM3eITm0s= (cat /etc/wireguard/wg0-publickey)
sudo wg set wg0 peer P1tHKnaw7d2GJUSwXZfcayrrLMaCBHqcHsaM3eITm0s= allowed-ips 10.0.2.0/24 endpoint 172.16.2.102:51822
sudo ip link set wg0 up
sudo wg set wg1 listen-port 51831 private-key /etc/wireguard/wg1-privatekey
# Node4 (3.104) public key is: GzY59HlXkCkfXl9uSkEFTHzOtBsxQFKu3KWGFH5P9Qc= (cat /etc/wireguard/wg1-publickey)
sudo wg set wg1 peer GzY59HlXkCkfXl9uSkEFTHzOtBsxQFKu3KWGFH5P9Qc= allowed-ips 10.0.3.0/24 endpoint 172.16.3.104:51834
sudo ip link set wg1 up
Setup FRR:
sudo tee /etc/frr/frr.conf <<EOF
hostname $(hostname)
password frr
enable password frr
log file /var/log/frr/frr.log
router rip
version 2
redistribute kernel
redistribute connected
network wg0
no passive-interface wg0
network wg1
no passive-interface wg1
EOF
sudo systemctl restart frr
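Once frr has restarted you can poke at ripd through vtysh to see what it thinks it’s learning and advertising:
sudo vtysh -c "show ip rip status"
sudo vtysh -c "show ip rip"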
wireguard-2
Firstly using:
sudo virsh dumpxml wireguard-2 |less
We can see that eth1 is net-blue and eth2 is net-white so:
ssh wireguard-2
First IPs:
sudo ip address add dev eth1 172.16.2.102/24
sudo ip address add dev eth2 172.16.1.102/24
sudo ip address add dev wg0 10.0.2.102/24
Load up the tunnels:
sudo wg set wg0 listen-port 51822 private-key /etc/wireguard/wg0-privatekey
# Node1 (2.101) public key is: ZsHAeRbNsK66MBOwDJhdDgJRl0bPFB4WVRX67vAV7zs= (cat /etc/wireguard/wg0-publickey)
sudo wg set wg0 peer ZsHAeRbNsK66MBOwDJhdDgJRl0bPFB4WVRX67vAV7zs= allowed-ips 10.0.2.0/24 endpoint 172.16.2.101:51821
sudo ip link set wg0 up
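Both ends of the blue tunnel are configured now, so it’s worth a quick check that it actually works before moving on (a handshake only appears in wg show once some traffic has flowed):
# From wireguard-2: confirm the handshake and ping node 1 over the tunnel
sudo wg show wg0
ping -c 3 10.0.2.101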
Setup FRR:
sudo tee /etc/frr/frr.conf <<EOF
hostname $(hostname)
password frr
enable password frr
log file /var/log/frr/frr.log
router rip
version 2
redistribute kernel
redistribute connected
network wg0
no passive-interface wg0
EOF
sudo systemctl restart frr
wireguard-3
Only has net-white, so it must be eth1:
ssh wireguard-3
First IPs:
sudo ip address add dev eth1 172.16.1.103/24
Has no WG tunnels or FRR so we’re done here.
wireguard-4
Firstly using:
sudo virsh dumpxml wireguard-4 |less
We can see that eth1 is net-orange and eth2 is net-green so:
ssh wireguard-4
First IPs:
sudo ip address add dev eth1 172.16.4.104/24
sudo ip address add dev eth2 172.16.3.104/24
sudo ip address add dev wg0 10.0.4.104/24
sudo ip address add dev wg1 10.0.3.104/24
Load up the tunnels:
sudo wg set wg0 listen-port 51844 private-key /etc/wireguard/wg0-privatekey
# Node5 (4.105) public key is: Af/sIEnklG6nnDb0wzUSq1D/Ujh6TH+5R9TblLyS3h8= (cat /etc/wireguard/wg0-publickey)
sudo wg set wg0 peer Af/sIEnklG6nnDb0wzUSq1D/Ujh6TH+5R9TblLyS3h8= allowed-ips 10.0.4.0/24 endpoint 172.16.4.105:51845
sudo ip link set wg0 up
sudo wg set wg1 listen-port 51834 private-key /etc/wireguard/wg1-privatekey
# Node1 (3.101) public key is: Yh0kKjoqnJsxbCsTkQ/3uncEhdqa+EtJXCYcVzMdugs= (cat /etc/wireguard/wg1-publickey)
sudo wg set wg1 peer Yh0kKjoqnJsxbCsTkQ/3uncEhdqa+EtJXCYcVzMdugs= allowed-ips 10.0.3.0/24 endpoint 172.16.3.101:51831
sudo ip link set wg1 up
Setup FRR:
sudo tee /etc/frr/frr.conf <<EOF
hostname $(hostname)
password frr
enable password frr
log file /var/log/frr/frr.log
router rip
version 2
redistribute kernel
redistribute connected
network wg0
no passive-interface wg0
network wg1
no passive-interface wg1
EOF
sudo systemctl restart frr
wireguard-5
Firstly using:
sudo virsh dumpxml wireguard-5 |less
We can see that eth1 is net-orange and eth2 is net-yellow so:
ssh wireguard-5
First IPs:
sudo ip address add dev eth1 172.16.4.105/24
sudo ip address add dev eth2 172.16.5.105/24
sudo ip address add dev wg0 10.0.4.105/24
Load up the tunnels:
sudo wg set wg0 listen-port 51845 private-key /etc/wireguard/wg0-privatekey
# Node4 (4.104) public key is: aPA197sLN3F05bgePpeS2uZFlhRRLY8yVWnzBAUcD3A= (cat /etc/wireguard/wg0-publickey)
sudo wg set wg0 peer aPA197sLN3F05bgePpeS2uZFlhRRLY8yVWnzBAUcD3A= allowed-ips 10.0.4.0/24 endpoint 172.16.4.104:51844
sudo ip link set wg0 up
Setup FRR:
sudo tee /etc/frr/frr.conf <<EOF
hostname $(hostname)
password frr
enable password frr
log file /var/log/frr/frr.log
router rip
version 2
redistribute kernel
redistribute connected
network wg0
no passive-interface wg0
EOF
sudo systemctl restart frr
wireguard-6
Only has net-yellow, so it must be eth1:
ssh wireguard-6
First IPs:
sudo ip address add dev eth1 172.16.5.106/24
Final comments
When this _is_ all working, we’d probably need to open up the allowed-ips on the WG tunnels. We could start by just adding 172.16.0.0/16 to the list. That might allow us to route packets to the other networks.
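One thing to keep in mind: wg set replaces a peer’s allowed-ips list rather than appending to it, so the existing tunnel subnet has to be listed too. On wireguard-5’s peer entry for node 4 it would look something like:
sudo wg set wg0 peer aPA197sLN3F05bgePpeS2uZFlhRRLY8yVWnzBAUcD3A= allowed-ips 10.0.4.0/24,172.16.0.0/16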
If we want to go find other routes out to the internet, then we may need 0.0.0.0/0. But I’m not sure how WG will route that, as it uses the allowed-ips and public keys as a routing table. I guess it may not care, as we only have a 1:1 mapping on each tunnel, and if we can route to the WG interface it’s pretty straightforward.
This is something I hope to test.
Another really beneficial test would be to rebuild this environment using IPv6 and see if things work better, as we wouldn’t have any broadcasts anymore, only unicast and multicast.
As well as trying some other routing protocol in general, like OSPF.
Finally, having to continually adjust allowed-ips, and seemingly having to either open it up more or add more ranges, makes me realise why the wg-dynamic project exists, and why they want to come up with a secure routing protocol to use through the tunnels to do something similar. So let’s keep an eye on that project.