tractor
Testing remote capabilities using virtual networks
This starts to address #124
pytest-vnet does most of the heavy lifting: it creates a docker container from this debian-based docker image, installs the latest mininet release from its github repo, and sets up the python build essentials. On the first run it installs python inside the container and saves a snapshot for each python version it installs, so the process doesn't have to be repeated.
From mininet.org: "Mininet creates a realistic virtual network, running real kernel, switch and application code, on a single machine (VM, cloud or native)"
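For illustration only, here is a rough sketch of what that per-python-version snapshot caching could look like using the docker python SDK; the image names, tags, and pyenv-based install step are made up and are not pytest-vnet's actual code:

import docker

client = docker.from_env()
py_version = "3.8"
snapshot_tag = f"pytest-vnet:py{py_version}"        # hypothetical tag

try:
    # Reuse the snapshot if a previous run already built it
    client.images.get(snapshot_tag)
except docker.errors.ImageNotFound:
    # First run: start the debian/mininet base image and build python inside it
    base = client.containers.run(
        "some/mininet-debian-base",                 # hypothetical base image
        command="sleep infinity",
        detach=True,
        privileged=True,                            # mininet needs kernel access
    )
    base.exec_run(f"pyenv install {py_version}")    # stand-in for the real build step
    base.commit(repository="pytest-vnet", tag=f"py{py_version}")
    base.remove(force=True)

# Later test sessions start straight from the cached snapshot
vm = client.containers.run(
    snapshot_tag, command="sleep infinity", detach=True, privileged=True,
)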
To use this virtual net, pytest-vnet lets us mark our regular test functions with the @run_in_netvm decorator. Here is a full example:
@run_in_netvm
def test_vsocket_hello():

    s3 = vnet.addSwitch("s3")

    @as_host(vnet, 'h1', '10.0.0.1', s3)
    def receiver():
        import socket
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.bind(('', 50007))
            s.listen(1)
            conn, addr = s.accept()
            with conn:
                data = conn.recv(1024)
                assert data == b"Hello world through a virtual socket!"

    @as_host(vnet, 'h2', '10.0.0.2', s3)
    def sender():
        import socket
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.connect(('10.0.0.1', 50007))
            s.sendall("Hello world through a virtual socket!".encode('utf-8'))

    vnet.start()
    receiver.start_host()
    sender.start_host()

    receiver.proc.wait(timeout=3)
Test functions marked with @run_in_netvm will be loaded as a script inside the running docker container, and the following code will be injected around them:
import sys
import logging
import traceback

"""
a bunch of sys.path.appends to hook the vm's python env to the host
"""

from mininet.net import Mininet
from mininet.node import Controller

# To disable resource limit error message in mininet
from mininet.log import setLogLevel
setLogLevel("critical")

# Additional tools inside vm scripts
from pytest_vnet import as_script, as_host

vnet = Mininet(controller=Controller)
try:
    vnet.addController('c0')

    """
    actual func code
    """

except Exception as e:
    sys.stderr.write(traceback.format_exc())

vnet.stop()
As you can see, by default a handful of packages are imported and an empty virtual network is created: vnet. This network gets automatically stopped at the end of the script. The try/except is there to properly relay exceptions to pytest in the future; for now, the traceback.format_exc() output just gets written to stderr.
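Relaying that stderr output back to the test runner isn't wired up yet; below is a minimal sketch of what the host side could do, assuming the in-VM process handle is a plain subprocess.Popen started with stderr=subprocess.PIPE (the check_vm_script helper is hypothetical, not part of pytest-vnet):

import subprocess

def check_vm_script(proc: subprocess.Popen, timeout: float = 10) -> None:
    """Wait for an in-VM script and fail the test if it wrote a traceback."""
    _, stderr = proc.communicate(timeout=timeout)
    if proc.returncode != 0 or stderr:
        # Surface the in-VM traceback on the host so pytest reports it
        raise AssertionError(
            f"in-VM script failed (exit code {proc.returncode}):\n"
            f"{stderr.decode()}"
        )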
Then each function marked with @as_host gets loaded into the netvm as a separate script, but it also creates a new mininet host and links it to the network. Here is the decorator code:
def as_host(vnet, hostname, addr, link):
    def wrapper(func):
        func = as_script(func)  # only inject sys path appends and nothing else
        func.host = vnet.addHost(hostname, ip=addr)
        vnet.addLink(func.host, link)

        def _start_proc():
            func.proc = func.host.popen(["python3", func.path])

        func.start_host = _start_proc
        return func
    return wrapper
To actually start the process one must call as_host_wrapped_func.start_host().
disclaimer: pytest-vnet is in very early development
Oh we need to figure out how to run docker inside the travis vms
Yah, I've done it before in that other project I pointed to prior. Yaml line should be here.
Also mininet has resiliency testing built in, for example in mininet/examples/simpleperf.py:
from mininet.topo import Topo

class SingleSwitchTopo( Topo ):
    "Single switch connected to n hosts."
    def build( self, n=2, lossy=True ):
        switch = self.addSwitch('s1')
        for h in range(n):
            # Each host gets 50%/n of system CPU
            host = self.addHost('h%s' % (h + 1), cpu=.5 / n)
            if lossy:
                # 10 Mbps, 5ms delay, 10% packet loss
                self.addLink(host, switch,
                             bw=10, delay='5ms', loss=10, use_htb=True)
            else:
                # 10 Mbps, 5ms delay, no packet loss
                self.addLink(host, switch,
                             bw=10, delay='5ms', loss=0, use_htb=True)
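The rest of that example then exercises the topology, roughly along these lines (assuming the SingleSwitchTopo class above): start the network with traffic-shaped links and CPU-limited hosts, check connectivity despite the 10% loss, and measure bandwidth with iperf:

from mininet.net import Mininet
from mininet.node import CPULimitedHost
from mininet.link import TCLink

def perf_test(n=4):
    "Create the lossy network and run a simple performance test."
    net = Mininet(topo=SingleSwitchTopo(n=n), host=CPULimitedHost, link=TCLink)
    net.start()
    net.pingAll()                       # connectivity despite packet loss
    h_first, h_last = net.get('h1', 'h%s' % n)
    net.iperf((h_first, h_last))        # bandwidth across two lossy links
    net.stop()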
All the examples here are very useful.
@guilledk We dropped travisCI once the debugger stuff landed. Maybe rebase this and see if we can at least get a clean run?
@goodboy So what's the plan here, wanna give this another shot? I recently learned how to run docker containers inside github's CI
@guilledk I think if we're gonna do it let's use a real use case:
- spawn a streaming daemon, try to connect and stream from it over a wireguard tunnel
- spawn an rpc daemon and see if you can route requests over tor and back
Another tool that might be handy if/when we get back to this: https://manpages.ubuntu.com/manpages/trusty/man1/wirefilter.1.html