iroko
A platform to test reinforcement learning policies in the datacenter setting.
Hi, I'm currently studying your code and I found that your control pipeline works as in the flow chart below. The node controller starts during the initialization of the environment and...
Hi, your work is pretty clever! However, I ran into some problems when inspecting the results folder. The host `server.out`/`server.err`, `client.out`/`client.err`, and `ctrl.out` files are all empty. And for the host `ctrl.err`...
Hi: Iroko works well, but when I try to plot RTT via `if transport == "tcp": analyze_pcap(rl_algos, tcp_algos, plt_name, runs, data_dir)` and `plot_barchart(algos, plt_stats, plt_name)` in plot.py, it seems no...
Is it possible to replace the Mininet default switch (the OVS switch) with a custom switch? For example, can we collect the metrics from the switch and feed them to the AI-based...
- [x] DDPG
- [x] PPO
- [x] REINFORCE
- [ ] Linear Policy Iteration
- [x] A3C
- [x] APE-X
- [x] IMPALA
- [x] TD3
- [ ] ...
The framework looks very interesting and could probably be very helpful. However, I am a little confused about how to get started with this project. Can you give some...
An example of useful statistics can be found here: https://github.com/flowgrind/flowgrind
Add a baseline NUM solver and a random agent to compare their performance against the trained models.
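As a rough sketch of what such baselines could look like (all function names here are hypothetical and not part of Iroko): a classic NUM formulation maximizes Σ log(x_i) subject to the link capacity, which for a single bottleneck has the equal-split optimum x_i = C/n and can be solved by a simple dual (link-price) iteration; a random agent just samples actions uniformly.

```python
import random


def num_baseline(n_flows, capacity, steps=5000, lr=0.01):
    """Hypothetical NUM baseline: maximize sum(log(x_i)) s.t. sum(x_i) <= capacity
    on a single bottleneck link, via gradient ascent on the dual link price.
    The proportional-fair optimum is the equal split capacity / n_flows."""
    price = 1.0
    for _ in range(steps):
        # Each flow solves max_x log(x) - price * x  =>  x = 1 / price.
        rates = [1.0 / price] * n_flows
        excess = sum(rates) - capacity
        # Raise the price when demand exceeds capacity, lower it otherwise.
        price = max(price + lr * excess, 1e-9)
    return rates


def random_agent(low, high, dim):
    """Hypothetical random baseline: uniform actions, no learning."""
    return [random.uniform(low, high) for _ in range(dim)]


rates = num_baseline(n_flows=4, capacity=10.0)  # each rate converges to ~2.5
```

Comparing trained policies against both gives a sanity check from two sides: the NUM solver bounds what a model-based allocator achieves, while the random agent shows how much of the reward is attributable to learning at all.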
Instead of sampling hosts from centralized switches, hosts should notify the arbiter of their intention to send a flow to a location. The path can then be inferred by the arbiter. This is...
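A minimal sketch of that intent-based design (class, method, and topology names are illustrative assumptions, not Iroko's actual API): each host announces a flow intent to the arbiter, which infers the path from its own view of a two-tier topology rather than the environment polling centralized switches.

```python
# Illustrative sketch only -- names and topology are hypothetical, not
# code from this repository. Hosts call notify_flow() with their intent;
# the arbiter infers the path from a host -> edge-switch map and a
# shared core switch.

class Arbiter:
    def __init__(self, host_to_edge, core):
        self.host_to_edge = host_to_edge  # maps host name -> edge switch
        self.core = core                  # shared core switch

    def notify_flow(self, src, dst):
        """Called by a host to announce its intent to send a flow to dst.
        Returns the inferred path as a list of node names."""
        src_edge = self.host_to_edge[src]
        dst_edge = self.host_to_edge[dst]
        if src_edge == dst_edge:
            # Both hosts hang off the same edge switch; no core hop needed.
            return [src, src_edge, dst]
        return [src, src_edge, self.core, dst_edge, dst]


arbiter = Arbiter({"h1": "s1", "h2": "s1", "h3": "s2"}, core="c0")
path = arbiter.notify_flow("h1", "h3")  # ['h1', 's1', 'c0', 's2', 'h3']
```

With this shape, the arbiter never has to sample hosts: path state is updated exactly when a flow is announced, which is also where admission or rate decisions would naturally hook in.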
OVS is slow, cumbersome, and overengineered for our purposes. A nice alternative would be a [VPP](https://wiki.fd.io/view/VPP/What_is_VPP%3F)- or [XDP](https://www.iovisor.org/technology/xdp)-based switching framework, which is much more lightweight and flexible. The hope...