Priority Switching in V1model
I am very new to P4 programming. I want to implement priority switching in exercise 1 (basic). How should I proceed? In particular, what do I need to change in the v1model.p4 file, and what do I need to change in BMv2?
This comment on an earlier issue asking about how to enable this feature in simple_switch / simple_switch_grpc explains a one-line change in the p4lang/behavioral-model source code that is required, then it must be recompiled: https://github.com/p4lang/behavioral-model/issues/562#issuecomment-608769575
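For reference, the one-line change described there is uncommenting a preprocessor definition in `targets/simple_switch/simple_switch.h` (the exact location may differ between behavioral-model versions, so check the linked comment for your tree):

```cpp
// In targets/simple_switch/simple_switch.h, change this line:
//#define SSWITCH_PRIORITY_QUEUEING_ON

// to this, so that simple_switch is built with multiple
// priority queues per egress port:
#define SSWITCH_PRIORITY_QUEUEING_ON
```

After this change, BMv2 must be recompiled and reinstalled for it to take effect.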
To run the P4 programs I cloned the virtual machine provided by https://github.com/p4lang/tutorials, but I am unable to find the file where I have to uncomment these lines. I know this might be a foolish question, but I am very new to this environment.
If your VM is set up the way I think it might be, there should be two user accounts on it. p4 is the name of the account you normally use. vagrant is the name of a separate account where the source code of the P4 open source tools was downloaded, compiled, and installed when the VM was created. If you log out from the p4 account, log in as vagrant (the password should be vagrant), and then do `cd behavioral-model`, that is the version of the p4lang/behavioral-model code that was used to build the simple_switch_grpc process on your system.
The instructions given at the link I gave earlier should mention the file name to be edited, using any text editor you prefer.
One way to rebuild the simple_switch_grpc process after making changes to its source code is to execute the following commands, starting from the ~/behavioral-model directory:
```shell
cd targets/simple_switch_grpc
make clean
./autogen.sh
# With debug enabled in binaries:
./configure --with-thrift 'CXXFLAGS=-O0 -g'
make
sudo make install
sudo ldconfig
```
There is also a bash script I publish here that does the above (plus optionally updating the behavioral-model code to the latest before recompiling, which I would recommend you NOT do, since the VM you have may contain fairly old behavioral-model code, but it matches the version of p4c on your system): https://github.com/jafingerhut/p4-guide/blob/master/bin/build-behavioral-model-simple-switch-grpc.sh
I did that. But when I implemented priority queueing in the "basic" exercise at https://github.com/p4lang/tutorials/tree/master/exercises/basic, I did not see any change in bandwidth for the two streams. I took help from the example at https://github.com/nsg-ethz/p4-learning/tree/master/examples/multiqueueing. My topology is (h1 and h2) <-> S1 <-> S2 <-> (h3 and h4). I want to send two streams using iperf, one from h1 to h3 and another from h2 to h4, and give the stream from h1 more preference. But I noticed that both streams get the same priority.
Here is the part of my basic.p4 code where I am assigning the priority.
```p4
control MyIngress(inout headers hdr,
                  inout metadata meta,
                  inout standard_metadata_t standard_metadata) {
    action drop() {
        mark_to_drop(standard_metadata);
    }
    action ipv4_forward(macAddr_t dstAddr, egressSpec_t port) {
        standard_metadata.egress_spec = port;
        hdr.ethernet.srcAddr = hdr.ethernet.dstAddr;
        hdr.ethernet.dstAddr = dstAddr;
        hdr.ipv4.ttl = hdr.ipv4.ttl - 1;
    }
    table ipv4_lpm {
        key = {
            hdr.ipv4.dstAddr: lpm;
        }
        actions = {
            ipv4_forward;
            drop;
            NoAction;
        }
        size = 1024;
        default_action = drop();
    }
    apply {
        if (hdr.ipv4.isValid()) {
            ipv4_lpm.apply();
            if (hdr.ipv4.srcAddr == 0x0a000101) {
                standard_metadata.priority = (bit<3>)7;
            }
            else if (hdr.ipv4.srcAddr == 0x0a000202) {
                standard_metadata.priority = (bit<3>)0;
            }
        }
    }
}
```
I do not know for a fact if this is what is occurring in your system, but note that simple_switch_grpc, and the links created by Mininet between those switches and virtual hosts, do NOT behave performance-wise like physical switches with constant-bit-rate Ethernet links between them. simple_switch_grpc can send at arbitrarily high packet and bit rates to the veth interfaces, as fast as the CPU cores on your system can process them, and then there are effectively software buffers "in the middle of the veth links", which real physical Ethernet links do not have.
You might be able to see better behavior if you limit the rates that simple_switch_grpc can send packets from each queue, but it can only be configured with a maximum packet rate, not a maximum bit rate as physical Ethernet links have. If most of your packets are the same size, that might be close enough. I don't know if you can set a maximum rate for all queues leading to the same Ethernet port, or only per queue, which might upset your priority-queue plus default-queue to the same port test: https://github.com/p4lang/behavioral-model/issues/311#issuecomment-286187619
I got the same issue here. I followed the steps given in #562.
- I uncommented `//#define SSWITCH_PRIORITY_QUEUEING_ON` in `simple_switch.h`
- I recompiled BMv2 with the following commands from tutorials/vm/user-bootstrap.sh:
```shell
cd behavioral-model
./autogen.sh
# ./configure --enable-debugger --with-pi
make -j4
sudo make install
sudo ldconfig
# simple_switch_grpc target
cd targets/simple_switch_grpc
./autogen.sh
./configure --with-thrift
make -j4
sudo make install
sudo ldconfig
```
- I used the tutorials/exercises/basic topology and program to test whether it works, so I added the following code to the Ingress control block in basic.p4:

```p4
if (hdr.ipv4.srcAddr == 0x0a000101) {
    standard_metadata.priority = (bit<3>)7;
}
```

- After `make run` builds the Mininet topology, I type `xterm h1 h2 h3 h3` to open the hosts.
- Then I use iperf to let h3 listen on ports 5001/5002, and h1 and h2 each use iperf to send packets.
I expected h1 to have higher priority than h2, meaning the link bandwidth should be used mostly by h1's packets. However, the result is that h1 and h2 share the link bandwidth.
Did I misuse something, or make a mistake in the steps above? I hope the question is described clearly.
If you send a single packet from the source IP address that you expect to execute the assignment in the if statement you show in your step #3, and look at the log of how the packet was processed, do you see that it is actually finding that condition true and executing the assignment? You can also create a 'debug table' that has standard_metadata.priority as a key, and the only action is NoAction, which in the simple_switch_grpc log will show the value of all key fields and you can see what value standard_metadata.priority has for a packet from the two different sources, to verify whether one is 7 and the other is 0 (if that is what you are trying to achieve). Explicitly initializing standard_metadata.priority to the default priority at the beginning of your program is a good safe programming practice, too.
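The 'debug table' idea can be sketched in P4_16 roughly as follows (the table and action names here are my own inventions; adapt them to your program):

```p4
// Hypothetical debug table: it takes no useful action, but applying it
// makes simple_switch's log print the value of every key field for each
// packet, including standard_metadata.priority.
table dbg_priority {
    key = {
        standard_metadata.priority : exact;
    }
    actions = { NoAction; }
    const default_action = NoAction();
}

apply {
    // Good safe practice: explicitly initialize priority to the
    // default value before any conditional assignments.
    standard_metadata.priority = (bit<3>)0;
    if (hdr.ipv4.isValid()) {
        ipv4_lpm.apply();
        if (hdr.ipv4.srcAddr == 0x0a000101) {
            standard_metadata.priority = (bit<3>)7;
        }
        dbg_priority.apply();   // log the final priority value
    }
}
```

With this in place, grepping the switch log for the debug table's key fields shows what priority each packet actually carried into the traffic manager.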
Are you running iperf with TCP or UDP? TCP does slow start at the beginning, and only backs off if there is high latency or packet loss. If neither is happening in your setup, then both senders would likely be sending at the same rate the entire time.
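For example, UDP tests with a fixed offered load avoid TCP's congestion control entirely. The addresses, ports, and rates below are only illustrative (10.0.3.3 assumes h3's address as assigned in the basic exercise):

```shell
# On h3: two UDP servers, one per sender, reporting every second
iperf -s -u -p 5001 -i 1 &
iperf -s -u -p 5002 -i 1 &

# On h1 (the source you marked high priority): fixed 5 Mbit/s UDP for 30 s
iperf -c 10.0.3.3 -u -p 5001 -b 5M -t 30

# On h2 (default priority), run concurrently:
iperf -c 10.0.3.3 -u -p 5002 -b 5M -t 30
```

If the combined offered load exceeds the configured queue rate, the priority difference should show up as different loss rates at the two servers.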
When you say "h1 and h2 share the link BW" do you mean each gets half of the packets and/or bits sent across the link? Or some other ratio?
Also, in general realize that simple_switch_grpc queueing and sending rate to virtual Ethernet links does not work like a real switch ASIC sending to a constant bit rate link. Virtual ethernet interfaces in Linux kernels have their own internal FIFO queue of packets that I believe can queue up a fairly large number of packets, and receive as fast as the host CPU can enqueue them. You may want to use the set_queue_rate command to limit the rate that packets can be sent from the two queues to different numbers of packets/sec.
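As a sketch, the queue rate can be set from simple_switch_CLI; the rate is in packets per second, and the Thrift port shown is an assumption (check the switch's command line or log for the actual port):

```shell
simple_switch_CLI --thrift-port 9090
# Then at the RuntimeCmd prompt, for example:
#   set_queue_rate 1000     limit egress queues to 1000 packets/sec
#   set_queue_depth 64      optionally shrink the queues so they actually fill
```

With a small queue depth and a rate well below the senders' offered load, packets must actually queue up, which is the only situation where priority scheduling can make a visible difference.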
Thanks for your answer.
I checked s1.log and s3.log, and I saw that the condition `hdr.ipv4.srcAddr == 0x0a000101` (node_6) is true, with the debug table log showing `* standard_metadata.priority: 07`, while the condition `hdr.ipv4.srcAddr == 0x0a000202` (node_3) is true with the debug table log showing `* standard_metadata.priority: 00`.
I then ran the same iperf commands (using UDP).
I opened xterms for h1, h2, and two for h3. Both h3 terminals run `iperf -s -u -i 1` as servers,
and I run `iperf -c -u` on h1 and h2. I am confused about the results:
one simulation shows h1->h3 sharing the bandwidth with h2->h3, each occupying half,
but when I re-ran it earlier today, it showed that neither stream is influenced by the other.
(I used simple_switch_CLI to set the queue rate to 1000, but both h3 servers still received the same bandwidth.)
Am I using the wrong method to run the test, or is there some other mistake I might have made?
By the way, I had to recompile BMv2 to get higher bandwidth (following the steps given in performance.md).
Otherwise, with logging macros enabled, my Mininet links only reach very low bandwidth, less than 1 Mbps.
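For reference, the performance-oriented build disables the logging macros at configure time, roughly like this (check performance.md in your behavioral-model tree for the exact flags, which may vary by version):

```shell
cd behavioral-model
./configure --disable-logging-macros --disable-elogger 'CXXFLAGS=-g -O3'
make -j4
sudo make install
sudo ldconfig
```

The trade-off is that a switch built this way produces no per-packet log, so the debug-table technique described earlier will not show anything.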
Thanks for your reply again.
Hi, I followed BebeShen's steps 1-4, then used the p4 code from https://github.com/nsg-ethz/p4-learning/tree/master/examples/multiqueueing to give host2 higher priority over host1.
To check whether it works, I created traffic from host1 to host3 and from host2 to host3 at the same time, and used
`set_queue_rate 1` in simple_switch_CLI to make sure the queues fill up. I expected host2's traffic to be prioritized over host1's traffic, but that wasn't the case; the received traffic at host3 was mixed.
Am I missing something?
My topology is three hosts connected to a single switch.
edit: I noticed in the switch's log file that every time a packet is processed in the ingress pipeline, it is enqueued in its matching priority queue and immediately dequeued in the egress pipeline. I think this is why I don't see priority queueing work properly. Can anybody help?