XDP hot cache
Small optimization for well-behaved setups that are correctly mapping their IP addresses (and a tiny slowdown for IPs that aren't). Does nothing at all for "on a stick" configurations:
- Adds a `TRACING` define to the XDP/TC kernels, allowing logging of execution times to `trace_pipe`. Not very accurate.
- Adds a `HOT_CACHE` define to `lpm.h`; setting this enables the hot cache.
- When an LPM check is required, the "hot cache" is checked first. If a hit occurs, the cached result is returned immediately. If it doesn't occur, the full LPM lookup runs and the IP/result pair is added to the hot cache.
- The hot cache is an LRU-type map, so old entries expire over time, keeping the "hot" ones.
- Adds cache invalidation: the hot cache is cleared whenever the IP map is edited.
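The flow above can be sketched in plain userspace C. This is only a model of the logic: the real code lives in the eBPF kernels and would use a `BPF_MAP_TYPE_LRU_HASH` map rather than the direct-mapped array here, and every name (`hot_cache_lookup`, `lpm_lookup`, `classify`, ...) is illustrative, not an actual identifier from `lpm.h`:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define CACHE_SLOTS 16
#define NOT_CACHED  -1

struct cache_entry {
    uint32_t ip;      /* key: the IP being classified */
    int      result;  /* cached LPM result */
    int      valid;
};

/* Direct-mapped stand-in for the kernel's LRU hash map. */
static struct cache_entry hot_cache[CACHE_SLOTS];

static int hot_cache_lookup(uint32_t ip) {
    struct cache_entry *e = &hot_cache[ip % CACHE_SLOTS];
    if (e->valid && e->ip == ip)
        return e->result;
    return NOT_CACHED;
}

static void hot_cache_insert(uint32_t ip, int result) {
    struct cache_entry *e = &hot_cache[ip % CACHE_SLOTS];
    e->ip = ip;
    e->result = result;
    e->valid = 1;
}

/* Invalidation: wipe the whole cache whenever the IP map is edited. */
static void hot_cache_clear(void) {
    memset(hot_cache, 0, sizeof hot_cache);
}

/* Placeholder for the expensive LPM trie walk. */
static int lpm_lookup(uint32_t ip) {
    return (int)(ip & 0xff); /* pretend mapping, for illustration only */
}

/* Hot path: check the cache first; on a miss, do the real lookup and
 * remember the answer for the next packet from the same IP. */
static int classify(uint32_t ip) {
    int r = hot_cache_lookup(ip);
    if (r != NOT_CACHED)
        return r;
    r = lpm_lookup(ip);
    hot_cache_insert(ip, r);
    return r;
}
```

The key property is that repeated packets from the same IP pay one array probe instead of a full trie walk.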
My limited testing indicated a decent per-packet speed-up, although I don't trust the nanosecond measurements coming out of the kernel - I think the clock doesn't update often enough.
Note that this is a draft and should stay that way until I've tweaked the map size and added some negative caching. The negative caching in particular should make a (relatively speaking, we're still talking microseconds) huge difference, because misses are the most expensive LPM lookups.
Ok, negative cache and reasonable size limits are in place. Robert tested and reports a 30% reduction in CPU usage to maintain the same bandwidth levels. So I'm calling this one a win. :-)
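Negative caching can be sketched the same way: since a miss has to walk the whole trie before concluding "no match", that outcome is worth caching too. Again this is a hypothetical userspace model, not the eBPF code; the `NO_MATCH` sentinel and all names are illustrative:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define CACHE_SLOTS 16
#define NOT_CACHED  -1
#define NO_MATCH    -2  /* sentinel: the LPM trie has no entry for this IP */

struct entry { uint32_t ip; int result; int valid; };
static struct entry cache[CACHE_SLOTS];

static int cache_get(uint32_t ip) {
    struct entry *e = &cache[ip % CACHE_SLOTS];
    return (e->valid && e->ip == ip) ? e->result : NOT_CACHED;
}

static void cache_put(uint32_t ip, int result) {
    struct entry *e = &cache[ip % CACHE_SLOTS];
    e->ip = ip;
    e->result = result;
    e->valid = 1;
}

static int lpm_calls; /* counts trips to the expensive trie walk */

static int lpm_lookup(uint32_t ip) {
    lpm_calls++;
    return (ip & 1) ? (int)(ip & 0xff) : NO_MATCH; /* toy rule: odd IPs match */
}

/* With negative caching, an unmapped IP costs one trie walk total,
 * rather than one trie walk per packet. */
static int classify(uint32_t ip) {
    int r = cache_get(ip);
    if (r != NOT_CACHED)
        return r;        /* hit: either a real result or a cached NO_MATCH */
    r = lpm_lookup(ip);
    cache_put(ip, r);    /* cache the miss too */
    return r;
}
```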
Just confirmed that this works with on-a-stick mode (on Payne) also. :-)
Maaaaan, awesome performance impact! It puts me in the "I can't wait to try it" state of mind. In Poland we'd say "bathed in hot water" (impatient to jump in).
I will get performance graphs showing before and after the update to v1.5 as soon as possible.
Please do! We can’t wait
Small bug: Web UI throughput graph locks up with this branch
The axis scale is also wrong.
For the scale fix, it's pretty funny. The Plotly docs say to specify `exponentformat: "Si"`. We tried that over and over and it doesn't work. It turns out that `"SI"` (all caps) does work. Bad documentation, no donut.
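For reference, a minimal Plotly layout fragment with the working value (the element id, trace data, and axis label here are illustrative, not the actual UI code):

```javascript
// Plotly axis config: "SI" (all caps) works; the "Si" shown in the docs does not.
const layout = {
    yaxis: {
        title: "Throughput",  // illustrative axis label
        exponentformat: "SI", // renders 1500000 as 1.5M
        ticksuffix: "bps"
    }
};
Plotly.newPlot("throughput-graph", traces, layout); // ids/traces are placeholders
```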