Simon Hardy-Francis
Found this [1], which says "cpus that have been allocated resources and can be brought online if they are present." Also, this command gives the expected number of CPUs used:...
@magnus-karlsson Thanks very much for commenting!

> When you get packet loss, it is usually because the driver has not gotten enough buffers to work with from user space.

In this...
@chaudron Thanks very much for commenting!

```
$ cat ../libbpf/src/xsk.h | egrep --context 3 DEFAULT_NUM_DESCS
LIBBPF_API int xsk_umem__fd(const struct xsk_umem *umem);
LIBBPF_API int xsk_socket__fd(const struct xsk_socket *xsk);
#define XSK_RING_CONS__DEFAULT_NUM_DESCS 2048
...
```
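For what it's worth, the 2048-descriptor default quoted above can be overridden per socket. A minimal config sketch, assuming the `xsk_socket_config` struct from the same `xsk.h` header the command above is grepping (field names per libbpf's AF_XDP helper API):

```c
/* Sketch: requesting larger RX/TX rings than the 2048-entry default.
 * Assumes libbpf's xsk.h (the header grepped above); ring sizes must
 * be powers of two. */
#include "xsk.h"

struct xsk_socket_config cfg = {
    .rx_size = 4096, /* instead of XSK_RING_CONS__DEFAULT_NUM_DESCS (2048) */
    .tx_size = 4096, /* instead of XSK_RING_PROD__DEFAULT_NUM_DESCS (2048) */
};
```

A deeper RX ring gives user space more slack before the fill ring runs dry, which may be relevant to the loss being discussed.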
> Is someone able to reproduce my example using a very basic XDP kernel program

@Bresenham: Using the same steps as detailed above (i.e. the standard tutorial XDP kernel code which...
@magnus-karlsson thanks for the comments!

> Good question, but unfortunately I do not know the answer. Can veth drop it
> and account for this in some way?

In any...
@magnus-karlsson I tried out some more things: even tcpreplaying the first 150 packets of two pcap files concurrently provokes the bug. Please see below. I also used unique pcap files...
I'm reaching out to @netoptimizer, @tmakita, and @borkmann for help with the above veth and/or XDP packet loss issue [1]. How did I come to list you three? Well I'm...
@tohojo Thanks for the quick response! What's the reason for limiting packet size to a single memory page? And are there any plans in the future to remove this limit?...
On a related note, I noticed that the advanced03 AF_XDP tutorial divides the UMEM into single-memory-page-sized elements of 4,096 bytes on my box, as you suggested with...
I also got the same BTF issue with a newer kernel:

```
$ clang --version
clang version 9.0.0-2 (tags/RELEASE_900/final)
Target: x86_64-pc-linux-gnu
Thread model: posix
InstalledDir: /usr/bin
$ uname -a
Linux...
```