
Distributing tensors across NUMA nodes

Open shg8 opened this issue 1 year ago • 3 comments

I'm wondering how much support Neural Speed has for NUMA systems. The Advanced Usage page suggests that all tensors should be allocated on the first NUMA node via `numactl -m 0 -C 0-<physic_cores-1>`. Is there any benefit to doing this?
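For context, the two allocation policies being contrasted here can be sketched with `numactl` as below. This is a hedged illustration, not from the Advanced Usage page itself: `16` stands in for the node's physical core count and `<inference_binary>` is a placeholder for whatever command launches the model.

```shell
# Policy from the Advanced Usage page: bind all memory allocations (-m 0)
# and all threads (-C 0-15) to NUMA node 0, keeping accesses node-local.
numactl -m 0 -C 0-15 <inference_binary>

# Alternative raised in this issue: interleave pages round-robin across
# all nodes, trading locality for aggregate memory bandwidth.
numactl --interleave=all <inference_binary>
```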

shg8 avatar Apr 06 '24 00:04 shg8

Without NUMA binding, the performance will drop a lot.

kevinintel avatar Apr 15 '24 01:04 kevinintel

> Without numa, the performance will drop a lot

I previously thought that this binds all memory allocations to the first NUMA node. However, that would increase internode traffic significantly, and each thread can't fully utilize the memory bandwidth when the topology has different memory affinities for different nodes. Is my understanding correct? Could you kindly add a bit more on why we're not interleaving the memory allocations?

shg8 avatar Apr 15 '24 02:04 shg8

Intel Xeon often has 2 sockets; `-m 0` binds the memory to the first socket. There is communication overhead between the 2 sockets, so if you want to reduce internode traffic, you can try our TP (tensor parallelism).
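To see whether the socket-local binding is actually paying off on a given box, the standard `numactl`/`numastat` tools can be used to inspect the topology and count remote-node allocations. A minimal sketch (the `<pid>` placeholder is whatever process you are profiling; none of this is Neural Speed-specific):

```shell
# Show the NUMA topology: node count, CPUs per node, memory per node,
# and the internode distance matrix.
numactl --hardware

# Bind both memory (-m 0) and CPU scheduling (-N 0) to socket 0,
# so no allocation or thread crosses the interconnect.
numactl -m 0 -N 0 <inference_binary>

# Per-process breakdown of local vs. remote page allocations;
# a large remote count means the binding isn't holding.
numastat -p <pid>
```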

kevinintel avatar Apr 17 '24 03:04 kevinintel