Begin migrating ScalarQuantizer to simdlib
Summary: As a demo for Mengdi.
The steps to fully migrate to simdlib are:
- change all function interfaces to use the generic simd8float32 and friends prototypes -- make sure it compiles on fbcode
- make sure it also compiles on ARM
- see which functions can be migrated to only use the generic codepath
- benchmark whether the emulated SIMD path is competitive with the scalar one (for platforms without specific SIMD support)
Differential Revision: D59395882
This pull request was exported from Phabricator. Differential Revision: D59395882
@mdouze Do you have any plans to support ARM SVE, if possible? The primary problem simdlib has with ARM SVE is that SVE implies SIMD registers of a variable size. Technically, there are two popular models on the market: Amazon Graviton 3 with a 256-bit SIMD width and an upcoming Graviton 4 with a 512-bit SIMD width, so maybe one could stick with 256 bits for now.
@alexanderguzhva IMO it would be great to support SVE. What I don't understand is whether the SVE size needs to be known at compile time. In that case, we could just add it as another SIMD compilation target alongside the 256 and 512 versions.
@mdouze Yes, the SVE size is known at compile time. Usually, it is obtained via the `svcntb()` instruction. The PROBLEM is that on x86 you can have registers such as `__m256` be part of a `class` or `struct`, but you cannot do the same with SVE registers such as `svuint8_t`. This will trigger a compiler error O_o. So, you will have to use workarounds, such as keeping `std::uint8_t tmp[16];` inside your simdlib for SVE256 and doing loads/stores between a register and a buffer. I'm not sure how well the compiler will be able to optimize it; I hope it will.
what is the status of this diff? Should I wait before I bring some updates to ScalarQuantizer?
@alexanderguzhva I'm starting to work on this but it's gonna take some time. If you want to make your changes in now, feel free to and I can work on refactoring later down the line
@mengdilin any time estimates on your end? Basically, are you in a stage where you know what to do exactly or are you in a research stage?
@alexanderguzhva I think I can finish up AVX2/Neon in ScalarQuantizer around October (have other work items at hand atm). My understanding here is that I should move the respective AVX2 and Neon parts of ScalarQuantizer into faiss/utils/simdlib_avx2.h and faiss/utils/simdlib_neon.h as part of my SIMD ramp-up. I've made some progress on the refactor, but I have not thought about how simdlib can be extended to support SVE. Before committing my progress, I'm building out a performance regression test suite to ensure my changes don't introduce regressions across AVX2, Neon, and the non-optimized build.
I'm a SIMD noob here. Let me know if I'm moving in the right direction for the refactor or if I'm missing anything major.
@mdouze @alexanderguzhva I only found this now, so let me comment on the discussion above:
> an upcoming Graviton 4 with SIMD with 512b

Graviton4 has 128-bit SVE registers:
```
user@ip-172-31-xx-xx:/tmp$ cat test.cpp
#include <iostream>
#include <arm_sve.h>

int main() {
    std::cout << svcntb() * 8 << std::endl;
}
user@ip-172-31-xx-xx:/tmp$ g++ -march=armv9-a+sve2 -o test test.cpp
user@ip-172-31-xx-xx:/tmp$ ./test
128
```
> What I don't understand is if the SVE size needs to be known at compile time.
>
> Yes, the SVE size is known at the compile time. Usually, it is done via the `svcntb()` instruction. The PROBLEM is that for x86 you can have registers, such as `__m256`, be a part of a `class` or `struct`, but you cannot have SVE registers such as `svuint8_t` be so. This will trigger a compiler error O_o.
Let's summarize the information around this:
- `svcntb()` can't be called at compile time, because the function is not `constexpr`.
- Usually, the SVE register size can't be detected at compile time
  - Because the size is different for each CPU; for example, Graviton3 has 256-bit and Graviton4 has 128-bit registers
  - We can execute the same binary on each CPU because the binary detects the SVE register length at run time
- Thus, `sizeof(svuint8_t)` is not determined at compile time. This causes:
  - We can't do pointer arithmetic on a pointer to an SVE register
    - Because (to simplify to the extreme) `ptr + 1` means `reinterpret_cast<char*>(ptr) + sizeof(*ptr)`
  - We can't create an array of SVE registers
    - Because `arr[1]` means `*(arr + 1)`. Again, we can't do pointer arithmetic on the pointer.
  - We can't put SVE registers into a `class`
    - Because `obj.member` means `obj->*(&klass::member)`. A pointer to a member object is calculated from its offset from the head of the class at compile time, but nobody can calculate an unsized offset.

      ```cpp
      struct S {
          svint8_t x; // occupies byte 0 to byte ??? of S
          int y;      // starts from... where?
      };
      ```
- So, programming abstracted code with SVE needs some techniques. Although some abstraction is possible by making excellent use of C++ templates, the resulting code looks quite a bit more complicated than traditional SIMD code.
- Actually, you can fix the SVE register size for your code with the `-msve-vector-bits=` option
  - `svcntb()` can't be called at compile time even in this case, but we can detect the SVE vector length with the `__ARM_FEATURE_SVE_BITS` macro
  - When this option is passed to the compiler, the SVE register types become sized types.
    - Thus `sizeof(svuint8_t)` works and there is no limitation in programming.
  - However, the option makes the binary unportable across CPUs that have different SVE vector lengths
    - This means that we need to compile a binary for each vector length
- see more info here
I've tried to make simdlib support SVE, but as you know that is an extremely hard job. For the time being, it's better to write SVE code without much abstraction, IMHO. If the package file size bloat is acceptable, fixing the vector length is an alternative.
@subhadeepkaran is working on a stack of changes with dynamic dispatch of SIMD instructions. @mdouze has completed this item already, so I will close this one.