manticoresearch
Vector quantization for KNN search
Currently, Manticore uses an HNSW index over float vectors for its KNN search implementation. This can lead to excessive memory consumption, since all HNSW indexes must be loaded into RAM. One way to improve this is to quantize the float vectors into word (2-byte) or byte (1-byte) vectors and build the HNSW index over those instead. This would affect only memory consumption (and KNN search accuracy), as the original float vectors would still be stored.
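As an illustration of the idea (not Manticore's actual implementation), here is a minimal scalar-quantization sketch in Python with NumPy: each float32 vector is mapped to int8 with a per-vector scale and offset, cutting memory 4x while allowing approximate reconstruction.

```python
import numpy as np

def quantize_to_int8(vec: np.ndarray):
    """Scalar-quantize a float32 vector into int8.
    Returns the quantized vector plus the scale/offset
    needed to approximately reconstruct the original."""
    lo, hi = float(vec.min()), float(vec.max())
    scale = (hi - lo) / 255.0 or 1.0  # avoid div-by-zero for constant vectors
    q = np.round((vec - lo) / scale - 128.0).astype(np.int8)
    return q, scale, lo

def dequantize(q: np.ndarray, scale: float, lo: float) -> np.ndarray:
    """Approximately invert quantize_to_int8."""
    return (q.astype(np.float32) + 128.0) * scale + lo

v = np.random.rand(128).astype(np.float32)
q, scale, lo = quantize_to_int8(v)
approx = dequantize(q, scale, lo)
print(q.nbytes, v.nbytes)  # 128 vs 512 bytes: 4x smaller in RAM
print(float(np.abs(v - approx).max()))  # bounded by ~scale/2
```

Building the HNSW graph over `q` instead of `v` is what saves memory; keeping `v` on disk preserves the option of exact rescoring, which is why only accuracy (not recall of the stored data) is at stake.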