
[want help] getDevice assertion error when searching a torch.Tensor through IDMap on GPU

Open yuanze1024 opened this issue 1 year ago • 1 comment

Summary

I'm using IndexIDMap to wrap a flat index on the GPU so that it offers an add_with_ids function. I also want to pass PyTorch tensors as queries, so the query embeddings don't have to be copied GPU -> CPU -> GPU; to do that I put import faiss.contrib.torch_utils after import faiss to replace the index methods. However, when I search the query vectors, an assertion fails with "GPU tensor on CPU index not allowed".
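
For context, this is roughly the usage I'm aiming for with a plain GPU flat index (a minimal sketch with made-up dimensions and data, not my actual code); searching with a CUDA tensor works there because that index exposes getDevice:

import faiss
import faiss.contrib.torch_utils  # patches search/add to accept torch tensors
import torch

res = faiss.StandardGpuResources()
gpu_flat = faiss.index_cpu_to_gpu(res, 0, faiss.IndexFlatL2(16))

xb = torch.rand(1000, 16, device="cuda")  # illustrative data, float32 by default
gpu_flat.add(xb)

xq = torch.rand(8, 16, device="cuda")
D, I = gpu_flat.search(xq, 4)  # works: no GPU -> CPU -> GPU copy of the queries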

Platform

OS: Ubuntu 20.04.4 in docker container

Faiss version: faiss-gpu 1.7.3

Installed from: tried both the pip wheel (https://github.com/kyamagu/faiss-wheels/releases/download/v1.7.3/faiss_gpu-1.7.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl) and the conda package (conda install faiss-gpu=1.7.3 -c pytorch).

Faiss compilation options: Didn't compile it myself.

Running on:

  • [ ] CPU
  • [x] GPU

Interface:

  • [ ] C++
  • [x] Python

Reproduction instructions

import faiss

index_flat = faiss.IndexFlatL2(16)
gpu_index_flat = faiss.index_cpu_to_gpu(faiss.StandardGpuResources(), 0, index_flat)
print(gpu_index_flat.getDevice()) # This will return 0 for me because I only allocate one GPU.

wrap = faiss.IndexIDMap(index_flat)
wrap = faiss.index_cpu_to_gpu(faiss.StandardGpuResources(), 0, wrap)
print(wrap.getDevice()) # wrap doesn't have a getDevice attribute, and that's what triggers the error.
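
Concretely, the failure shows up at the search step. A minimal sketch of that call, continuing from the snippet above (query shape and data are made up):

import torch
import faiss.contrib.torch_utils  # in my real code this import comes right after import faiss

xq = torch.rand(5, 16, device="cuda")  # query embeddings already on the GPU
D, I = wrap.search(xq, 4)  # AssertionError: GPU tensor on CPU index not allowed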

My goal is to avoid the time spent copying query vectors GPU -> CPU -> GPU, so any other way to achieve that would also help.

I have also tried faiss-gpu v1.7.4 in a different environment and reproduced the same problem, so I think maybe I'm using it wrongly. Can anyone help? If anything is unclear, please let me know and I'll reply as soon as possible.

yuanze1024 avatar Sep 13 '23 11:09 yuanze1024