
README Indexing fails on two GPUs

Open bclavie opened this issue 2 years ago • 12 comments

I'm not sure whether the problem is related to Colab; I also get the same error using Jupyter locally on my Ubuntu server. The basic readme.md example doesn't work and the cell never finishes executing.

Here's the code and stack trace if that helps:

from ragatouille import RAGPretrainedModel

RAG = RAGPretrainedModel.from_pretrained("colbert-ir/colbertv2.0")
my_documents = [
    "This is a great excerpt from my wealth of documents",
    "Once upon a time, there was a great document"
]

index_path = RAG.index(index_name="my_index", collection=my_documents)

The cell outputs the following:

[Jan 06, 10:41:35] #> Creating directory .ragatouille/colbert/indexes/my_index 


#> Starting...
#> Starting...
nranks = 2 	 num_gpus = 2 	 device=1
[Jan 06, 10:41:38] [1] 		 #> Encoding 0 passages..
nranks = 2 	 num_gpus = 2 	 device=0
[Jan 06, 10:41:38] [0] 		 #> Encoding 2 passages..
 File "/home/np/miniconda3/envs/np-ml/lib/python3.10/site-packages/colbert/indexing/collection_indexer.py", line 101, in setup
    avg_doclen_est = self._sample_embeddings(sampled_pids)
  File "/home/np/miniconda3/envs/np-ml/lib/python3.10/site-packages/colbert/indexing/collection_indexer.py", line 140, in _sample_embeddings
    self.num_sample_embs = torch.tensor([local_sample_embs.size(0)]).cuda()
AttributeError: 'NoneType' object has no attribute 'size'

Originally posted by @timothepearce in https://github.com/bclavie/RAGatouille/issues/14#issuecomment-1879636583

bclavie avatar Jan 06 '24 11:01 bclavie

Hey @timothepearce, I've created the issue here!

I think this is what's going on:

The README examples are too short. I'll update them shortly to make sure the document collections are big enough.

I can see from your trace that you're using 2 GPUs (num_gpus = 2). The embedding sample most likely ends up as a NoneType object because upstream ColBERT splits the document collection into one batch per GPU; with too few documents, one of those batches is empty, and encoding it produces None.

Does it work if you use more examples?
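The failure mode described above can be sketched in a few lines; the splitting rule and names below are illustrative assumptions, not ColBERT's actual implementation:

```python
# Illustrative sketch (assumed names and splitting rule, NOT ColBERT's
# actual code): passages are split into one contiguous chunk per GPU rank,
# and a rank left with an empty chunk "encodes" None, which later crashes
# on local_sample_embs.size(0) as in the traceback above.

def split_for_rank(passages, rank, nranks, min_chunk=2):
    """Contiguous per-rank chunk; min_chunk is a hypothetical floor."""
    chunk = max(min_chunk, -(-len(passages) // nranks))  # ceil division
    return passages[rank * chunk:(rank + 1) * chunk]

def encode(local_passages):
    # Stand-in for the real encoder: nothing to encode -> None,
    # mirroring the None seen in the traceback.
    if not local_passages:
        return None
    return [[0.0] * 128 for _ in local_passages]  # fake 128-d embeddings

passages = [
    "This is a great excerpt from my wealth of documents",
    "Once upon a time, there was a great document",
]
for rank in range(2):
    embs = encode(split_for_rank(passages, rank, nranks=2))
    # rank 0 encodes 2 passages; rank 1 gets an empty chunk, so embs is
    # None and the subsequent embs.size(0) call raises AttributeError
```

With a larger collection every rank receives at least one passage, which is why adding more documents makes the error disappear.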

bclavie avatar Jan 06 '24 11:01 bclavie

I've just merged #18 (which adds the Wikipedia page fetcher) and pushed a fixed version to PyPI; the README example should be a lot more functional now!

bclavie avatar Jan 06 '24 11:01 bclavie

That was quick! I was inspecting the source code while you were fixing it. Nice job!

I'm struggling with another issue (not related to your package), but I'll keep you informed.

timothepearce avatar Jan 06 '24 13:01 timothepearce

Thanks, glad I could fix it for you!

bclavie avatar Jan 06 '24 13:01 bclavie

> I'm struggling with another issue (not related to your package), but I'll keep you informed.

Oh sorry, I glossed over that -- let me know if it's something I can assist with!

bclavie avatar Jan 06 '24 13:01 bclavie

@bclavie, the 0.0.2b version isn't available on PyPI, but the code works as I tested it by cloning the repo instead.

timothepearce avatar Jan 06 '24 14:01 timothepearce

My bad, it seems Poetry silently crashed during publish... Live on PyPI now!

bclavie avatar Jan 06 '24 14:01 bclavie

@bclavie not a bug, but to carry out some benchmarks, I indexed 1000 documents and noticed that the library currently only uses one GPU at a time but loads the embedding model on both devices.

[Jan 06, 15:23:17] #> Creating directory .ragatouille/colbert/indexes/presentation_1000 

#> Starting...
#> Starting...
nranks = 2 	 num_gpus = 2 	 device=1
[Jan 06, 15:23:21] [1] 		 #> Encoding 17079 passages..
nranks = 2 	 num_gpus = 2 	 device=0
[Jan 06, 15:23:21] [0] 		 #> Encoding 31537 passages..
[Jan 06, 15:23:52] [0] 		 avg_doclen_est = 99.43394470214844 	 len(local_sample) = 31,537
[Jan 06, 15:23:52] [1] 		 avg_doclen_est = 99.43394470214844 	 len(local_sample) = 17,079
[Jan 06, 15:23:52] [0] 		 Creating 32,768 partitions.
[Jan 06, 15:23:52] [0] 		 *Estimated* 7,650,049 embeddings.
[Jan 06, 15:23:52] [0] 		 #> Saving the indexing plan to .ragatouille/colbert/indexes/presentation_1000/plan.json ..
Clustering 4783720 points in 128D to 32768 clusters, redo 1 times, 20 iterations
  Preprocessing in 0.14 s
  Iteration 0 (696.46 s, search 696.33 s): objective=1.51976e+06 imbalance=1.742 nsplit=0

Here is the output of nvidia-smi:

+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.129.03             Driver Version: 535.129.03   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 4090        Off | 00000000:01:00.0 Off |                  Off |
|  0%   38C    P8              16W / 450W |   1036MiB / 24564MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   1  NVIDIA GeForce RTX 4090        Off | 00000000:03:00.0 Off |                  Off |
| 30%   31C    P2              67W / 450W |   2616MiB / 24564MiB |    100%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A   1654032      C   ...np/miniconda3/envs/np-ml/bin/python     1026MiB |
|    1   N/A  N/A   1654070      C   ...np/miniconda3/envs/np-ml/bin/python     2600MiB |
+---------------------------------------------------------------------------------------+

Do you know how I can optimise the embedding/indexing phase?

timothepearce avatar Jan 06 '24 15:01 timothepearce

Oh, this is interesting, thanks for flagging it! The indexing part is fully deferred to upstream ColBERT itself (Stanford's colbert-ai library), but I'll add it to my to-do list to dig in and make sure the multi-GPU settings are properly passed through.

Overall, sadly indexing can be quite slow (it's by far the slowest part of ColBERT).
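For context, upstream ColBERT's documented entry point takes the GPU count through RunConfig(nranks=...); a rough configuration sketch of driving it directly (the experiment and index names here are assumed placeholders) looks like:

```python
from colbert import Indexer
from colbert.infra import Run, RunConfig, ColBERTConfig

my_documents = ["..."]  # your passages

# nranks controls how many GPU processes ColBERT spawns for indexing
with Run().context(RunConfig(nranks=2, experiment="my_experiment")):
    config = ColBERTConfig(nbits=2)  # 2-bit residual compression
    indexer = Indexer(checkpoint="colbert-ir/colbertv2.0", config=config)
    indexer.index(name="my_index", collection=my_documents)
```

Even with nranks=2, this doesn't guarantee both GPUs stay busy end-to-end; the k-means clustering stage in particular may run on a single device, which would match the nvidia-smi output above.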

bclavie avatar Jan 06 '24 16:01 bclavie

@bclavie Yes, I noticed this is one of the main disadvantages compared to dense embedding models. The 1000 documents took almost 4 hours to process, but the CPU was the bottleneck, not the GPU (even with only one running).

Do you have any QPS benchmarks, or numbers on memory footprint versus the number of vectors indexed?

For my use case, which consists of indexing several million documents, ColBERT is probably a better choice as a reranker. Given the number of vectors, I wouldn't be surprised if queries were slower than more traditional methods.

Thanks for all your hard work, ColBERT has always been challenging to use!

timothepearce avatar Jan 06 '24 19:01 timothepearce

> @bclavie Yes, I noticed this is one of the main disadvantages compared to dense embedding models. The 1000 documents took almost 4 hours to process, but the CPU was the bottleneck, not the GPU (even with only one running). Do you have any QPS benchmarks and memory footprint compared to the number of vectors indexed?

cc @okhat

> For my use case, which consists of indexing several million documents, ColBERT is probably a better choice as a reranker.

That's fair! I'm planning to build RAGPretrainedModel.rerank(query: str, documents: list[str]) soon to support index-free re-ranking: just pass a query plus a list of strings (as suggested in #6). If you're interested, I'll ping you when it ships.
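For what it's worth, the scoring behind such an index-free reranker is ColBERT's late interaction (MaxSim): each query token is matched against its best document token and the maxima are summed. A toy sketch in plain Python (the function names and 2-d embeddings are illustrative, not RAGatouille's API; real embeddings would come from the ColBERT encoder):

```python
# Toy illustration of ColBERT-style late interaction (MaxSim) scoring.
# Embeddings here are hand-written 2-d vectors; a real reranker would
# obtain per-token embeddings from the ColBERT encoder.

def maxsim_score(query_embs, doc_embs):
    # For each query token, take its best dot-product match against any
    # document token, then sum those maxima over the query tokens.
    return sum(
        max(sum(q * d for q, d in zip(q_tok, d_tok)) for d_tok in doc_embs)
        for q_tok in query_embs
    )

def rerank(query_embs, docs_embs):
    # Return document indices ordered by descending MaxSim score.
    scores = [maxsim_score(query_embs, d) for d in docs_embs]
    return sorted(range(len(docs_embs)), key=lambda i: -scores[i])

query = [[1.0, 0.0]]                 # one query token
docs = [[[0.0, 1.0]], [[1.0, 0.0]]]  # two one-token documents
best_first = rerank(query, docs)     # the matching document ranks first
```

No index is needed here because every candidate document is scored exhaustively, which is exactly what makes this attractive as a reranker over a small candidate set rather than over millions of documents.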

> Given the number of vectors, I wouldn't be surprised if queries were slower than more traditional methods.

I'm pretty sure that once indexed (which is indeed the challenging part), ColBERT still queries very fast, but it would be worth double-checking!

> Thanks for all your hard work, ColBERT has always been challenging to use!

Thank you, I'm glad this has been useful to you!

bclavie avatar Jan 06 '24 19:01 bclavie

> If you're interested, I'll ping you when it ships.

Please yes!

> I'm pretty sure once indexed (that is indeed a challenging task), ColBERT would still query super fast, but would be worth double checking!

I'm still working on RAGatouille. I'll run some benchmarks on a more extensive dataset and post them here if you're interested.

timothepearce avatar Jan 06 '24 19:01 timothepearce

> I'm still working on RAGatouille. I'll run some benchmarks on a more extensive dataset and post them here if you're interested.

Would love that, yes! All early feedback is more than welcome, thank you!

I'll close the issue for now (to keep track of bugs), but feel free to keep posting here (I'll ping you on the reranker issue once that's live).

bclavie avatar Jan 07 '24 14:01 bclavie