How does this compare to Huggingface's Text Embedding Inference?
Hi,
Thank you for your amazing work!
We'd like to add an embedding template for users to deploy on RunPod, and we're deciding between Infinity and HF's Text Embedding Inference. How would you say Infinity compares, especially in performance?
Hey @alpayariyak, great question. TEI is a great project that started slightly later than this one, and I like it (apart from its license).
Benchmarking is pretty subjective; e.g. a single-sentence, ~10-token query is not what you should benchmark. We typically deploy BERT-large on Nvidia L4 instances, sending batches of ~256 requests with ~380 tokens each. Under that load, the performance (batch throughput/latency) is likely the only metric you want to care about, since you need to serve under high load to get anything back for your money.
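For illustration, here is a rough sketch of that kind of load test. It assumes an OpenAI-compatible `/embeddings` route on Infinity's default port 7997 and a BERT-large class model as stand-ins; adjust the URL and payload for TEI, which uses its own `/embed` schema.

```python
# Rough load-test sketch: send batches of ~256 documents of a few hundred tokens
# each and report batch throughput. Endpoint, port and model name are assumptions.
import time

import requests

URL = "http://localhost:7997/embeddings"  # assumed OpenAI-compatible route
MODEL = "BAAI/bge-large-en-v1.5"          # stand-in for a "BERT-large class" model
BATCH = ["some document text that is roughly three hundred eighty tokens long " * 32] * 256

def bench(n_batches: int = 10) -> None:
    start = time.perf_counter()
    for _ in range(n_batches):
        r = requests.post(URL, json={"model": MODEL, "input": BATCH}, timeout=300)
        r.raise_for_status()
    elapsed = time.perf_counter() - start
    print(f"{n_batches / elapsed:.2f} batch requests/s, "
          f"{n_batches * len(BATCH) / elapsed:.0f} embeddings/s")

if __name__ == "__main__":
    bench()
```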
CPU:
On CPU, Infinity is around 3x faster when using the optimum engine. Candle/torch is not that great at CPU inference; ONNX has an edge here.
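If it helps, a minimal sketch of using the optimum (ONNX) engine on CPU via the Python API, following the pattern from the README; the model name is just an example, and the exact `EngineArgs` fields should be checked against the version you install.

```python
# Minimal sketch: Infinity with the ONNX/optimum backend on CPU.
# Assumes the AsyncEmbeddingEngine / EngineArgs API shown in the README; field
# names may differ between versions, so verify with your installed release.
import asyncio

from infinity_emb import AsyncEmbeddingEngine, EngineArgs

engine = AsyncEmbeddingEngine.from_args(
    EngineArgs(
        model_name_or_path="BAAI/bge-small-en-v1.5",  # example embedding model
        engine="optimum",                             # ONNX path, the fast option on CPU
        device="cpu",
    )
)

async def main() -> None:
    async with engine:  # starts/stops the batching loop
        embeddings, usage = await engine.embed(sentences=["Embed this on CPU."])
        print(len(embeddings[0]), usage)

asyncio.run(main())
```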
CUDA:
TEI is around 2-5% faster: 0.55 requests per second on TEI vs 0.52 on Infinity. You will need to choose the right image for this, and know that e.g. compute capability 8.9 is what you should go for on an Nvidia L4.
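To double-check which compute capability your GPU reports before picking an image:

```python
# An Nvidia L4 reports compute capability (8, 9), i.e. the "8.9" build mentioned above.
import torch

print(torch.cuda.get_device_capability(0))  # e.g. (8, 9) on an L4
```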
Startup:
The startup time is slightly faster / the same order of magnitude. This is for the GPU image; for roberta-large it's a similar gap. The Docker image of TEI is smaller; torch+cuda is a real heavyweight.
Additional features that TEI lacks:
- AMD GPUs (no Docker image yet, but TEI likely never will have one), AWS Inf2, Mac Metal inference
- fast inference on GPU.
- runs custom architectures and any new models with trust_remote_code=True (a sketch follows after this list)
- caching
- under an open license (MIT)
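Regarding the trust_remote_code item above, a hedged sketch via the Python API; I'm assuming `EngineArgs` exposes a `trust_remote_code` field mirroring the transformers argument, and the model name is just one example of a custom-architecture model.

```python
# Sketch: loading a model with custom modeling code. Assumes EngineArgs accepts
# trust_remote_code; check your installed version's signature.
import asyncio

from infinity_emb import AsyncEmbeddingEngine, EngineArgs

engine = AsyncEmbeddingEngine.from_args(
    EngineArgs(
        model_name_or_path="jinaai/jina-embeddings-v2-base-en",  # custom architecture
        engine="torch",
        trust_remote_code=True,  # allow the repo's own modeling code to be executed
    )
)

async def main() -> None:
    async with engine:
        embeddings, usage = await engine.embed(sentences=["Custom architectures work too."])
        print(len(embeddings), usage)

asyncio.run(main())
```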
@alpayariyak I invested about 4-5 h on this and set up an extra doc. Can I please have your feedback on it? https://michaelfeil.eu/infinity/latest/benchmarking/
The benchmark link seems dead, could you please repost?
Fixed!
Your project is amazing! :rocket:
I :heart: your LICENSE, which is better compared to the one of TEI (:-1:)
Have you ever thought about adding an API endpoint that could also serve as a TextSplitter? It would remove the need to load the same model into memory for both the text chunker and the embedder.
https://python.langchain.com/docs/modules/data_connection/document_transformers/split_by_token#sentencetransformers
@Jimmy-Newtron Can you open another issue for that?
#193
Is the integration meant to go into LangChain? What would be the expected usage? To count tokens?
The main goal would be to avoid loading the same model into memory twice:
- once to embed the chunks (passages), which is mandatory for the vector store
- a second time for the SentenceTransformers splitter, which actually loads the same model into memory again
> Is the integration meant to go into LangChain?
Yes, I suppose a LangChain integration would be required.
> What would be the expected usage? To count tokens?
To optimize the resources used (GPU, VRAM), it would be nice if the Infinity server could chunk long input sequences into smaller pieces that fit the context window of the chosen embedding model.
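Until something like that exists server-side, here is a sketch of doing the chunking client-side with only the tokenizer (so no second copy of the model weights in RAM/VRAM) and then embedding the chunks via the server. The endpoint, model name and chunk size are illustrative assumptions.

```python
# Token-aware chunking with just the tokenizer, then embedding via an assumed
# OpenAI-compatible /embeddings route on Infinity's default port 7997.
import requests
from transformers import AutoTokenizer

MODEL = "BAAI/bge-small-en-v1.5"
URL = "http://localhost:7997/embeddings"
tokenizer = AutoTokenizer.from_pretrained(MODEL)  # tokenizer only, no model weights

def chunk_by_tokens(text: str, max_tokens: int = 256) -> list[str]:
    """Split text into chunks that fit the embedding model's context window."""
    ids = tokenizer.encode(text, add_special_tokens=False)
    return [
        tokenizer.decode(ids[i : i + max_tokens])
        for i in range(0, len(ids), max_tokens)
    ]

long_document = "..."  # your long input text
chunks = chunk_by_tokens(long_document)
resp = requests.post(URL, json={"model": MODEL, "input": chunks}, timeout=60)
resp.raise_for_status()
embeddings = [item["embedding"] for item in resp.json()["data"]]
```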
I found an implementation of a similar concept in AI21 Studio's Text Segmentation, which is already available in the LangChain integrations.
Here are some sources that may be of interest for designing a solution:
> great question, TEI is a great project that started slightly later than this one, and I like it (apart from its license).
https://github.com/huggingface/text-embeddings-inference/issues/232 https://github.com/huggingface/text-embeddings-inference/commit/3c385a4fdced6c526a3ef3ec340e343a2fa40196
Does this mean that there will be a convergence of the two projects?
Hi @michaelfeil,
Some time has passed; what do you think about the comparison to TEI nowadays?
So far I'm very happy with Infinity; I've been using it for text embedding on GPU. Now I have a use case for text classification on CPU, with a small load. I'm wondering which solution to take and which is more future-proof.
Thanks!
@molntamas PyTorch and especially ONNX have very good optimizations on CPU, which are better than Candle's.