knowhere
Try to support float16 for flat.cc
related to #877
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: jjyaoao
To complete the pull request process, please assign presburger after the PR has been reviewed.
You can assign the PR to them by writing /assign @presburger in a comment when ready.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment.
Approvers can cancel approval by writing /approve cancel in a comment.
Welcome @jjyaoao! It looks like this is your first PR to milvus-io/knowhere 🎉
@jjyaoao Please associate the related issue with the body of your Pull Request. (e.g. “issue: #
Hi @jjyaoao, thanks for contributing. We only allow one commit per PR; could you squash your commits and pass the tests first?
OK, thank you. I will squash the commits into one after passing the tests locally. May I ask whether the idea of converting the incoming float16 to float32 is correct?
For now, Knowhere's input vector is a void* and by default will be interpreted as float32. It looks like this PR is trying to support using fp16 as Knowhere's input type. Can I ask if you plan to use it with Milvus? If so, this will not work, since Milvus doesn't support fp16 yet.
Thank you for your explanation. I want to take on the OSPP project "Milvus supports FP16 type vectors", so I am doing some experiments now.
Aha, please let me know if I can help. "Milvus supports FP16" is an ambiguous topic:
- It can mean supporting FP16 as the input type of Milvus.
- Or it can mean doing the calculation in FP16 inside Knowhere, while keeping FP32 as the input of Milvus.
For the first one, I have to say it is a little complicated, since we need to define how to accept FP16 as input end to end (Pymilvus -> Milvus -> Knowhere). For the second one, we need to modify the 3rd-party libs to support FP16.
Thank you, I think it should be the second meaning (because the difficulty of this project is rated basic). If I want to modify the 3rd-party lib, what should I do? Jiao, the mentor for this project, told me that I should investigate Knowhere's indexes (IVF, HNSW, etc.) and choose a simple one to try to support float16.