Try to support float16 for flat.cc #876
base: main
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED This pull-request has been approved by: jjyaoao The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Welcome @jjyaoao! It looks like this is your first PR to milvus-io/knowhere 🎉
@jjyaoao Please associate the related issue with the body of your pull request. (e.g. "issue: #")
Signed-off-by: jjyaoao <[email protected]>
Hi @jjyaoao, thanks for contributing. We only allow one commit per PR. Can you squash your commits and pass the tests first?
OK, thank you. I will squash the commits into one after passing the tests locally. I would also like to ask: is the idea of converting the incoming float16 to float32 correct?
For now Knowhere's input vector is a
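The widening idea discussed above (decode incoming float16 values to float32 before they reach the existing fp32 index path) can be sketched as follows. Note this is only an illustration of the conversion itself, not Knowhere's actual API: `HalfToFloat` is a hypothetical helper that decodes one IEEE-754 binary16 value stored in a `uint16_t`.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical helper (not part of Knowhere): decode an IEEE-754 binary16
// value, stored as raw bits in a uint16_t, into a float.
float
HalfToFloat(uint16_t h) {
    uint32_t sign = static_cast<uint32_t>(h & 0x8000u) << 16;
    uint32_t exp = (h >> 10) & 0x1Fu;
    uint32_t frac = h & 0x3FFu;
    uint32_t bits;
    if (exp == 0) {
        if (frac == 0) {
            bits = sign;  // signed zero
        } else {
            // Subnormal half: shift the fraction up until the implicit
            // leading 1 appears, adjusting the exponent accordingly.
            int e = 0;
            while ((frac & 0x400u) == 0) {
                frac <<= 1;
                ++e;
            }
            bits = sign | ((127u - 15u - e) << 23) | ((frac & 0x3FFu) << 13);
        }
    } else if (exp == 0x1Fu) {
        bits = sign | 0x7F800000u | (frac << 13);  // inf / NaN
    } else {
        // Normal case: re-bias the exponent (15 -> 127), widen the fraction.
        bits = sign | ((exp - 15u + 127u) << 23) | (frac << 13);
    }
    float f;
    std::memcpy(&f, &bits, sizeof(f));
    return f;
}

// Widen a raw fp16 buffer into the float32 vector the existing index expects.
std::vector<float>
WidenFp16ToFp32(const uint16_t* data, size_t n) {
    std::vector<float> out(n);
    for (size_t i = 0; i < n; ++i) {
        out[i] = HalfToFloat(data[i]);
    }
    return out;
}
```

With a wrapper like this, a float16 input buffer could be converted once at the entry point and the rest of the flat index code would stay unchanged, at the cost of doubling the memory for the converted copy.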
Thank you for your explanation. I want to take on the OSPP "Milvus supports FP16 type vectors" competition topic, so I am doing some experiments now.
Aha, please let me know if I can help. "Milvus supports FP16" is an ambiguous topic:
For the first one, I have to say it is a little complicated, since we need to define how to accept FP16 as input end to end (Pymilvus -> Milvus -> Knowhere).
Thank you, I think it should be the second meaning (since this topic is rated as basic difficulty). If I want to modify the 3rdparty lib, what should I do? The mentor for this topic, Jiao, told me I should investigate Knowhere indexes such as IVF and HNSW, and choose a simple one to try to support float16.
related to #877