This repository has been archived by the owner on Aug 16, 2023. It is now read-only.

Try to support float16 for flat.cc #876

Open · wants to merge 2 commits into base: main

Conversation

@jjyaoao commented May 13, 2023

related to #877

@sre-ci-robot

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: jjyaoao
To complete the pull request process, please assign presburger after the PR has been reviewed.
You can assign the PR to them by writing /assign @presburger in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@sre-ci-robot

Welcome @jjyaoao! It looks like this is your first PR to milvus-io/knowhere 🎉

@mergify bot added the needs-dco (DCO is missing in this pull request) label on May 13, 2023
@mergify bot commented May 13, 2023

@jjyaoao Please associate the related issue to the body of your Pull Request. (eg. “issue: #”)

@mergify bot added the dco-passed (DCO check passed) label and removed the needs-dco (DCO is missing in this pull request) and do-not-merge/missing-related-issue labels on May 13, 2023
Signed-off-by: jjyaoao <[email protected]>
@liliu-z (Member) commented May 15, 2023

Hi @jjyaoao, thanks for contributing. We only allow one commit per PR; can you squash your commits and make the tests pass first?

@jjyaoao (Author) commented May 15, 2023

> Hi @jjyaoao, thanks for contributing. We only allow one commit per PR; can you squash your commits and make the tests pass first?

OK, thank you. I will squash the commits into one after the tests pass locally. May I ask whether the idea of converting the incoming float16 to float32 is correct?
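
For reference, the conversion idea discussed above could look roughly like the following. This is a minimal, self-contained sketch, not Knowhere's actual API; the helper names HalfToFloat and Fp16BufferToFp32 are invented for illustration.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

// Decode one IEEE 754 binary16 value (passed as its raw 16-bit pattern) into a float.
static float
HalfToFloat(uint16_t h) {
    uint32_t sign = static_cast<uint32_t>(h >> 15) << 31;
    uint32_t exp = (h >> 10) & 0x1F;
    uint32_t mant = h & 0x3FF;

    uint32_t bits;
    if (exp == 0x1F) {                     // Inf / NaN
        bits = sign | 0x7F800000u | (mant << 13);
    } else if (exp != 0) {                 // normal number: rebias exponent (bias 15 -> bias 127)
        bits = sign | ((exp + 112) << 23) | (mant << 13);
    } else if (mant == 0) {                // +/- zero
        bits = sign;
    } else {                               // subnormal half: renormalize into the float range
        uint32_t shift = 0;
        while ((mant & 0x400) == 0) {
            mant <<= 1;
            ++shift;
        }
        bits = sign | ((113 - shift) << 23) | ((mant & 0x3FF) << 13);
    }

    float f;
    std::memcpy(&f, &bits, sizeof(f));
    return f;
}

// Convert a packed fp16 buffer (n vectors of dimension dim) into an fp32 buffer.
std::vector<float>
Fp16BufferToFp32(const uint16_t* src, size_t n, size_t dim) {
    std::vector<float> dst(n * dim);
    for (size_t i = 0; i < n * dim; ++i) {
        dst[i] = HalfToFloat(src[i]);
    }
    return dst;
}
```

The converted fp32 buffer can then be fed to the existing flat code unchanged; the cost is one extra pass over the data and the temporary 2x memory during conversion.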

@liliu-z (Member) commented May 16, 2023

> Hi @jjyaoao, thanks for contributing. We only allow one commit per PR; can you squash your commits and make the tests pass first?
>
> OK, thank you. I will squash the commits into one after the tests pass locally. May I ask whether the idea of converting the incoming float16 to float32 is correct?

For now, Knowhere's input vector is a void* and by default it is treated as float32. It looks like this PR is trying to support fp16 as Knowhere's input type; can I ask if you plan to use it with Milvus? If so, this won't work, since Milvus doesn't support fp16 yet.
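
To make the void*-to-float32 convention concrete, the current assumption and the change this PR implies can be illustrated as below. Hypothetical names only, not actual Knowhere code.

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical type tag; Knowhere's real configuration plumbing differs.
enum class VecDataType { kFloat32, kFloat16 };

// Today the raw tensor pointer is simply reinterpreted as const float*.
// Accepting fp16 would mean carrying a type tag next to the void* buffer
// and branching to either a conversion step or an fp16-aware kernel.
void
ProcessTensor(const void* tensor, size_t n, size_t dim, VecDataType type) {
    if (type == VecDataType::kFloat32) {
        const float* data = static_cast<const float*>(tensor);
        // ... build/search with fp32 vectors ...
        (void)data;
    } else {
        const uint16_t* raw_fp16 = static_cast<const uint16_t*>(tensor);  // raw fp16 bits
        // ... convert to fp32 up front, or call an fp16-aware distance kernel ...
        (void)raw_fp16;
    }
    (void)n;
    (void)dim;
}
```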

@jjyaoao (Author) commented May 16, 2023

> Hi @jjyaoao, thanks for contributing. We only allow one commit per PR; can you squash your commits and make the tests pass first?
>
> OK, thank you. I will squash the commits into one after the tests pass locally. May I ask whether the idea of converting the incoming float16 to float32 is correct?
>
> For now, Knowhere's input vector is a void* and by default it is treated as float32. It looks like this PR is trying to support fp16 as Knowhere's input type; can I ask if you plan to use it with Milvus? If so, this won't work, since Milvus doesn't support fp16 yet.

Thank you for your explanation. I want to take on the OSPP project "Milvus supports FP16 type vectors", so I am doing some experiments now.

@liliu-z (Member) commented May 16, 2023

Aha, please let me know if I can help. "Milvus supports FP16" is an ambiguous topic:

  1. It can mean supporting FP16 as the input type of Milvus.
  2. Or it can mean doing the calculations in FP16 inside Knowhere, while still using FP32 as the input of Milvus.

For the first one, I have to say it is a little bit complicated, since we need to define how to accept FP16 as input end to end (PyMilvus -> Milvus -> Knowhere).
For the second one, we need to modify the 3rdparty libs to support FP16.
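
What option 2 amounts to at the storage level can be sketched as below, assuming a compiler with the _Float16 extension (recent GCC/Clang); the function name is invented for illustration.

```cpp
#include <cstddef>
#include <vector>

// Option 2 in a nutshell: callers still pass fp32 vectors, but the index
// stores them as fp16 to halve memory; distance kernels can later read the
// fp16 codes and widen back to fp32 for accumulation.
std::vector<_Float16>
EncodeFp32AsFp16(const float* src, size_t n, size_t dim) {
    std::vector<_Float16> codes(n * dim);
    for (size_t i = 0; i < n * dim; ++i) {
        codes[i] = static_cast<_Float16>(src[i]);  // narrowing handled by the compiler
    }
    return codes;
}
```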

@jjyaoao (Author) commented May 16, 2023

> Aha, please let me know if I can help. "Milvus supports FP16" is an ambiguous topic:
>
>   1. It can mean supporting FP16 as the input type of Milvus.
>   2. Or it can mean doing the calculations in FP16 inside Knowhere, while still using FP32 as the input of Milvus.
>
> For the first one, I have to say it is a little bit complicated, since we need to define how to accept FP16 as input end to end (PyMilvus -> Milvus -> Knowhere). For the second one, we need to modify the 3rdparty libs to support FP16.

Thank you. I think it should be the second meaning (because the difficulty level of this project is basic). If I want to modify the 3rdparty lib, what should I do? Jiao, the mentor of this project, told me that I should investigate the Knowhere indexes (IVF, HNSW, etc.) and choose a simple one to try to support float16.
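
For the "choose a simple index first" suggestion, a flat (brute-force) search is the most self-contained place to experiment: there is no graph or clustering structure, only a distance loop. A minimal sketch, again assuming the _Float16 compiler extension and with invented names:

```cpp
#include <cstddef>
#include <limits>
#include <utility>

// Brute-force ("flat") top-1 L2 search over fp16-stored vectors.
// Each element is widened to fp32 on the fly and the distance is
// accumulated in fp32 to limit rounding error.
std::pair<size_t, float>
FlatSearchTop1Fp16(const _Float16* base, size_t n, size_t dim, const _Float16* query) {
    size_t best_id = 0;
    float best_dist = std::numeric_limits<float>::max();
    for (size_t i = 0; i < n; ++i) {
        const _Float16* vec = base + i * dim;
        float dist = 0.0f;
        for (size_t d = 0; d < dim; ++d) {
            float diff = static_cast<float>(vec[d]) - static_cast<float>(query[d]);
            dist += diff * diff;
        }
        if (dist < best_dist) {
            best_dist = dist;
            best_id = i;
        }
    }
    return {best_id, best_dist};
}
```

IVF and HNSW would need the same fp16-aware distance primitive plus changes inside the third-party libraries they wrap, which is why the flat index is the natural first target.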

Labels: dco-passed (DCO check passed), size/XXL
Projects: None yet
3 participants