
<MilvusException: (code=65535, message=empty sparse float vector row)>

Open shilei4260 opened this issue 9 months ago • 23 comments

Is there an existing issue for this?

  • [X] I have searched the existing issues

Environment

- Milvus version:
- Deployment mode(standalone or cluster):
- MQ type(rocksmq, pulsar or kafka):    
- SDK version(e.g. pymilvus v2.0.0rc2):
- OS(Ubuntu or CentOS): 
- CPU/Memory: 
- GPU: 
- Others:

Current Behavior

Error encountered when building sparse and dense vectors, following https://github.com/milvus-io/pymilvus/blob/master/examples/hello_hybrid_sparse_dense.py

Expected Behavior

No response

Steps To Reproduce

No response

Milvus Log

No response

Anything else?

No response

shilei4260 avatar May 11 '24 05:05 shilei4260

@shilei4260 which Milvus version are you running? Please provide the Milvus logs for investigation, thanks. /assign @shilei4260 /unassign

yanliang567 avatar May 11 '24 06:05 yanliang567

Which model are you using? Random or M3?

xiaofan-luan avatar May 11 '24 07:05 xiaofan-luan

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. Rotten issues close after 30d of inactivity. Reopen the issue with /reopen.

stale[bot] avatar Jun 11 '24 03:06 stale[bot]

Hi, I met a similar error. I use the BM25 embedding function and call encode_queries:

sparse_embeddings = self.bm25_ef.encode_queries([rewritten_query])

but the returned sparse embedding is empty. Why is that? The bm25_ef is:

def bm25_ef(self):
    bm = BM25EmbeddingFunction(build_default_analyzer(language="zh"))
    bm.load("bm25_params.json")
    return bm

Note that my input query is "图片尺寸"; I think the BM25 tokenizer (i.e. the default analyzer) should split it into "图片" and "尺寸". I can find the Unicode-escaped forms of "图片" and "尺寸" in bm25_params.json, so I think the problem is that the default analyzer does not tokenize my query.

xxxfzxxx avatar Jun 26 '24 08:06 xxxfzxxx

urgent

xxxfzxxx avatar Jun 26 '24 08:06 xxxfzxxx

@xxxfzxxx I'm checking this issue.

wxywb avatar Jun 26 '24 10:06 wxywb

@xxxfzxxx your observation is correct.

from pymilvus.model.sparse.bm25.tokenizers import build_default_analyzer

analyzer = build_default_analyzer(language="zh")

corpus = [
   "在登记册上所有的图片尺寸需要保持一致"
]

# analyzer can tokenize the text into tokens
tokens = analyzer(corpus[0])
print(analyzer.tokenizer.__dict__)
print("tokens:", tokens)

tokens: ['登记册', '上', '图片尺寸', '保持一致']

The popular Chinese tokenizer project jieba used by this implementation will not split '图片尺寸' into two words. However, jieba lets users adjust its vocabulary. You can create a new file called custom.txt:

图片 10000
尺寸 10000

Then load it before building the analyzer:

from pymilvus.model.sparse.bm25.tokenizers import build_default_analyzer
import jieba

jieba.load_userdict("./custom.txt")
analyzer = build_default_analyzer(language="zh")

corpus = [
   "在登记册上所有的图片尺寸需要保持一致"
]

# analyzer can tokenize the text into tokens
tokens = analyzer(corpus[0])
print(analyzer.tokenizer.__dict__)
print("tokens:", tokens)
tokens: ['登记册', '上', '图片', '尺寸', '保持一致']

wxywb avatar Jun 26 '24 11:06 wxywb

Adjusting the jieba vocabulary cannot handle all corner cases. At least we can fall back on a naive method:

from pymilvus.model.sparse.bm25.tokenizers import build_default_analyzer

class SimpleChineseTokenizer:
    # naive character-level tokenizer: every character becomes a token
    def tokenize(self, text: str):
        return list(text)


analyzer = build_default_analyzer(language="zh")
analyzer.tokenizer = SimpleChineseTokenizer()

corpus = [
   "在登记册上所有的图片尺寸需要保持一致"
]

# analyzer can tokenize the text into tokens
tokens = analyzer(corpus[0])
print(analyzer.tokenizer.__dict__)
print("tokens:", tokens)
tokens: ['登', '记', '册', '上', '图', '片', '尺', '寸', '需', '保', '持', '致']

wxywb avatar Jun 26 '24 11:06 wxywb

I wonder how the Milvus builtin BM25EmbeddingFunction embeds an unseen word in the query? From my observation, it gives nothing (None). What is the best solution if the tokens in the query do not occur in the previously fitted BM25 token dict?

xxxfzxxx avatar Jun 26 '24 11:06 xxxfzxxx

@xxxfzxxx, BM25 in this implementation calculates statistics (term frequencies, IDFs) over the tokenized words in the documents. If a word tokenized from a query was not seen in the documents, it contributes nothing to the relevance score. If you have such concerns, I think the best strategy is to tokenize Chinese sentences into single characters. For English, you need to tokenize into subwords (like GPT's BPE tokens).
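For illustration, a minimal sketch of the subword idea, assuming BM25EmbeddingFunction accepts any callable analyzer that maps text to a token list; the transformers dependency and the bert-base-uncased tokenizer are illustrative choices, not part of pymilvus:

# Sketch: wrap a subword tokenizer as the BM25 analyzer, so unseen surface
# forms can still share subwords with the fitted corpus.
from transformers import AutoTokenizer
from pymilvus.model.sparse import BM25EmbeddingFunction

hf_tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def subword_analyzer(text: str):
    # subword splits depend on the tokenizer's vocabulary,
    # e.g. "tokenizers" -> ['token', '##izer', '##s']
    return hf_tokenizer.tokenize(text)

bm25_ef = BM25EmbeddingFunction(analyzer=subword_analyzer)
bm25_ef.fit([
    "believable stories travel fast",
    "an unbelievable claim needs strong evidence",
])
# "evidence" appears in the fitted corpus, so the query row has nnz > 0
print(bm25_ef.encode_queries(["unbelievable evidence"]).nnz)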

wxywb avatar Jun 26 '24 11:06 wxywb

> Hi, I met a similar error. I use the BM25 embedding function and call encode_queries […] I think the problem is that the default analyzer does not tokenize my query.

Do you mean you get a zero-size sparse embedding, or a sparse embedding that is all zeros (with size equal to your len(idf))?

wxywb avatar Jun 26 '24 12:06 wxywb

If the corpus doesn't have this word, you will get 0 in this dimension, because no document will match it.
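A minimal sketch of that behavior; the toy English corpus and the out-of-vocabulary query are made up for illustration:

from pymilvus.model.sparse.bm25.tokenizers import build_default_analyzer
from pymilvus.model.sparse import BM25EmbeddingFunction

# fit on a tiny corpus, then encode a query whose only word never appears in it
bm25_ef = BM25EmbeddingFunction(build_default_analyzer(language="en"))
bm25_ef.fit(["the quick brown fox", "jumps over the lazy dog"])

oov = bm25_ef.encode_queries(["zebra"])
# a valid CSR row comes back, but every dimension in it is zero
print(oov.shape, oov.nnz)  # (1, <vocab size>), 0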

xiaofan-luan avatar Jun 26 '24 13:06 xiaofan-luan

> Do you mean you get a zero-size sparse embedding, or a sparse embedding that is all zeros (with size equal to your len(idf))?

Yes, I printed the "图片尺寸" sparse embedding and it output nothing. It should give me a CSR matrix, right?

xxxfzxxx avatar Jun 27 '24 02:06 xxxfzxxx

> Yes, I printed the "图片尺寸" sparse embedding and it output nothing. It should give me a CSR matrix, right?

Please show me your full code.

wxywb avatar Jun 27 '24 02:06 wxywb

> Please show me your full code.

dense_embeddings = [self.bgem3_model.get_embedding([query])[0]['dense_vecs']]
rewritten_query = self.get_query_rewrite(query)
sparse_embeddings = self.bm25_ef.encode_queries([rewritten_query])
col = Collection(name=collection_name)
col.load()
search_param_dense = {
    "data": dense_embeddings,
    "anns_field": "dense_vector",
    "param": {
        "metric_type": "COSINE",
        "params": {"nprobe": 10}
    },
    "limit": 100
}
search_param_sparse = {
    "data": sparse_embeddings,
    "anns_field": "sparse_vector",
    "param": {
        "metric_type": "IP",
        "params": {"nprobe": 10}
    },
    "limit": 100  # TODO
}
request_dense = AnnSearchRequest(**search_param_dense)
request_sparse = AnnSearchRequest(**search_param_sparse)

reqs = [request_dense, request_sparse]
weighted_rerank = WeightedRanker(dense_weight, 1 - dense_weight)

res = col.hybrid_search(
    reqs,
    weighted_rerank,
    limit=retrieved_cnt,
    output_fields=['doc_id', 'text', 'metadata']
)


xxxfzxxx avatar Jun 27 '24 04:06 xxxfzxxx

I wonder how you get the None sparse embedding. Per https://github.com/milvus-io/milvus-model/blob/d812c9a84f2c530919ddffec8bf4024cce841e6b/milvus_model/sparse/bm25/bm25.py#L130, you get a csr_array even if you have an empty self.idf.

wxywb avatar Jun 27 '24 05:06 wxywb

> I wonder how you get the None sparse embedding. […] you get a csr_array even if you have an empty self.idf.

My bad. I checked the type of sparse_embeddings with print(">>>>", type(sparse_embeddings), sparse_embeddings), and the output is >>>> <class 'scipy.sparse._csr.csr_matrix'>, meaning the sparse embedding is a csr_matrix. Since all values in the matrix are zeros, it does not print anything.

Then, how do I search with it? Can you tell me how to update my hybrid search? (The search code is the same as above.) It fails with:

raise MilvusException(status.code, status.reason, status.error_code)
pymilvus.exceptions.MilvusException: <MilvusException: (code=65535, message=fail to search on QueryNode 33: worker(33) query failed: Assert "size > 0" at /go/src/github.com/milvus-io/milvus/internal/core/src/common/Utils.h:227 => Sparse row data should not be empty)>

xxxfzxxx avatar Jun 27 '24 06:06 xxxfzxxx

@xxxfzxxx Your sparse embeddings seem to have zero length. Use the following code to verify this:

print(sparse_embeddings.toarray().shape)

I think it will be a 0-length sparse embedding. Then you need to verify your BM25 IDF with:

print('elements in idf:', len(bm25_ef.idf))

It shouldn't be empty if you have fitted your corpus.

wxywb avatar Jun 27 '24 06:06 wxywb

(1, 18722)
elements in idf: 18722

xxxfzxxx avatar Jun 27 '24 07:06 xxxfzxxx

Note that the sparse_vector field schema is FieldSchema(name="sparse_vector", dtype=DataType.SPARSE_FLOAT_VECTOR) and the index is:

sparse_index = {"index_type": "SPARSE_INVERTED_INDEX", "metric_type": "IP"}
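For context, a hedged sketch of how such a hybrid collection could be defined end to end; the collection name, dense dim (1024 matches BGE-M3 dense vectors), and connection details are illustrative assumptions:

from pymilvus import (
    connections, Collection, CollectionSchema, FieldSchema, DataType,
)

connections.connect(host="localhost", port="19530")

fields = [
    FieldSchema(name="pk", dtype=DataType.INT64, is_primary=True, auto_id=True),
    FieldSchema(name="dense_vector", dtype=DataType.FLOAT_VECTOR, dim=1024),
    FieldSchema(name="sparse_vector", dtype=DataType.SPARSE_FLOAT_VECTOR),
]
col = Collection("hybrid_demo", CollectionSchema(fields))

# dense index: FLAT/COSINE as in the search params above; sparse: inverted index
col.create_index("dense_vector", {"index_type": "FLAT", "metric_type": "COSINE"})
col.create_index("sparse_vector",
                 {"index_type": "SPARSE_INVERTED_INDEX", "metric_type": "IP"})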

xxxfzxxx avatar Jun 27 '24 07:06 xxxfzxxx

> @xxxfzxxx Your sparse embeddings seem to have zero length. […] It shouldn't be empty if you have fitted your corpus.

My query's sparse embeddings are not zero length; it is actually an all-zero CSR matrix.

xxxfzxxx avatar Jun 27 '24 07:06 xxxfzxxx

Milvus requires the number of non-zeros (nnz) in a sparse embedding (for both documents and queries) to be greater than 0. Users need to check the nnz of every row of their sparse embeddings before inserting/searching; when it equals zero, you need to fall back on dense retrieval.

sparse_embeddings.nnz     # nnz across all rows, if sparse_embeddings contains multiple rows
sparse_embeddings[0].nnz  # nnz of the first row of sparse_embeddings

The reason behind this is that, since IP is the only available distance metric, an embedding with zero non-zero values has an IP distance of 0 to every other embedding, so no distance judgment can be made.
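A minimal sketch of such a pre-insert/pre-search check; docs and bm25_ef are assumed to exist as in the earlier snippets, and the indptr difference simply computes per-row nnz for any CSR matrix/array:

import numpy as np

# find rows that Milvus would reject as "empty sparse float vector row"
doc_vecs = bm25_ef.encode_documents(docs)
row_nnz = np.diff(doc_vecs.indptr)   # nnz of each CSR row
bad_rows = np.where(row_nnz == 0)[0]
if len(bad_rows) > 0:
    # drop these rows (and their dense counterparts) or re-tokenize them
    print("all-zero sparse rows:", bad_rows)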

wxywb avatar Jun 27 '24 08:06 wxywb

It seems that for the BM25EmbeddingFunction, there is a risk of generating an all-zero query sparse embedding, which is not supported by Milvus.

wxywb avatar Jun 27 '24 08:06 wxywb

I saw that https://github.com/milvus-io/milvus-model/blob/main/milvus_model/sparse/bm25/bm25.py line 194 downloads a JSON file (https://github.com/milvus-io/pymilvus-assets/releases/download/v0.1-bm25v1/bm25_msmarco_v1.json), but I cannot find it anywhere. Can you provide a Chinese version?

xxxfzxxx avatar Jul 01 '24 11:07 xxxfzxxx

It downloads this file to the directory where you executed the code. Currently I have only fitted the BM25EmbeddingFunction on the MS MARCO dataset, for English. If you fit it on your own dataset, you will get better results. If you want a pretrained sparse embedding function for Chinese, I strongly recommend testing this: https://milvus.io/docs/embed-with-bgm-m3.md.
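A minimal sketch of fitting it on your own Chinese corpus; the corpus contents and the output file name are illustrative:

from pymilvus.model.sparse.bm25.tokenizers import build_default_analyzer
from pymilvus.model.sparse import BM25EmbeddingFunction

corpus = [
    "在登记册上所有的图片尺寸需要保持一致",
    # ... the rest of your documents
]

bm25_ef = BM25EmbeddingFunction(build_default_analyzer(language="zh"))
bm25_ef.fit(corpus)                  # computes term frequencies and IDFs
bm25_ef.save("bm25_params_zh.json")  # reload later with bm25_ef.load(...)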

wxywb avatar Jul 01 '24 15:07 wxywb

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. Rotten issues close after 30d of inactivity. Reopen the issue with /reopen.

stale[bot] avatar Aug 04 '24 08:08 stale[bot]

Can this PR solve the problem?

wangyiran33 avatar Sep 03 '24 02:09 wangyiran33

I think an empty sparse float vector is a good signal; usually it means your corpus didn't fit the training dataset at all. You should think about using another model like SPLADE or M3.

xiaofan-luan avatar Sep 04 '24 00:09 xiaofan-luan

> I think an empty sparse float vector is a good signal; usually it means your corpus didn't fit the training dataset at all. You should think about using another model like SPLADE or M3.

I am using BM25, which has the advantage of low training cost. However, it inevitably leads to cases where a user query contains no terms from the corpus, resulting in an empty sparse query vector. In such a hybrid-retrieval case, sparse retrieval should simply return no results, and the hybrid result should be the dense retrieval result; that may be better than throwing an error.
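A hedged sketch of that degradation done on the client side for now, reusing the names from the hybrid-search code earlier in the thread (whether Milvus should do this server-side is exactly what is being discussed):

# if the sparse query row is all zeros, skip the sparse request and run a
# dense-only search; otherwise run the normal hybrid search
if sparse_embeddings[0].nnz == 0:
    res = col.search(
        data=dense_embeddings,
        anns_field="dense_vector",
        param={"metric_type": "COSINE", "params": {"nprobe": 10}},
        limit=retrieved_cnt,
        output_fields=['doc_id', 'text', 'metadata'],
    )
else:
    res = col.hybrid_search(
        [request_dense, request_sparse],
        WeightedRanker(dense_weight, 1 - dense_weight),
        limit=retrieved_cnt,
        output_fields=['doc_id', 'text', 'metadata'],
    )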

wangyiran33 avatar Sep 05 '24 09:09 wangyiran33

Makes sense to me. @zhengbuqian what do you think?

xiaofan-luan avatar Sep 05 '24 23:09 xiaofan-luan