Xin Zhang

Results: 37 comments by Xin Zhang

Do you mean that if I want to do 4x SR, I need to do 2x SR first and then 2x SR again? Hmm, that's so simple and direct, haha. I tried to...
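That cascading idea can be sketched as below. Everything here is my own illustration, not the project's code: `upscale_2x` is a nearest-neighbor resize standing in for a real 2x super-resolution model, operating on an image given as a list of pixel rows.

```python
def upscale_2x(img):
    """Stand-in for a 2x super-resolution model: nearest-neighbor
    resize on an image represented as a list of pixel rows."""
    out = []
    for row in img:
        wide = [p for p in row for _ in range(2)]  # double the width
        out.append(wide)
        out.append(list(wide))                     # double the height
    return out

def upscale_4x(img):
    """4x SR by running the 2x upscaler twice, as suggested above."""
    return upscale_2x(upscale_2x(img))

tiny = [[1, 2],
        [3, 4]]
result = upscale_4x(tiny)
print(len(result), len(result[0]))  # 8 8
```

With a learned model in place of `upscale_2x`, the composition is the same: two 2x passes yield one 4x result.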

Can we run tango with only `torch` and `transformers`? I installed tango via `pip install ai2-tango[transformers]`. But

```
Tango version 1.0.0 (python 3.8.13)
Integrations: ✓ torch
[10/09/22 15:25:15] ERROR Uncaught...
```

> any particular reason why just these numbers? great work by the way!

Thx! Except for `4, 128, 1024`, the rest are randomly sampled from `1e0 - 1e8`. I think...
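That sampling scheme might look like the sketch below. The log-uniform draw is my assumption (the comment only says "randomly sampled from `1e0 - 1e8`"), and `sample_sizes` is a hypothetical helper name.

```python
import math
import random

def sample_sizes(n, fixed=(4, 128, 1024), lo=1e0, hi=1e8, seed=0):
    """Return the fixed anchor sizes plus n values drawn from [lo, hi].

    The draw is log-uniform so that every order of magnitude is equally
    represented -- an assumption, since the original comment does not
    specify the distribution."""
    rng = random.Random(seed)
    sampled = [
        int(10 ** rng.uniform(math.log10(lo), math.log10(hi)))
        for _ in range(n)
    ]
    return list(fixed) + sampled

sizes = sample_sizes(5)
```

A plain uniform draw over `[1, 1e8]` would almost never produce small values, which is why log-uniform is the more likely intent here.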

[Nandan Thakur](https://github.com/thakur-nandan) may have started https://github.com/embeddings-benchmark/mteb/issues/198#issuecomment-2050189570 on Miracl. Merging [Mr.TyDi](https://github.com/embeddings-benchmark/mteb/blob/main/mteb/tasks/Retrieval/kor/KoMrtydi.py) seems like a good idea. I think we would prefer not to run these massive retrieval tasks too many times...

> > I think we would prefer not to run these massive retrieval tasks too many times...
>
> If the dataset is massive we might consider reducing the size...

@orionw No, we do not have a threshold for the corpus size. It was just a random thought of mine. I apologize if I haven't made myself clear. What I intend...

Hmm, yeah, it may be about the inference batch_size. I found that I did the math wrong, sorry. In `RerankingEvaluator.compute_metrics_batched()` ([line 84](https://github.com/embeddings-benchmark/mteb/blob/main/mteb/evaluation/evaluators/RerankingEvaluator.py#L84) & [94](https://github.com/embeddings-benchmark/mteb/blob/main/mteb/evaluation/evaluators/RerankingEvaluator.py#L94)), [`SentenceTransformer.encode`](https://github.com/UKPLab/sentence-transformers/blob/master/sentence_transformers/SentenceTransformer.py#L195) is called with `convert_to_tensor=True`. My embedding...
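The memory math in question can be sketched like this (hypothetical helper and numbers, not the evaluator's code): with `convert_to_tensor=True`, the embeddings of every batch are kept resident in one tensor, so the footprint scales with the whole document set rather than with the batch size.

```python
def embedding_mem_gib(num_texts, dim, bytes_per_val=4):
    """GiB held by embeddings kept as one tensor (fp32 by default)."""
    return num_texts * dim * bytes_per_val / 2**30

# e.g. 1M texts x 1024-dim fp32 vectors, all allocated at once:
print(round(embedding_mem_gib(1_000_000, 1024), 2))  # 3.81
```

Lowering the inference batch size only bounds the per-batch activation memory; the accumulated embedding tensor stays the same size either way.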