
Repository for prompt-decoding using LLMs (GPT3.5, GPT4, Vicuna, and Zephyr)

31 rank_llm issues

# Pull Request Checklist ## Reference Issue Please provide the reference to the issue this PR is addressing (# followed by the issue number). If there is no associated issue, write...

Currently, we only support inference with a (single query, single subset of documents) pair, but technically we could batch over the query dimension pretty easily (see the sketch after this item); doing it over document subsets is...

enhancement
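
One way the query-dimension batching could look, as a minimal sketch: group (query, candidate subset) pairs and hand the model several prompts per call. The prompt builder and `generate_fn` below are illustrative stand-ins, not the actual rank_llm API.

```python
from typing import Callable, List, Tuple


def build_listwise_prompt(query: str, candidates: List[str]) -> str:
    """Toy listwise prompt: number the candidates and ask for an ordering
    (illustrative only, not the exact prompt the repo uses)."""
    numbered = "\n".join(f"[{i + 1}] {doc}" for i, doc in enumerate(candidates))
    return (
        "Rank the following passages by relevance to the query.\n"
        f"Query: {query}\n{numbered}\nRanking:"
    )


def rerank_batched(
    generate_fn: Callable[[List[str]], List[str]],
    requests: List[Tuple[str, List[str]]],
    batch_size: int = 8,
) -> List[str]:
    """Batch over the query dimension: send `batch_size` prompts per call
    instead of one (query, subset) pair at a time. `generate_fn` stands in
    for whatever batched generation the backend exposes."""
    outputs: List[str] = []
    for start in range(0, len(requests), batch_size):
        chunk = requests[start:start + batch_size]
        prompts = [build_listwise_prompt(q, docs) for q, docs in chunk]
        outputs.extend(generate_fn(prompts))  # one batched call per chunk of queries
    return outputs
```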

Provide *important* cached retrieval results, as well as rerank results, hosted elsewhere but documented here. I can perhaps do this sometime.

enhancement
help appreciated :D

2CR pages for RankZephyr/RankVicuna on MS MARCO v1/v2 to begin with, like Pyserini's - https://castorini.github.io/pyserini/2cr/msmarco-v1-passage.html

documentation
good first issue

# Pull Request Checklist ## Reference Issue Please provide the reference to the issue this PR is addressing (# followed by the issue number). If there is no associated issue, write...

It's very easy to add setwise support with our models; maybe an easy add for @jasper-xian after conference deadlines! (A rough sketch of the idea follows.)
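
For reference, a minimal sketch of what a setwise prompt might look like (the wording and helper names are assumptions, not the repo's actual prompts): show a small set of passages and ask the model to pick the single most relevant one, instead of scoring one passage at a time or ordering the whole list.

```python
from typing import Callable, List

LABELS = "ABCDEFGH"


def build_setwise_prompt(query: str, passages: List[str]) -> str:
    """Setwise prompt (illustrative): label a handful of passages and ask
    for the single most relevant one."""
    assert len(passages) <= len(LABELS)
    body = "\n".join(f"Passage {LABELS[i]}: {p}" for i, p in enumerate(passages))
    return (
        f"Query: {query}\n\n{body}\n\n"
        "Which passage is most relevant to the query? Answer with only the passage label."
    )


def pick_best(generate_fn: Callable[[str], str], query: str, passages: List[str]) -> int:
    """Return the index of the model's pick; repeated picks over shrinking
    sets give a simple selection-sort style setwise reranking."""
    answer = generate_fn(build_setwise_prompt(query, passages)).strip()
    return LABELS.index(answer[0]) if answer and answer[0] in LABELS else 0
```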

Currently, I think we need the exact top-$k$ file cached, but if, say, you have the top-100 file cached, you shouldn't have to redo retrieval for top-20 reranking; that is an unnecessary step (see the sketch after this item).

enhancement
good first issue
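
One way to avoid the redundant retrieval, sketched under the assumption that the cached file is a standard TREC-format run (`qid Q0 docid rank score tag`): load the larger cached run and truncate it per query to the depth the reranker needs.

```python
from collections import defaultdict
from typing import Dict, List, Tuple


def truncate_run(run_path: str, k: int) -> Dict[str, List[Tuple[str, float]]]:
    """Keep only the top-k hits per query from a cached run, so a cached
    top-100 file can serve a top-20 rerank without redoing retrieval."""
    run: Dict[str, List[Tuple[str, float]]] = defaultdict(list)
    with open(run_path) as f:
        for line in f:
            qid, _, docid, _, score, _ = line.split()
            run[qid].append((docid, float(score)))
    return {
        qid: sorted(hits, key=lambda hit: hit[1], reverse=True)[:k]
        for qid, hits in run.items()
    }


# e.g. reuse a cached top-100 BM25 run for top-20 reranking:
# top20 = truncate_run("runs/bm25_top100.txt", k=20)
```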

Currently we add the following snippet to every script that we want to run both from a cloned repo and from a package installation. Ideally we should find a cleaner/simpler way...
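
The snippet itself is cut off in this preview; the usual pattern for this kind of dual-mode script looks roughly like the following (illustrative only, not necessarily the exact snippet used). A cleaner alternative is often an editable install (`pip install -e .`), which makes the path manipulation unnecessary.

```python
# Illustrative only: make `import rank_llm` resolve against the checkout when
# the script is run from a cloned repo rather than an installed package.
import os
import sys

SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
sys.path.append(os.path.dirname(SCRIPT_DIR))
```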

Can we say that [1] > ... > [20] is always the same number of tokens as some random ordering? My hunch is yes, and I sure hope so... A quick tokenizer check is sketched below.
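
One way to check the hunch, assuming a Hugging Face tokenizer is available (the checkpoint name below is only an example): tokenize the identity ordering and a few random permutations and compare lengths.

```python
import random

from transformers import AutoTokenizer

# Any tokenizer works for this sanity check; the checkpoint is just an example.
tok = AutoTokenizer.from_pretrained("lmsys/vicuna-7b-v1.5")

identity = " > ".join(f"[{i}]" for i in range(1, 21))
for _ in range(5):
    shuffled = " > ".join(f"[{i}]" for i in random.sample(range(1, 21), 20))
    print(len(tok.encode(identity)), len(tok.encode(shuffled)))
# If the counts agree across many permutations, the ordering does not change
# the token budget of the expected output.
```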