rank_llm
Repository for prompt-decoding using LLMs (GPT-3.5, GPT-4, Vicuna, and Zephyr)
For example: maybe in a one-line function that could be used for both LRL and RankGPT prompts? Something like: `num_words = (context_size - num_output_tokens(current_window_size)) * ...`
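A minimal sketch of what such a shared helper could look like, assuming the elided factor (`* ...`) is a tokens-to-words conversion ratio; the function name, signature, and default ratio are all hypothetical:

```python
def num_words_budget(context_size: int, num_output_tokens: int,
                     words_per_token: float = 0.75) -> int:
    """Per-window word budget usable by both LRL and RankGPT prompts.

    `words_per_token` stands in for the factor elided in the issue
    ("* ...") and is only an illustrative guess.
    """
    return int((context_size - num_output_tokens) * words_per_token)


# e.g., a 4096-token context reserving 200 output tokens for the window
budget = num_words_budget(4096, num_output_tokens=200)
```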
Adding support for llama.cpp with quantized models. 8-bit model: https://huggingface.co/castorini/rank_vicuna_7b_v1_q8_0/ 4-bit model: https://huggingface.co/castorini/rank_vicuna_7b_v1_q4_0/
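A minimal sketch of loading one of these checkpoints with llama-cpp-python; the quantized file name inside the repo is a guess, so check the Hugging Face file listing for the actual artifact:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Hypothetical file name -- verify against the files in the HF repo.
model_path = hf_hub_download(
    repo_id="castorini/rank_vicuna_7b_v1_q8_0",
    filename="rank_vicuna_7b_v1.q8_0.gguf",
)

llm = Llama(model_path=model_path, n_ctx=4096)
output = llm("Rank the following passages ...", max_tokens=128)
print(output["choices"][0]["text"])
```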
# Summary of changes

- Bug fix in `extract_kwargs`
I tried to install rank_llm using `pip install rank_llm` in my AWS workspace (Windows), but it throws the error below. I tried again after installing `wheel` and `nmslib`, but got the same error.

```
Building...
```
# Pull Request Checklist

## Reference Issue

Please provide the reference to the issue this PR is addressing (# followed by the issue number). If there is no associated issue, write...
Hey, thanks for providing this wonderful library. I am using the GPT-4o model as a reranker, with BM25 as the first-stage retriever (taking the top 100 results). My code breaks...
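For context, a minimal sketch of the BM25 first stage described here, assuming Pyserini with a prebuilt MS MARCO index; the hand-off to the GPT-4o reranker is left schematic:

```python
from pyserini.search.lucene import LuceneSearcher

searcher = LuceneSearcher.from_prebuilt_index("msmarco-v1-passage")
hits = searcher.search("what is a lobster roll", k=100)  # top-100 candidates

# (docid, BM25 score) pairs that would then be passed to the
# GPT-4o listwise reranker.
candidates = [(hit.docid, hit.score) for hit in hits]
```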
# Pull Request Checklist

## Reference Issue

This is a superset of issue [Top Down](https://github.com/castorini/ura-projects/issues/46). This PR reorganizes the various reordering methods, including sliding window and top-down, as well as...
## Summary

Prompt abstraction:
- Renamed `PromptMode` to the `Prompt` enum
- Added `prefix()` and `suffix()` methods for each enum member, where the prompt strings can be found
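A minimal sketch of the described abstraction; the member names and prompt strings below are placeholders, not the ones introduced in the PR:

```python
from enum import Enum


class Prompt(Enum):
    RANK_GPT = "rank_gpt"
    LRL = "lrl"

    def prefix(self) -> str:
        # Text placed before the candidate passages (placeholder strings).
        return {
            Prompt.RANK_GPT: "I will provide you with passages ...",
            Prompt.LRL: "Sort the following passages ...",
        }[self]

    def suffix(self) -> str:
        # Text placed after the candidate passages (placeholder strings).
        return {
            Prompt.RANK_GPT: "Rank the passages based on relevance ...",
            Prompt.LRL: "... output only the ranking.",
        }[self]
```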
# Summary of changes

- Introduce pairwise rerankers with the `pairwise_rankllm` ABC.
- `DuoT5` concrete pairwise reranker @IR3KT4FUNZ
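A minimal sketch of what a pairwise-reranker ABC with DuoT5-style score aggregation could look like; the class and method names are illustrative, not the actual `pairwise_rankllm` interface:

```python
from abc import ABC, abstractmethod
from itertools import permutations


class PairwiseRankLLM(ABC):
    @abstractmethod
    def compare(self, query: str, doc_a: str, doc_b: str) -> float:
        """Return a score in [0, 1]: probability that doc_a beats doc_b."""

    def rerank(self, query: str, docs: list[str]) -> list[str]:
        # DuoT5-style aggregation: sum each document's win probabilities
        # over all ordered pairs, then sort by the aggregate score.
        scores = {doc: 0.0 for doc in docs}
        for a, b in permutations(docs, 2):
            scores[a] += self.compare(query, a, b)
        return sorted(docs, key=scores.get, reverse=True)
```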
I wanted to see how you implemented the loss function and back-propagated against it, but I can't find it in the code. Is the training code available to look at?