Retrieval_Head
Question about using model "llama-2-7b-80k"
https://github.com/nightdessert/Retrieval_Head/blob/3ac171a6f71ce7ef1cda57d4215c390fb6ab51f2/retrieval_head_detection.py#L188
For the "llama-2-7b-80k" model, why do we need to reset the RoPE parameters? It seems that config.json already includes the scale factor.
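For context, here is a minimal sketch of what a linear RoPE scale factor does (this is illustrative and not taken from the linked script; the head dimension and factor of 20 are assumptions for the example). Linear scaling ("position interpolation") divides the position index by the scale factor, so long-context positions map back into the range the base model was trained on:

```python
# Illustrative sketch of linear RoPE scaling ("position interpolation").
# Dimension (8) and scale factor (20.0) are assumptions for the example.

def rope_inv_freq(dim: int, base: float = 10000.0) -> list[float]:
    """Inverse frequencies used by rotary position embeddings."""
    return [1.0 / (base ** (2 * i / dim)) for i in range(dim // 2)]

def rope_angles(pos: int, inv_freq: list[float], scale_factor: float = 1.0) -> list[float]:
    """Rotation angles for a position; linear scaling divides the position
    index by scale_factor before multiplying by each inverse frequency."""
    return [(pos / scale_factor) * f for f in inv_freq]

inv_freq = rope_inv_freq(8)

# With scale_factor=20, position 80000 produces the same rotation angles
# as position 4000 in the unscaled model, keeping long-context positions
# inside the range the base model saw during pretraining.
assert rope_angles(80000, inv_freq, scale_factor=20.0) == rope_angles(4000, inv_freq)
```

If the loader honors the `rope_scaling` entry in config.json, re-applying the same factor in the script would be redundant, which is presumably the point of the question.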