llama.cpp
YaRN : correction to GPT-NeoX implementation
At one point I was struggling to understand what the Metal kernel was doing for GPT-NeoX RoPE, and I think I got it wrong. I got halfway there, and the comment makes it fairly obvious what is going on: the rotation index should be an integer and should not be multiplied by inv_ndims. inv_ndims should only enter into the computation of theta.
@jquesnelle does this seem like the right thing to do?
I learned from my mistakes: this change runs on ggml-ci, so I don't have to rely on error-prone manual testing across several machines.