
YaRN : correction to GPT-NeoX implementation

Open cebtenzzre opened this issue 7 months ago • 4 comments

At one point I was struggling to understand what the Metal kernel was doing for GPT-NeoX RoPE, and I think I got it wrong. I got halfway there; the comment makes it fairly obvious what is going on. But the rotation amount should be an integer and should not be multiplied by inv_ndims: inv_ndims should only appear inside theta.
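To make the distinction concrete, here is a minimal sketch (not the actual ggml/Metal code) of how NeoX-style RoPE angles can be computed. The names `neox_rope_angles`, `ic`, and `inv_ndims` follow the convention described above: the pair index `ic` stays an integer, and `inv_ndims` only enters the exponent of theta.

```python
def neox_rope_angles(n_dims, freq_base=10000.0, pos=1):
    """Per-pair rotation angles for NeoX-style RoPE (illustrative sketch).

    The rotation index ic is an integer; inv_ndims is folded only into
    theta = pos * freq_base**(inv_ndims * ic), never into ic itself.
    """
    inv_ndims = -1.0 / n_dims
    angles = []
    for i in range(n_dims // 2):
        ic = 2 * i  # integer rotation index, NOT scaled by inv_ndims
        theta = pos * freq_base ** (inv_ndims * ic)
        angles.append(theta)
    return angles
```

With this convention, the first pair (ic = 0) is rotated by exactly `pos`, and the angles decay geometrically across pairs, which matches the usual RoPE frequency schedule.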

@jquesnelle does this seem like the right thing to do?

I learned from my mistakes: this is running on ggml-ci, so I don't have to worry about error-prone manual testing across several machines.

cebtenzzre avatar Nov 15 '23 22:11 cebtenzzre