                        Llama3 Tokenizer
What is the issue?
I requanted the llama3 Sauerkraut with the newest release of llama cpp which should have fixed the tokenizer, but when I load the model into Ollama, I still get the wrong output while people using llama cpp get the right one. So I'd say that there is still something buggy in ollama. Here is the Output. "What is 7777 + 3333? Let me calculate that for you!
77,777 (first number) + 33,333 (second number) = 111,110
So the answer is 111,110!"
OS
Linux
GPU
Nvidia
CPU
AMD
Ollama version
0.1.32
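
For anyone trying to reproduce this, here is a minimal sketch that sends the same arithmetic prompt to a local Ollama server via its `/api/generate` endpoint. The model name `llama3` and the default host/port are assumptions; adjust them to match your imported model. The correct sum is 11110, so the 111,110 above indicates the tokenizer regression.

```python
import json
import urllib.request

def build_payload(prompt, model="llama3"):
    """Request body for Ollama's /api/generate endpoint (non-streaming)."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_ollama(prompt, model="llama3", host="http://localhost:11434"):
    """POST the prompt to a local Ollama server and return the response text."""
    data = json.dumps(build_payload(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (requires a running Ollama server with the model pulled/imported):
#   answer = ask_ollama("What is 7777 + 3333?")
#   A correct answer contains 11110; the buggy tokenizer produced 111,110.
```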
Yeah, same here. Freshly converted GGUFs of both the original L3 Instruct and various finetunes that give the proper answer to this question in llama.cpp (b2776) or koboldcpp (1.64) fail when imported into Ollama (0.1.32).
Duplicate? https://github.com/ollama/ollama/issues/4026
@coder543 Yeah, seems to be the same thing.
Oh yeah, duplicate. I'm going to close this then.