DeepSeek-Coder-V2
Ollama model is configured incorrectly: double BOS
Whenever I run deepseek-coder-v2:latest through Ollama, the following error appears in the log for each prompt:
llm_tokenizer_bpe::check_double_bos_eos: Added a BOS token to the prompt as specified by the model but the prompt also starts with a BOS token. So now the final prompt starts with 2 BOS tokens. Are you sure this is what you want?
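For anyone who wants to reproduce this outside of Ollama, here is a minimal sketch using llama-cpp-python against the same GGUF file. The model path and the prompt text are placeholders, not values taken from this thread:

```python
# Hedged sketch: reproduce the double-BOS condition with llama-cpp-python
# (pip install llama-cpp-python). The model path is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="deepseek-coder-v2.gguf", vocab_only=True)

# The chat template already prepends the BOS text...
prompt = "<|begin▁of▁sentence|>User: hello\n\nAssistant:"

# ...and tokenize() adds another BOS id on top because the GGUF metadata
# asks for it (tokenizer.ggml.add_bos_token = true). special=True makes
# <|begin▁of▁sentence|> parse as a special token instead of literal text.
tokens = llm.tokenize(prompt.encode("utf-8"), add_bos=True, special=True)

bos = llm.token_bos()
if len(tokens) >= 2 and tokens[0] == bos == tokens[1]:
    print("Prompt starts with 2 BOS tokens, matching the warning above.")
```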
I believe this is because tokenizer.ggml.add_bos_token is set to true while the prompt template already starts with a <|begin▁of▁sentence|> token, so the BOS gets added twice.
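To check whether that flag is actually set in your local GGUF file, something like the following should work with the gguf package that ships with llama.cpp. The path is a placeholder, and the exact field layout may differ between gguf-py versions:

```python
# Hedged sketch: read tokenizer.ggml.add_bos_token from the GGUF metadata
# using the gguf package (pip install gguf). Path is a placeholder.
from gguf import GGUFReader

reader = GGUFReader("deepseek-coder-v2.gguf")
field = reader.fields.get("tokenizer.ggml.add_bos_token")
if field is None:
    print("Flag not present in this file.")
else:
    # For a scalar field, the last element of field.parts holds the raw value.
    print("tokenizer.ggml.add_bos_token =", bool(field.parts[-1][0]))
```

If it reports true, one workaround consistent with the diagnosis above would be to remove the leading <|begin▁of▁sentence|> from the template and let the flag add the BOS, rather than having both.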
I'm not sure how much this affects the model's tendency to hallucinate.
Same issue for me, any update?
Same issue for me.