
Decoding Issue for Latin Characters in `added_tokens`

Open · 44670 opened this issue 2 years ago · 1 comment

Hello,

I'm encountering a decoding issue in the tokenizers library, specifically with certain Latin characters included in `added_tokens`. I observed it when using the DeepSeek-coder model, whose tokenizer contains the following token definition:

"added_tokens": [
  {
    "id": 32000,
    "content": "õ",
    "single_word": false,
    "lstrip": false,
    "rstrip": false,
    "normalized": true,
    "special": false
  }......
]
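
For reference, here is a minimal sketch of how such a tokenizer can be loaded with transformers' AutoTokenizer; the checkpoint name below is just an example and not necessarily the one I used:

from transformers import AutoTokenizer

# Example setup: any DeepSeek-coder variant whose tokenizer config
# contains the added_tokens entry above should reproduce the behavior.
tok = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-6.7b-base")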

While encoding this character works as expected, the decoding process does not produce the correct result. Here's an example illustrating the issue:

tok.encode('õ', add_special_tokens=False)
# Output: [32000] -- this is correct

tok.decode([32000])
# Output: '�' -- this is incorrect

Decoding token ID 32000 should return 'õ', but it instead returns the Unicode replacement character. The issue appears to be specific to the decoding step.
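
As far as I can tell, this looks like what happens when a token string goes through a GPT-2-style byte-level decoder: each character of the token is mapped back to a raw byte, and the byte sequence is then decoded as UTF-8. For 'õ' (U+00F5) that mapping yields the single byte 0xF5, which is not valid UTF-8 on its own. The snippet below is only a sketch of that suspected mechanism, not code from the tokenizers library:

# Suspected mechanism (assumption: a ByteLevel decoder maps the added
# token's content 'õ' back to the single raw byte 0xF5 before UTF-8 decoding).
raw = bytes([0xF5])
print(raw.decode("utf-8", errors="replace"))  # prints '�': 0xF5 alone is not valid UTF-8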

Could you please investigate this problem? Any assistance in resolving this would be greatly appreciated.
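
In the meantime, would reading the stored token string directly be a reasonable interim workaround? This is only a hypothetical check on my side, assuming the transformers convert_ids_to_tokens API:

# Hypothetical check: convert_ids_to_tokens returns the stored token
# string without going through the byte-level decode step.
tok.convert_ids_to_tokens([32000])
# Expected (under this assumption): ['õ']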

Thank you for your help.

— 44670, Jan 04 '24

Hi @44670, thanks for your interest in DeepSeek models. The problem is explained in #1392. This issue cannot be resolved for the time being; we will update our tokenizer in subsequent model releases.

— DOGEwbx, Jan 22 '24

This issue is stale because it has been open for 30 days with no activity. Remove the stale label or comment, or this will be closed in 5 days.

— github-actions[bot], Feb 22 '24