
Add a byte pair encoding (BPE) tokenizer layer

mattdangerw opened this issue · 14 comments

We would like to add a BPE tokenizer (used by gpt-2, roberta and others). This ideally should be configurable to be compatible with the actual tokenization used by gpt-2 and roberta, and run inside a tensorflow graph.

mattdangerw avatar Mar 16 '22 18:03 mattdangerw

Nice! Should I draw up a rough implementation and share a Colab notebook?

abheesht17 avatar Mar 16 '22 18:03 abheesht17

This is a pretty important feature, as it will unlock some important models and is widely used.

However, there are some technical roadblocks here currently. We would like to keep our tokenizer running inside the tensorflow graph using tensorflow ops, and currently the tokenization ops are all provided by tf-text.

There is not a BPE tokenizer offered by tf text, but in theory SentencePiece should be configurable in a way that is compatible. See https://github.com/tensorflow/text/issues/763

The first thing to do would be to see if that is possible. Try configuring the SentencePiece tokenizer from tf text and see whether it can actually be made compatible with the gpt-2 and roberta tokenizers (testing against the huggingface tokenizers is probably the simplest way to do this). A colab showing compatibility would "unblock" this work, and if it's not possible currently we may have to apply some fixes to tf-text and sentencepiece.

From there we could produce a design that would essentially hide the complexity of sentence piece under the hood. We would need to think about the vocab format we provide (a vocab and merges file?).
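
To make the check concrete, a minimal sketch of the comparison might look like the following. The `sp_model.bin` file here is a hypothetical SentencePiece model proto that has somehow been configured to mimic gpt-2, which is exactly the open question:

```python
# Rough compatibility check: compare huggingface gpt-2 output against a
# candidate tf text SentencepieceTokenizer built from a (hypothetical)
# gpt-2-compatible model proto.
import tensorflow_text as tf_text
from transformers import GPT2Tokenizer

samples = ["The quick brown fox.", "Tokenization is tricky!"]

# Reference output from the Hugging Face implementation.
hf_tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
expected = [hf_tokenizer.encode(s) for s in samples]

# Candidate in-graph tokenizer built on tf text.
with open("sp_model.bin", "rb") as f:  # placeholder proto file
    sp_tokenizer = tf_text.SentencepieceTokenizer(model=f.read())
actual = sp_tokenizer.tokenize(samples).to_list()

print(expected == actual)  # True only if the proto is truly compatible.
```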

mattdangerw avatar Mar 16 '22 18:03 mattdangerw

@abheesht17 you are definitely welcome to help with this! This will require some diving into other libraries, to understand the support we have today.

mattdangerw avatar Mar 16 '22 19:03 mattdangerw

Great, will do 👍🏼

abheesht17 avatar Mar 17 '22 02:03 abheesht17

Hey, @mattdangerw. I went through this issue. So, essentially, this is what you want me to do:

  1. Use the SentencePiece library, and configure it so as to train a byte-level BPE tokeniser. Use a small text corpus for training.
  2. Use the .model file obtained after training and pass it to TensorFlow Text's SentencePiece tokeniser class.
  3. Now, use the same corpus to train Hugging Face's GPT-2 tokeniser, check whether the vocabulary obtained is similar, and compare the outputs on a few input samples.

Is this correct?
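
For steps 1 and 2, I had something like this in mind (just a rough sketch; the corpus and file names are placeholders):

```python
# Step 1: train a BPE SentencePiece model on a small corpus.
# Step 2: load the resulting .model file into TF Text.
import sentencepiece as spm
import tensorflow_text as tf_text

spm.SentencePieceTrainer.train(
    input="corpus.txt",    # small plain-text training corpus (placeholder)
    model_prefix="bpe",    # writes bpe.model and bpe.vocab
    vocab_size=5000,
    model_type="bpe",
    byte_fallback=True,    # not the same as gpt-2's byte-level BPE, but the closest trainer flag
)

with open("bpe.model", "rb") as f:
    tokenizer = tf_text.SentencepieceTokenizer(model=f.read())
print(tokenizer.tokenize(["hello world"]))
```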

abheesht17 avatar Mar 17 '22 16:03 abheesht17

I'm not sure we need to actually train a sentence piece model, though that might help understand things.

Basically, the public API we can rely on that might give us the op support we need is tf text's SentencepieceTokenizer, but that takes a sentence piece model proto as input.

End users will probably want to use this layer with the "vocab json" and "merges txt" files provided by the official gpt/roberta githubs or huggingface. We can keep thinking about the file format we would want, but asking end users to construct a sentence piece model is probably a non-starter.

So, the question we could try to answer is whether we can manually construct a sentence piece model proto from the gpt vocab and merges files in a way that's compatible. If so, we could build this layer on top of the existing tf text API, and not rule out more direct support from tf text in the future. If not, we will need to go back to the drawing board a little bit and figure out how to get op-level support here.

So putting that into a list:

  1. Start with the vocab and merges files for, say, gpt2.
  2. Generate some correct output for some sample text (probably easiest to use huggingface here? could also try using the tokenizer impl from gpt2 github)
  3. Try building a tf text SentencepieceTokenizer from those files that matches the real tokenizer output.

It may turn out we are more blocked here than we think from https://github.com/tensorflow/text/issues/763, but this would be the way to find out.
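
A sketch of what steps 1 and 2 could look like; the vocab.json / merges.txt paths are placeholders for the files released with the gpt2 checkpoints, and step 3 is the part that still needs to be figured out:

```python
# Steps 1-2: load the released gpt2 vocab/merges files into the reference
# Hugging Face tokenizer and record the expected output for some samples.
from transformers import GPT2Tokenizer

reference = GPT2Tokenizer(vocab_file="vocab.json", merges_file="merges.txt")

samples = ["The quick brown fox.", "why is tokenization so hard?"]
expected_tokens = [reference.tokenize(s) for s in samples]
expected_ids = [reference.encode(s) for s in samples]
print(expected_tokens)
print(expected_ids)
```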

mattdangerw avatar Mar 18 '22 01:03 mattdangerw

Ah, understood. Thanks for clarifying!

abheesht17 avatar Mar 18 '22 05:03 abheesht17

Some useful articles about how Hugging Face tokenises the input text (given vocab.json and merges.txt):

https://github.com/huggingface/transformers/issues/1083#issuecomment-524303077 https://github.com/huggingface/transformers/issues/4777

  1. Tokenise text using merges.txt
  2. Map the tokens to indices using vocab.json
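
Roughly, the two steps correspond to `tokenize` and `convert_tokens_to_ids` in the Hugging Face API:

```python
from transformers import GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
tokens = tok.tokenize("Hello world")     # step 1: apply BPE merges (merges.txt)
ids = tok.convert_tokens_to_ids(tokens)  # step 2: look up ids (vocab.json)
print(tokens, ids)
```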

abheesht17 avatar Mar 20 '22 07:03 abheesht17

Hey, @mattdangerw. Sorry for the delay, forgot about it. I opened an issue on the SentencePiece repository: https://github.com/google/sentencepiece/issues/739. The author of the repo mentions this: "manual model modification/creation is totally unsupported."

However, it looks like we may be able to add tokens from the vocab to the pieces attribute. I don't think they have Python wrappers/APIs for adding "pieces". However, they do have a function in C++, AddPieces. See this unit test: https://github.com/google/sentencepiece/blob/bc53923a9147dc8ffa54034c8ed774de78cc4d39/src/bpe_model_test.cc#L52. I'll try to use this function, and reproduce the output we get using HF. Give me a day or two.
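
For reference, here is a rough sketch of what poking at the pieces could look like from Python via the protobuf definition shipped with the sentencepiece package. Given the maintainer's comment, this should be treated as an experiment rather than a supported path, and the file names are placeholders:

```python
# Hedged sketch: load an existing model proto, append a piece from the
# gpt-2 vocab, and serialize it back out. Score handling and ordering are
# the tricky, unsupported parts.
from sentencepiece import sentencepiece_model_pb2 as sp_model_pb2

model = sp_model_pb2.ModelProto()
with open("bpe.model", "rb") as f:  # any existing trained model proto
    model.ParseFromString(f.read())

piece = model.pieces.add()
piece.piece = "Ġhello"  # gpt-2-style token with the space marker
piece.score = 0.0
piece.type = sp_model_pb2.ModelProto.SentencePiece.NORMAL

with open("bpe_modified.model", "wb") as f:
    f.write(model.SerializeToString())
```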

abheesht17 avatar Mar 30 '22 04:03 abheesht17

Hi All,

Just curious if anyone has found any sort of workaround for this issue. My conclusion after reading the related issues is that it's not currently possible to incorporate popular BPE tokenizers (roberta/GPT2) within tensorflow-text pipelines?

aleemkhan62 avatar Jul 03 '22 16:07 aleemkhan62

@aleemkhan62 Currently you can use BPE via tf_text.SentencePieceTokenizer only if you have a pretrained model proto. We are looking into a better solution! Please stay tuned, thanks!

chenmoneygithub avatar Jul 09 '22 18:07 chenmoneygithub

To add a little more color for others finding this issue, you can train a BPE-style vocabulary with sentencepiece today, and a sentencepiece model can be used with tensorflow text, or with the SentencePieceTokenizer in this library. However, that might not have exactly the same behavior as roberta/gpt2 tokenization.

We are currently working on a way to support the actual vocabulary files used by roberta/gpt2 (merges.txt and vocab.json), with exactly equivalent tokenization, running inside the tf graph.
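
In the meantime, a minimal sketch of that interim workaround with the SentencePieceTokenizer already in this library (assuming a `bpe.model` proto trained with `model_type="bpe"`, as in the earlier comments):

```python
import keras_nlp

# Not byte-for-byte identical to gpt2/roberta tokenization, but runs in-graph.
tokenizer = keras_nlp.tokenizers.SentencePieceTokenizer(proto="bpe.model")
print(tokenizer(["a quick brown fox"]))
```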

mattdangerw avatar Jul 19 '22 17:07 mattdangerw

Any updates here?

piEsposito avatar Oct 14 '22 16:10 piEsposito

Any updates here?

https://github.com/keras-team/keras-nlp/pull/303

abheesht17 avatar Oct 14 '22 17:10 abheesht17

Closing this! We have an implementation released -> https://keras.io/api/keras_nlp/tokenizers/byte_pair_tokenizer/

If anyone encounters issues with the tokenizer, please file a bug!
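
For anyone landing here, usage looks roughly like this; the vocab.json / merges.txt paths are placeholders for the files released with gpt2/roberta checkpoints:

```python
import keras_nlp

tokenizer = keras_nlp.tokenizers.BytePairTokenizer(
    vocabulary="vocab.json",  # token -> id mapping
    merges="merges.txt",      # BPE merge rules
)
print(tokenizer(["a quick brown fox."]))
```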

mattdangerw avatar Jan 07 '23 00:01 mattdangerw