GPT2
about encoder.json
How can I generate an encoder.json for my own dataset? I am confused about it. I got a vocab file using SentencePiece.
This repo is pretty sparse and I don't currently have plans to work on it further, so I don't have any kind of fancy support for custom encoders. If your encoder.json is in the same format as the one used by OpenAI, you can simply drop it in and train your model from scratch with it. You will have to encode your dataset using the new encoder.json, of course. You might also have to change the model's vocabulary size if yours differs from the default. Unfortunately I don't know whether SentencePiece produces the same format as what OpenAI uses. I think its BPE setting might, but I'm not sure.
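If you want to experiment anyway, a rough, untested sketch of dumping a SentencePiece vocab into an encoder.json-style token-to-id mapping might look like this (the model file name is just a placeholder, and the result is not guaranteed to be a drop-in replacement, since GPT-2 uses byte-level BPE plus a separate vocab.bpe merges file):

```python
# Untested sketch: write a SentencePiece vocabulary out as an
# encoder.json-style {token: id} mapping.
import json
import sentencepiece as spm

sp = spm.SentencePieceProcessor()
sp.Load("my_dataset.model")  # placeholder: your trained SentencePiece model

# Map every piece in the SentencePiece vocab to its integer id.
encoder = {sp.IdToPiece(i): i for i in range(sp.GetPieceSize())}

with open("encoder.json", "w", encoding="utf-8") as f:
    json.dump(encoder, f, ensure_ascii=False)

print("vocab size:", len(encoder))  # the model's vocab size should match this
```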
To explain roughly what's going on: The encoder.json gives pieces of input text unique numbers, basically a dictionary matching symbols/words to numbers. The model is trained on text encoded into those numbers, and spits out such numbers again which you translate back into text. The vocabulary size parameter of the model is just how many word to number entries there are in your encoder.json. You will have to retrain the model from scratch if you use a different encoder.json.
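To make that concrete, here is a rough sketch (not code from this repo) of treating encoder.json as the plain dictionary it is; note the real GPT-2 encoder also applies the merge rules from vocab.bpe before looking tokens up, so this only shows the mapping part:

```python
# Load encoder.json and show the token <-> id mapping and vocabulary size.
import json

with open("encoder.json", "r", encoding="utf-8") as f:
    encoder = json.load(f)          # {token string: integer id}
decoder = {v: k for k, v in encoder.items()}  # id -> token, for decoding

print("vocabulary size:", len(encoder))  # set the model's vocab size to this

# Toy round trip, only for tokens that happen to appear verbatim in the vocab:
ids = [encoder[tok] for tok in ["Hello", "world"] if tok in encoder]
print(ids, "->", [decoder[i] for i in ids])
```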
@fnyhy did you manage to build an encoder.json for your own dataset? If so, how? :) Thanks a lot in advance!
@mananeau No, I failed again. I generated an encoder.json and vocab file from my dataset, but the model reported an error when using them in the encode/decode process :(
@fnyhy thanks for your reply, and sorry to hear it didn't work. Did you have a look at this repo? It might be another way to train GPT-2 from scratch on your own data. For the vocab, they seem to use one similar to BERT's (WordPiece).