Allen Lee

7 comments by Allen Lee

Hi @staticgsm, could you check whether the code below works? It looks like the `qa_train` variable is never defined. ``` qa_train = QuestionAnswerDataset(train, tokenizer, negative_sampling=True) ```
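`QuestionAnswerDataset` comes from the project's own code and isn't shown here, so this is only a hedged sketch of what a `negative_sampling=True` option typically does for QA pairs: each question keeps its true answer as a positive example and is also paired with a randomly drawn answer from a different pair as a negative. The helper name and label convention are hypothetical.

```python
import random

def with_negative_samples(pairs, seed=0):
    """Given (question, answer) pairs, return labeled triples:
    (q, a, 1) for true pairs and (q, a', 0) for mismatched ones.
    Assumes at least two distinct answers exist in the data."""
    rng = random.Random(seed)
    answers = [a for _, a in pairs]
    positives = [(q, a, 1) for q, a in pairs]
    negatives = []
    for q, a in pairs:
        neg = a
        while neg == a:  # resample until the answer differs from the true one
            neg = rng.choice(answers)
        negatives.append((q, neg, 0))
    return positives + negatives
```

The 1:1 positive-to-negative ratio here is just the simplest choice; real implementations often sample several negatives per question.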

Hi @liuyaqiao, in general, increasing the batch size will increase GPU memory usage, but it can also affect training. If it has a good effect on training, I'd be thankful if you...

@JiyangZhang, please let me know which task you are referring to.

@davidniki02 I wrote a simple code sample showing how to save/load the model. After loading the model, you can predict/do classification as you did before. ``` # save model architecture model_json...
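The snippet above is truncated, but the usual Keras pattern it alludes to is the split between architecture and weights: `model.to_json()` / `model.save_weights(...)` on save, `model_from_json(...)` / `model.load_weights(...)` on load. A schematic stdlib stand-in for that split (plain dicts playing the role of the architecture and weights, and a toy linear `predict` in place of a real network):

```python
import json

def save_model(architecture, weights, prefix):
    # Architecture (layer config) goes to one JSON file, weights to a
    # separate file -- mirroring Keras's to_json()/save_weights() split.
    with open(prefix + ".json", "w") as f:
        json.dump(architecture, f)
    with open(prefix + ".weights.json", "w") as f:
        json.dump(weights, f)

def load_model(prefix):
    # Rebuild the model from the two files, in the same order:
    # architecture first, then weights.
    with open(prefix + ".json") as f:
        architecture = json.load(f)
    with open(prefix + ".weights.json") as f:
        weights = json.load(f)
    return architecture, weights

def predict(weights, x):
    # Toy stand-in for model.predict(): y = w*x + b
    return weights["w"] * x + weights["b"]
```

The point of the two-file split is that the cheap JSON architecture can be versioned and inspected separately from the large binary weights.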

@davidniki02 It depends on which tokenizer you use. For example, if you use `MosesTokenizer` (in `nltk.tokenize.moses`), you don't need to save/load the tokenizer. Just call the function,...
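The distinction being drawn is stateless vs. stateful tokenizers: a Moses-style tokenizer is a pure function of its input, so there is nothing to persist, while a vocabulary-building tokenizer (like the Keras one) learns a word-to-index map that must be saved and reloaded. A stdlib sketch of both kinds (the regex tokenizer and `VocabTokenizer` class are illustrative stand-ins, not the real NLTK/Keras implementations):

```python
import pickle
import re

def moses_like_tokenize(text):
    # Stateless: the output depends only on the input text, so there is
    # nothing to save/load -- just call the function again after a restart.
    return re.findall(r"\w+|[^\w\s]", text)

class VocabTokenizer:
    # Stateful: the word->index map is learned from the corpus, so the
    # fitted object must be persisted (e.g. pickled) and reloaded.
    def __init__(self):
        self.word_index = {}

    def fit(self, texts):
        for text in texts:
            for tok in moses_like_tokenize(text):
                self.word_index.setdefault(tok, len(self.word_index) + 1)

    def encode(self, text):
        # Unknown tokens map to 0.
        return [self.word_index.get(t, 0) for t in moses_like_tokenize(text)]
```

Pickling the fitted `VocabTokenizer` and unpickling it later reproduces the exact same encodings, which is what you would lose by re-fitting from scratch.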

Sorry for the delay in replying. The reason I used two tokenizers (_MosesTokenizer_ and the _Keras tokenizer_) is: - The actual tokenization is performed only by _MosesTokenizer_ in _tokenization.py_. The _Keras tokenizer_ in...

@davidniki02, In your code the tokenizer is initialized and fit every time. However, `fit_on_texts` should be called once on the entire corpus you have. (reference link: http://faroit.com/keras-docs/1.2.2/preprocessing/text/) - fit_on_texts(texts): - Arguments: -...
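The failure mode being described can be shown with a minimal mimic of Keras's `fit_on_texts`/`texts_to_sequences` (the real Keras `Tokenizer` assigns indices by word frequency; this stand-in uses first-seen order, which is enough to show the bug): if you re-initialize and fit the tokenizer per text, different words collide on the same index across texts, whereas fitting once on the whole corpus keeps indices consistent.

```python
class MiniTokenizer:
    # Minimal stand-in for keras.preprocessing.text.Tokenizer:
    # fit_on_texts builds the word->index map, texts_to_sequences uses it.
    def __init__(self):
        self.word_index = {}

    def fit_on_texts(self, texts):
        for text in texts:
            for word in text.lower().split():
                self.word_index.setdefault(word, len(self.word_index) + 1)

    def texts_to_sequences(self, texts):
        return [[self.word_index[w] for w in t.lower().split()
                 if w in self.word_index] for t in texts]

corpus = ["the cat sat", "the dog barked"]

# WRONG: re-initializing and fitting per text -- "cat" and "dog" both
# end up at index 2, so the two corpora share meaningless indices.
per_text = []
for text in corpus:
    t = MiniTokenizer()
    t.fit_on_texts([text])
    per_text.append(t.texts_to_sequences([text])[0])

# RIGHT: fit once on the whole corpus, then convert every text,
# so each word has a single stable index everywhere it appears.
tok = MiniTokenizer()
tok.fit_on_texts(corpus)
whole = tok.texts_to_sequences(corpus)
```

Here `per_text` gives `[1, 2, 3]` for both sentences even though they share only one word, while `whole` gives `[1, 2, 3]` and `[1, 4, 5]`, with "the" mapped to index 1 in both.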