stevewyl
I changed the embedding file to glove.6B.50d and increased the batch size. However, I still got an out-of-GPU-memory (OOM) error; how can I fix this problem on device...
@cahya-wirawan Thanks for your hints! Finally, I successfully ran the code on a GTX 1080 8GB with a small batch size and a small percentage of dev_samples. So I'd like to ask that...
@cahya-wirawan In the NB and SVM experiments, I found that you used all the 20newsgroups data, so I tried to use the same data as you used in the CNN experiment, the...
@cahya-wirawan You're right! I tried 6-category classification and found that CNN with word embeddings starts to outperform SVM. Thanks for your immediate reply! : )
I'm also trying to export the un-fine-tuned BERT model as an online service. I followed the official instructions [SavedModel](https://www.tensorflow.org/guide/saved_model#using_savedmodel_with_estimators) and successfully exported the fine-tuned model. But when I try to...
@apurvaasf Hi! Is there any possibility of just exporting the original BERT model as a SavedModel? Or can we do a fake training process to generate the 'checkpoint' file, which is missing...
@apurvaasf I found the easiest way to export the original BERT model to a SavedModel. ```python # load the checkpoint from bert # create an estimator which contains the original bert model and...
That doesn't feel like it achieves the goal of making the model lightweight; the model parameters are just saved in FP16 precision, so only the model file gets smaller.
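To illustrate the point above, here is a minimal numpy sketch (the array shape is arbitrary, standing in for a model's weights): casting parameters from FP32 to FP16 halves their on-disk/in-memory size, but by itself changes nothing about the model's architecture or compute.

```python
import numpy as np

# Hypothetical weight matrix standing in for model parameters
weights_fp32 = np.random.rand(1000, 1000).astype(np.float32)

# Save in half precision: same values (approximately), half the bytes
weights_fp16 = weights_fp32.astype(np.float16)

print(weights_fp32.nbytes)  # 4000000
print(weights_fp16.nbytes)  # 2000000
```

This is why FP16 storage alone shrinks the checkpoint file without making inference meaningfully lighter unless the runtime actually computes in half precision.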
I wonder whether the location of the dictionary file matters? I entered the code below, but it still doesn't return the result I want, even though the word I want is already in the dictionary file: thu1 = thulac.thulac(user_dict="D:/python/text_preprocessing/dict.txt") thu1.cut('我爱深度学习和机器学习', text=True) Out[14]: '我_r 爱_v 深度_n 学习_v 和_c 机器_n 学习_v' I have no idea where it went wrong? = =
I also encountered this situation. But I am using a CNN to do sequence labeling, so I cannot set mask_zero=True in Keras. When using an RNN, the CRF loss is about 5-6, but when...