Weitang Liu

Results 65 comments of Weitang Liu

@woyijkl1 Are you finetuning the language model? First run prepare_lm_data_ngram.py to generate the data, then load the model in run_pretraining.py; at line 214:

```python
if args.model_path:
    model = AlbertForPreTraining.from_pretrained(args.model_path)
```

See the code for details; a few simple adjustments for your own data should be enough.

@ChineseYjh Usage is similar to BERT: just replace the model files and load the corresponding pretrained model.

@Porcupine96 Adding `tf_path = tf_path + "/variables/variables"` converts the `tf_hub` weights to `pytorch` weights.

@yzgdjqwh The blog post explains it, and you can find the corresponding code there: https://lonepatient.top/2019/10/20/ALBERT.html

@yzgdjqwh In prepare_lm_data_ngram.py.

@DunZhang How large is the gap in your results? I usually run experiments with a batch size of 16 or 32 and have never seen the discrepancy you describe.

@DeepakDhana SOP code:

```python
if random.random() < 0.5:  # swap tokens_a and tokens_b
    is_random_next = True
    temp = tokens_a
    tokens_a = tokens_b
    tokens_b = temp
else:
    is_random_next = False
```
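The swap above can be wrapped in a small helper to make the pairing between the segment swap and the sentence-order-prediction label explicit. This is a minimal sketch; the function name `create_sop_instance` and the injectable `rng` parameter are my additions, not code from the repo:

```python
import random

def create_sop_instance(tokens_a, tokens_b, rng=random):
    """With probability 0.5, swap the two segments and mark the
    pair as out of order (is_random_next=True), as in ALBERT's SOP task."""
    if rng.random() < 0.5:
        tokens_a, tokens_b = tokens_b, tokens_a  # swap segments
        is_random_next = True
    else:
        is_random_next = False
    return tokens_a, tokens_b, is_random_next
```

Passing a seeded `random.Random` makes the instance creation reproducible for testing.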

@YuxiangLu You mean the case where step is smaller than gradient_accumulation_steps? From a quick look, training and eval do run at the same time; it should be:

```python
if args.local_rank in [-1, 0] and args.logging_steps > 0 and global_step % args.logging_steps == 0:
    # Log metrics
    if args.local_rank == -1:  # Only evaluate when single...
```
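The interaction between gradient accumulation and the logging condition can be sketched with a plain-Python simulation of the schedule (the function and its names are mine, not the repo's code): `global_step` only advances once per accumulation window, so logging/eval fires on optimizer steps, not on raw batches:

```python
def simulate_schedule(num_batches, gradient_accumulation_steps, logging_steps):
    """Simulate the training-loop schedule: the optimizer steps (and
    global_step advances) once every gradient_accumulation_steps batches,
    and metrics are logged every logging_steps optimizer steps."""
    global_step = 0
    logged_at = []
    for step in range(num_batches):
        # loss.backward() would run on every batch here
        if (step + 1) % gradient_accumulation_steps == 0:
            # optimizer.step(); scheduler.step(); model.zero_grad()
            global_step += 1
            if logging_steps > 0 and global_step % logging_steps == 0:
                logged_at.append(global_step)  # log metrics / evaluate
    return global_step, logged_at
```

For example, with 8 batches, accumulation of 2, and logging every 2 optimizer steps, evaluation runs after the 4th and 8th batches (optimizer steps 2 and 4), never inside an accumulation window.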

@xiao7462 hi, I updated README.md; you can now download the dataset.

@hi-wangyan hi, run `python train_word2vec.py`