ehion

13 comments by ehion

> If you have enough GPU resources to try, please train with the code from the 'bug/lr-scheduler' branch, where I committed a fix: [834e651](https://github.com/KakaoBrain/fast-autoaugment/commit/834e65154a81b7d37a8b4a9ca95135a6d8922598).
>
> Due to the...

> I trained ImageNet using 32 GPUs via Horovod (4 nodes with 8 V100s each) but got 77.1% accuracy, much lower than the 78.6% reported in your paper, by running:
>
> python train.py -c...
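For reference, a 4-node × 8-GPU launch like the one described would usually go through `horovodrun`; this is only an illustrative sketch, and the host names and config path are placeholders rather than values from the repository:

```
horovodrun -np 32 -H node1:8,node2:8,node3:8,node4:8 \
    python train.py -c confs/<config>.yaml
```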

> @ehion Let me verify the code and get back to you (hopefully, next week).

Waiting for it, thanks.

@JoinWei-PKU I haven't, but I believe the results in the paper are true, because I have implemented a sample-based search method different from FastAutoAugment (network inference only once...

> > @JoinWei-PKU I haven't, but I believe the results in the paper are true, because I have reproduced a sample-based method similar to FastAutoAugment (network inference only...

I don't understand the sum function in your code; could you please explain it? Thanks.

```
def sum(self, layerin):
    layer1_size = self.res_size   # shape of the saved (earlier) layer
    layer2_size = layerin[1]      # shape of the incoming layer
    print(layer1_size[2], layer1_size[3])
    print(layer2_size[2], layer2_size[3])
    assert layer1_size[1] == layer2_size[1]  # channel counts must match
    assert layer1_size[2] == layer2_size[2] and layer1_size[3] == layer2_size[3]  # spatial dims must match
    inp_size = ...
```

```
mod = M.Model(img_holder, [None, 128, 128, 3])
mod.conv_layer(5, 96, activation=1)
mod.maxpooling_layer(2, 2)  # pool1
a = mod.get_current_layer()  # capture pool1's output; must not be commented out, or mod.sum(a) below raises NameError
mod.conv_layer(3, 96, activation=1)
mod.conv_layer(3, 96, activation=1)
# print(mod.get_shape())
mod.sum(a)
```
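Judging from the asserts, `sum` appears to implement a shape-checked elementwise (residual-style) addition of a saved layer with the current one. A minimal sketch of that idea, assuming NCHW-shaped tensors; `residual_sum` and the concrete shapes are illustrative, not from the original repo:

```
import numpy as np

def residual_sum(x, y):
    # Both feature maps must share channel and spatial dimensions,
    # mirroring the asserts in the snippet above.
    assert x.shape == y.shape
    return x + y

a = np.zeros((1, 96, 32, 32), dtype=np.float32)  # saved layer (e.g. pool1 output)
b = np.ones((1, 96, 32, 32), dtype=np.float32)   # current layer output
out = residual_sum(a, b)                         # same shape as the inputs
```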

Train a vocabulary with SentencePiece using BPE tokenization; when training finishes you get a *.model file. The original LLaMA vocabulary is tokenizer.model. Then write your own code to merge the two vocabularies via m.pieces.append(xxx), deduplicating along the way. The Chinese_llama provided by the author also ships a tokenizer.model file; you can dump it yourself and see that it is really just a simple append plus dedup (QAQ).
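A minimal sketch of that append-plus-dedup merge, assuming the protobuf bindings bundled with the `sentencepiece` package; the file names here are placeholders:

```
import sentencepiece as spm
from sentencepiece import sentencepiece_model_pb2 as sp_pb2_model

# Load the original LLaMA vocab and the newly trained BPE vocab.
llama_sp = spm.SentencePieceProcessor()
llama_sp.Load("tokenizer.model")
chinese_sp = spm.SentencePieceProcessor()
chinese_sp.Load("chinese_bpe.model")

llama_proto = sp_pb2_model.ModelProto()
llama_proto.ParseFromString(llama_sp.serialized_model_proto())
chinese_proto = sp_pb2_model.ModelProto()
chinese_proto.ParseFromString(chinese_sp.serialized_model_proto())

# Append every piece from the new vocab that LLaMA doesn't already have
# (the "append + dedup" step).
existing = {p.piece for p in llama_proto.pieces}
for p in chinese_proto.pieces:
    if p.piece not in existing:
        new_p = sp_pb2_model.ModelProto().SentencePiece()
        new_p.piece = p.piece
        new_p.score = 0
        llama_proto.pieces.append(new_p)

with open("merged_tokenizer.model", "wb") as f:
    f.write(llama_proto.SerializeToString())
```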

@cash-wei-plantern Could you please share a copy with me as well? Many thanks. [email protected]