WILL LEE

42 comments by WILL LEE

@782832949 Hello, let me answer your question this way. If you have read the AlphaGo Zero paper, they ran two experiments: one network with 20 blocks (40 conv layers) and one with 40 blocks. The conclusion was that the deeper network is better. So, given the properties of ResNet, deepening the network does bring benefits: here, the model can learn more accurate estimates. For this Gomoku project, on @junxiaosong's 8x8 board, I think the original net is actually too shallow and can be made deeper. If you enlarge the board further, I suggest adding more layers. PS: song's net is a plain stack of convs; I suggest changing it to a ResNet first, and only then considering more depth. So the bottom line: if you want a larger board, use residual blocks to deepen the network. Of course, the computation cost will go up a lot, and you will need better GPU resources for training :smile: have a nice try!
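To make that concrete, here is a minimal sketch of one residual block, assuming PyTorch; the class name `ResidualBlock`, the channel width of 128, and the block count of 10 are my own placeholder choices, not taken from song's repo:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """One residual block: two 3x3 convs plus a skip connection."""
    def __init__(self, channels=128):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        # The skip connection is what lets you stack many blocks
        # without the optimization problems of a plain conv stack.
        return F.relu(out + x)

# Deepening the trunk is then just stacking more blocks:
trunk = nn.Sequential(*[ResidualBlock(128) for _ in range(10)])
```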

Let me just quote Dr. Tian for you:

```
We recently improved the ELF framework and implemented DeepMind's AlphaGo Zero and AlphaZero algorithms on top of it. After training on about two thousand GPUs for two to three weeks, the resulting Go AI basically surpasses strong professional level.
```

So yes, training definitely needs many GPUs; note the exact words **two thousand GPUs for about two to three weeks**. Running the trained model, however, needs only 1 GPU: you can even run ELF OpenGo comfortably on a single GTX 1050 Ti. So of course, **running the trained model takes 1 GPU**.

According to [issue884](https://github.com/caffe2/caffe2/issues/884), I solved it by using the following code:

```python
from caffe2.python.predictor import mobile_exporter

def save_net(INIT_NET, PREDICT_NET, model):
    extra_params = []
    extra_blobs = []
    for blob in...
```
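The snippet above is cut off. For readers hitting the same truncation, the pattern from that issue, as I remember it, collects the BatchNorm running statistics (blobs ending in `_rm`/`_riv`), which are not in `model.params` and would otherwise be missing from the exported init net. A sketch under that assumption, not a verbatim copy of the issue:

```python
from caffe2.python import workspace
from caffe2.python.predictor import mobile_exporter

def save_net(INIT_NET, PREDICT_NET, model):
    # Collect BatchNorm running mean/variance blobs ("_rm"/"_riv"),
    # which mobile_exporter would otherwise leave out of INIT_NET.
    extra_params = []
    extra_blobs = []
    for blob in workspace.Blobs():
        name = str(blob)
        if name.endswith("_rm") or name.endswith("_riv"):
            extra_params.append(name)
            extra_blobs.append(workspace.FetchBlob(name))
    for name in extra_params:
        model.params.append(name)

    # Export the graph plus all params to two protobuf files.
    init_net, predict_net = mobile_exporter.Export(
        workspace, model.net, model.params
    )
    with open(PREDICT_NET, 'wb') as f:
        f.write(predict_net.SerializeToString())
    with open(INIT_NET, 'wb') as f:
        f.write(init_net.SerializeToString())
```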

```bash
python main.py -a resnet50 \
    --dist-url 'tcp://127.0.0.1:12306' \
    --dist-backend 'nccl' \
    --multiprocessing-distributed \
    --world-size 1 --rank 0 \
    --batch-size 1024 \
    --evaluate \
    --pretrained /data/ILSVRC
```

## Result (OLD):

```bash
...

You can try the ImageNet training example [[imagenet.py](https://github.com/BIGBALLON/distribuuuu/blob/master/tutorial/imagenet.py)].

### More

Please check the [tutorial](https://github.com/BIGBALLON/distribuuuu/blob/master/tutorial) folder for detailed Distributed Training tutorials (a minimal DDP skeleton in the same spirit is sketched after this list):

- Single Node Single GPU Card Training [[snsc.py](https://github.com/BIGBALLON/distribuuuu/blob/master/tutorial/snsc.py)]
- Single Node Multi-GPU Cards...
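For a taste of what those tutorials cover, here is a minimal single-node DDP skeleton; the `worker` function, the toy `nn.Linear` model, and the port 12306 are placeholders of mine, so see the linked scripts for the real versions:

```python
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

def worker(rank, world_size):
    # One process per GPU; NCCL backend for GPU-to-GPU communication.
    dist.init_process_group(
        "nccl", init_method="tcp://127.0.0.1:12306",
        rank=rank, world_size=world_size,
    )
    torch.cuda.set_device(rank)
    model = torch.nn.Linear(10, 10).cuda(rank)  # placeholder model
    model = DDP(model, device_ids=[rank])
    # ... build a DistributedSampler-backed DataLoader and train here ...
    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = torch.cuda.device_count()
    mp.spawn(worker, args=(world_size,), nprocs=world_size)
```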

I'm Chinese and I have the same question!

These are my training results:

```
airplane, 100, 1, 50, 1
Avg Run Time (ms/batch): 4.985  AUC: 0.975  max AUC: 0.975
Avg Run Time (ms/batch): 4.649  AUC: 0.989  max AUC:...
```

@tiandamiao @samet-akcay Is there anything wrong here? Could you give me some help?

@davids-zhou I shuffled the test data as well, and then the result was low.