Liang Ding

Results: 19 comments of Liang Ding

I also improved the code to make it compatible with PyTorch 1.1 and to allow multi-GPU training for both the RNN and CNN experiments. You can refer to: https://github.com/alphadl/darts.pytorch1.1
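For the multi-GPU part, a minimal PyTorch sketch using `nn.DataParallel` is shown below. It only illustrates the general approach, not the actual code in the linked repository; `TinyNet`, the batch size, and the optimizer settings are placeholders.

```python
import torch
import torch.nn as nn

# Hypothetical small CNN standing in for the searched/evaluated network.
class TinyNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = TinyNet().to(device)

# Wrap with DataParallel when more than one GPU is visible; the wrapper
# splits each batch across the GPUs and gathers the outputs on device 0.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.025, momentum=0.9)

# One illustrative training step on random data.
inputs = torch.randn(8, 3, 32, 32, device=device)
targets = torch.randint(0, 10, (8,), device=device)
optimizer.zero_grad()
loss = criterion(model(inputs), targets)
loss.backward()
optimizer.step()
```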

Personally, I think 4.9M refers to the number of learnable parameters (4.9 million) rather than the storage size.
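As a quick way to check this, here is a minimal PyTorch sketch that counts learnable parameters; `count_learnable_params` and the toy model are illustrative, not taken from the repository in question.

```python
import torch.nn as nn

def count_learnable_params(model: nn.Module) -> int:
    """Count trainable parameters (what the 'M' figure refers to), not bytes on disk."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Toy model just to demonstrate the count.
model = nn.Sequential(nn.Linear(784, 512), nn.ReLU(), nn.Linear(512, 10))
n_params = count_learnable_params(model)
print(f"{n_params / 1e6:.2f}M learnable parameters")
# Storage would additionally depend on the dtype, e.g. ~4 bytes/param for float32.
```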

Hi~ Maybe you need to configure the `-model_config` parameter yourself to match the GNMT setting.
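The exact schema expected by `-model_config` depends on the toolkit, so the following is only a hypothetical sketch of GNMT-style hyperparameters (roughly following Wu et al., 2016) written as a Python dict; the key names are assumptions, not the toolkit's real option names.

```python
import json

# Hypothetical GNMT-style configuration; the key names are illustrative only
# and must be mapped to whatever schema "-model_config" actually expects.
gnmt_like_config = {
    "encoder_type": "lstm",        # GNMT stacks LSTM layers
    "decoder_type": "lstm",
    "num_encoder_layers": 8,       # in GNMT the first encoder layer is bidirectional
    "num_decoder_layers": 8,
    "hidden_size": 1024,
    "attention": "additive",       # Bahdanau-style attention between encoder and decoder
    "residual_connections": True,  # residual links between stacked LSTM layers
    "dropout": 0.2,
}

print(json.dumps(gnmt_like_config, indent=2))
```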

@CrystalWLH That is because this repository is an earlier Transformer version; the original was implemented in TensorFlow, see [Kyubyong/transformer](https://github.com/Kyubyong/transformer/tree/85e2dd95c99993c1e6e4193a0493fef5832e4bc1/tf1.2_legacy).