
Is it possible to train BERT?

Open codertimo opened this issue 5 years ago • 7 comments

Is it possible to achieve the same results as the paper in a short time? Well, I don't have enough GPU and computation power to reproduce results at Google AI's scale.

If we can't train on the full corpus like Google did, then how can we prove that this code is correct? Training on a 256M-size corpus without Google AI-class GPU computation is nearly impossible for me.

If you have any thoughts (e.g., reducing the model size), please let me know!

codertimo avatar Oct 17 '18 13:10 codertimo

The authors plan on releasing the full pre-trained model in a few weeks. There will then be the task of loading their model weights into PyTorch. Perhaps ONNX will work for getting the weights out of TF and into PT?

Once the weights have been loaded, it should be possible to validate the fine-tuning results.
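
A minimal sketch of the ONNX route, assuming the TF model has been exported with the tf2onnx package (the `bert.onnx` path is hypothetical): the trained weights live in the ONNX graph's initializers and can be pulled out as numpy arrays, then mapped to PyTorch parameters by name.

```python
# Sketch only: read weights out of a TF-exported ONNX file.
# Assumes a prior export step such as:
#   python -m tf2onnx.convert --saved-model bert_saved_model --output bert.onnx
import onnx
from onnx import numpy_helper

model = onnx.load("bert.onnx")  # hypothetical path

# ONNX stores trained weights as "initializers" on the graph.
weights = {init.name: numpy_helper.to_array(init)
           for init in model.graph.initializer}

for name, array in weights.items():
    print(name, array.shape)  # inspect names before mapping them to PyTorch
```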

briandw avatar Oct 22 '18 17:10 briandw

@briandw Well, I emailed the authors and they told me the same thing. I agree that we could generate a PyTorch module using ONNX, but it might be impossible to load the weights into this model unless its architecture matches the TF model exactly. Do you have any ideas about this?
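
One alternative to full graph conversion would be to read the TF checkpoint variables directly and copy each tensor into the matching PyTorch parameter. A hedged sketch; the checkpoint path and the name mapping below are hypothetical and would have to be worked out variable by variable against this repo's actual module names.

```python
# Sketch: copy TF checkpoint variables into a PyTorch state dict by name.
import tensorflow as tf
import torch

reader = tf.train.load_checkpoint("bert_model.ckpt")  # hypothetical path

# Hand-written mapping: TF variable name -> PyTorch parameter name.
# Both names here are illustrative, not this repo's real names.
name_map = {
    "bert/embeddings/word_embeddings": "embedding.token.weight",
}

state_dict = {}
for tf_name, pt_name in name_map.items():
    array = reader.get_tensor(tf_name)
    # Note: TF dense kernels are typically transposed relative to nn.Linear.
    state_dict[pt_name] = torch.from_numpy(array)

# model.load_state_dict(state_dict, strict=False)  # once the map is complete
```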

codertimo avatar Oct 23 '18 01:10 codertimo

I can try to import the Tensor2tensor model into PT. https://github.com/tensorflow/tensor2tensor It should be the same process.

briandw avatar Oct 23 '18 17:10 briandw

@codertimo Should the goal be to train BERT from scratch or to fine-tune the model? I'd say that training from scratch isn't realistic right now. Fine-tuning shouldn't be that resource-intensive and would be very valuable; a rough sketch of what that looks like is below.
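
Fine-tuning is cheap because only a small task head sits on top of the pretrained encoder. A minimal sketch, assuming a hypothetical `encoder` module that takes tokens plus segment labels and returns hidden states of shape (batch, seq_len, hidden):

```python
import torch.nn as nn

class BertClassifier(nn.Module):
    """Fine-tuning head: a linear layer over the hidden state at the
    first position (the [CLS] token)."""
    def __init__(self, encoder, hidden, num_labels):
        super().__init__()
        self.encoder = encoder  # pretrained, possibly frozen
        self.classifier = nn.Linear(hidden, num_labels)

    def forward(self, tokens, segment_labels):
        hidden_states = self.encoder(tokens, segment_labels)
        return self.classifier(hidden_states[:, 0])  # [CLS] position
```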

briandw avatar Oct 23 '18 17:10 briandw

@briandw Thank you for your advice. Currently my goal is training from scratch with a smaller model that can actually be trained in our GPU environment (sketched below), because I want to keep this implementation useful for anyone who needs to train on their own specific domain or language.

But as you said, moving a trained TF model to PyTorch is another goal of this project, so I'd like to implement the transfer code for loading the pretrained model too. I'll make a plan and let you all know when the pretrained model and the official BERT implementation come out.
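
For the smaller-model route, the usual knobs are hidden size, layer count, and attention heads. A hedged sketch, assuming this repo's `BERT` constructor takes (vocab_size, hidden, n_layers, attn_heads) as in the README; the sizes are illustrative, not tuned:

```python
# Sketch: a scaled-down BERT for modest hardware.
from bert_pytorch.model import BERT

vocab_size = 30000  # hypothetical vocabulary size
small_bert = BERT(
    vocab_size,
    hidden=256,    # vs. 768 in BERT-Base
    n_layers=4,    # vs. 12
    attn_heads=4,  # vs. 12
)
print(sum(p.numel() for p in small_bert.parameters()), "parameters")
```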

codertimo avatar Oct 24 '18 01:10 codertimo

Does this code support distributed training? I mean multi-machine with multi-GPU...
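
For reference, multi-machine multi-GPU training in PyTorch is usually done with DistributedDataParallel; whether this repo wires it up is a separate question. A generic sketch, not specific to this codebase:

```python
# Generic PyTorch DDP setup (one process per GPU), launched with e.g.:
#   python -m torch.distributed.launch --nproc_per_node=4 train.py
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def setup_ddp(model, local_rank):
    # NCCL is the usual backend for GPU training.
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(local_rank)
    model = model.to(local_rank)
    # DDP all-reduces gradients across processes/machines every step.
    return DDP(model, device_ids=[local_rank])
```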

jacobrxz avatar Dec 19 '18 09:12 jacobrxz

@codertimo Have you already trained this model on a small dataset? If so, would you share some info about it? For example, what if we use p2.8xlarge GPUs to train on a 1M dataset from scratch? (Thanks for the wonderful work, BTW.)

BerenLuthien avatar Feb 19 '19 06:02 BerenLuthien