Use pre-trained transformers
Hi, does the documentation include any benchmarks for using pre-trained transformers?
Basically, adding a pre-trained transformer as the first layers of the encoder, similar to how BERT is used for text. For example, a general-purpose one like BEiT from Hugging Face, or even better, one custom pre-trained by you guys? A rough sketch of what I mean is below.
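Just to illustrate the idea, here is a minimal hypothetical sketch (plain PyTorch + Hugging Face, not DeepFaceLab's actual API; the checkpoint name, `BeitEncoder` class, and `latent_dim` are my own example choices): a frozen pre-trained BEiT backbone used as the encoder front-end, with a small trainable head projecting its features to a latent code.

```python
# Hypothetical sketch only - not DeepFaceLab code.
# Idea: frozen pre-trained vision transformer as the encoder front-end,
# with a small trainable projection head on top.
import torch
import torch.nn as nn
from transformers import BeitModel

class BeitEncoder(nn.Module):
    def __init__(self, latent_dim=512):
        super().__init__()
        # General-purpose pre-trained transformer; checkpoint name is just an example
        self.backbone = BeitModel.from_pretrained("microsoft/beit-base-patch16-224")
        for p in self.backbone.parameters():
            p.requires_grad = False  # keep the transformer frozen, train only the head
        hidden = self.backbone.config.hidden_size  # 768 for the base model
        self.head = nn.Linear(hidden, latent_dim)  # trainable projection to the latent code

    def forward(self, pixel_values):
        # pixel_values: (batch, 3, 224, 224), already resized/normalized for BEiT
        out = self.backbone(pixel_values=pixel_values)
        cls_token = out.last_hidden_state[:, 0]  # [CLS] token as the image representation
        return self.head(cls_token)

encoder = BeitEncoder()
latent = encoder(torch.randn(2, 3, 224, 224))
print(latent.shape)  # torch.Size([2, 512])
```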
What I do see in the docs is some already pre-trained models. Would you favor training from those instead of from a transformer? And again, do you have any benchmarks to support that?
Btw, very cool to have this library open source.