
Use pre-trained transformers

Open iuiu34 opened this issue 2 years ago • 0 comments

Hi, is there any benchmark in the documentation on using pre-trained transformers?

Basically, I mean adding a pre-trained transformer as the starting layer of the encoder, similar to how BERT is used for text. For example, a general-purpose one like BEiT from Hugging Face, or even better, one custom pre-trained by you? Something like the sketch below.
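To make the idea concrete, here's a rough sketch (assuming PyTorch and the Hugging Face transformers library; `microsoft/beit-base-patch16-224` is just one public checkpoint, and nothing like this exists in DeepFaceLab today) of wrapping a frozen BEiT as the encoder's first stage:

```python
import torch
from transformers import AutoImageProcessor, BeitModel

# Hypothetical sketch: use a pre-trained BEiT as a frozen feature
# extractor feeding into the rest of a face encoder. DeepFaceLab
# itself does not ship this integration.
CHECKPOINT = "microsoft/beit-base-patch16-224"
processor = AutoImageProcessor.from_pretrained(CHECKPOINT)
backbone = BeitModel.from_pretrained(CHECKPOINT)
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False  # freeze the pre-trained weights

def encode(images):
    # images: list of PIL.Image or numpy arrays (H, W, 3)
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        out = backbone(**inputs)
    # last_hidden_state: (batch, num_patches + 1, hidden_size);
    # these features would then go into the trainable decoder layers
    return out.last_hidden_state
```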

What I do see in the docs are some already pre-trained models. Would you recommend training from those instead of from a transformer? And again, do you have a benchmark to support that?

Btw, very cool to have this lib as open source.

iuiu34 · Feb 14 '23 08:02