Shuai Yang
I guess you are using the VToonify-T model, right? My training uses the synthetic data from Toonify (your StyleGAN2 model), which only covers the face regions and parts...
I think you can set batch_size to a small value like 1 to see whether the memory is still insufficient. `--batch_size 1` https://github.com/williamyang1991/VToonify/blob/920b56478835873169b31dd3d134d29e7e16f94b/style_transfer.py#L35
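For reference, a minimal invocation with the smallest batch size might look like the sketch below (the content and checkpoint paths are placeholders, not from the original comment; adjust them to your setup):

```shell
# Reduce GPU memory usage by processing one frame at a time.
# Paths below are hypothetical examples.
python style_transfer.py --content ./data/your_video.mp4 \
       --ckpt ./checkpoint/vtoonify_d_cartoon/vtoonify_s_d.pt \
       --batch_size 1
```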
I have also encountered this issue previously. It happens when no face is detected in your video. I think this is a bug in dlib.shape_predictor. Most of the time,...
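One way to fail fast when detection returns nothing is to guard the detector's output before passing it downstream. A minimal sketch, assuming dlib is installed; the helper `first_face` is hypothetical, not part of VToonify:

```python
def first_face(detections):
    """Return the first detected face rectangle, or raise a clear error
    instead of letting downstream alignment code crash confusingly."""
    if len(detections) == 0:
        raise ValueError("No face detected in the frame; "
                         "try a clearer, front-facing input image.")
    return detections[0]

# Typical usage with dlib (shown as comments since it needs model files):
# import dlib
# detector = dlib.get_frontal_face_detector()
# predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
# face = first_face(detector(img, 1))
# landmarks = predictor(img, face)
```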
You are welcome. > I have also encountered this issue previously. It happens when no face is detected in your video. I think this is a bug in the...
Yes, quite weird. This week I have generated many results with the code in my local environment without hitting this issue. I only got the error on Colab.
I'm very confused. This face detection code is directly copied from DualStyleGAN, where no one has reported this issue. https://github.com/williamyang1991/DualStyleGAN/blob/25f86a445362dd7bdf2ad4391afbd9dca162e9c1/model/encoder/align_all_parallel.py#L32-L49
## Use `--padding` to prevent cropping.

## Example:
```
python style_transfer.py --content ./data/038648.jpg \
       --scale_image --backbone toonify \
       --ckpt ./checkpoint/vtoonify_t_arcane/vtoonify.pt \
       --padding 600 600 600 600  # use large...
```
I haven't tried this before. Our content features could be applied to multiple faces at a time, but our style code is extracted from only one face. So I don't...
We have tried the style of Anime, but the results are not satisfactory. For styles that are far from real faces, the correspondence between the inputs and the...
You only need to train the corresponding encoder to match the DualStyleGAN using the following two commands: https://github.com/williamyang1991/VToonify#train-vtoonify-d
```
# for pre-training the encoder
python -m torch.distributed.launch --nproc_per_node=N_GPU --master_port=PORT train_vtoonify_d.py...
```