everybody_dance_now_pytorch

A PyTorch Implementation of "Everybody Dance Now" from Berkeley AI lab.

Issues (9)

Hello, I seem to have found a problem when trying to train on high-resolution video. After I resized the normal video from 512*512 resolution to 1024*1024, I found that some rectangular...

Thank you for your work! I have downloaded a Bruno Mars video and resized it to 288*512, but I can't correctly get the coordinates with ./pose_estimator, and I found there...

There is a typo in compute_coordinates_for_video line 249. It should read: `if not os.path.isdir(pose_dir):`
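For context, a minimal sketch of how the corrected check might sit in the script, assuming `pose_dir` is the output directory for extracted pose coordinates; the surrounding code and path here are assumptions, not the repository's actual implementation:

```python
import os

# Hypothetical context for the fix suggested above: create the pose output
# directory only when it does not already exist.
pose_dir = "./datasets/train_A"  # assumed output path, not from the repo

if not os.path.isdir(pose_dir):
    os.makedirs(pose_dir)
```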

I am running into this syntax error when running compute_coordinates_for_video.py ![image](https://user-images.githubusercontent.com/13815814/89264870-3c021880-d666-11ea-91ea-17d4b19ac338.png) My datasets/train_B just contains a bunch of images extracted from a video ![image](https://user-images.githubusercontent.com/13815814/89265062-84b9d180-d666-11ea-8dce-59cef61b1ac8.png) Please kindly let me know why...

`torch.utils.serialization.load_lua` is no longer supported, so I have fixed it by including https://github.com/bshillingford/python-torchfile. Now it's working fine with the latest PyTorch versions.
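As a minimal sketch of that replacement, assuming the Lua model weights are stored in a Torch7 `.t7` file (the file name below is hypothetical):

```python
# Old (removed in recent PyTorch releases):
#   from torch.utils.serialization import load_lua
#   data = load_lua("pose_model.t7")

# Replacement using https://github.com/bshillingford/python-torchfile
# (install with `pip install torchfile`).
import torchfile

data = torchfile.load("pose_model.t7")  # hypothetical file name
```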

I got the error when running `python ./pose_estimator/compute_coordinates_for_video.py`: `2020-09-02 08:55:35.129558: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory...

I executed the script `sh ./scripts/test_full_512.sh` and the following error appeared: /media/ouc/4T_B/gc/everybody_dance_now_pytorch/models/pose2vidHD_model.py:146: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead. input_label = Variable(input_label, volatile=infer)...
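A minimal sketch of how that warning can be silenced, wrapping inference in a `torch.no_grad()` context instead of the removed `volatile` flag; the model and tensors below are stand-ins, not the repository's actual pose2vidHD code:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)          # stand-in for the generator network
input_label = torch.randn(1, 4)  # stand-in for the label map tensor
infer = True

# Old style (triggers the UserWarning on recent PyTorch):
#   input_label = Variable(input_label, volatile=infer)
# New style: disable autograd with a context manager during inference.
if infer:
    with torch.no_grad():
        output = model(input_label)
else:
    output = model(input_label)
```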

Thank you for the work. Can you share your result video demo? I want to compare it to vid2vid, because I use vid2vid for pose transfer.