shoutOutYangJie
You can find the code here: https://github.com/shoutOutYangJie/EG3D-pytorch The results are not as good as the paper's, but I think you can use it as a reference.
 Looking forward to your reply.
 Like this. Each row is generated from a different noise z, and each column is generated with the same camera parameters. You can see that each row contains different IDs, although...
In your paper, you use a feature add-averager to merge multiple source images, but in the test code you just average the probability maps. That seems inconsistent.
Thanks for your great work. I want to use your trained DeepMask model. Could you upload it, if it is convenient? Thank you!
 Hi, can you predict this image, please? The mask can be obtained with the following code:
```python
import cv2
import numpy as np

def get_mask(path):
    m = cv2.imread(path)
    new_mask = np.zeros(shape=m.shape, dtype=np.uint8)
    m = np.mean(m, ...
```
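The snippet above is cut off, so here is a minimal sketch of one plausible completion. It assumes the intent is to average the color channels into an intensity map and threshold it into a binary mask; the threshold value and the exact behavior of `get_mask` are assumptions, not the original author's code (a synthetic array stands in for `cv2.imread`):

```python
import numpy as np

def get_mask(img: np.ndarray, threshold: int = 127) -> np.ndarray:
    # Average the color channels into a single-channel intensity map.
    gray = np.mean(img, axis=-1)
    # Assumed rule: pixels brighter than the threshold become foreground (255).
    mask = np.zeros(shape=gray.shape, dtype=np.uint8)
    mask[gray > threshold] = 255
    return mask

# Synthetic 4x4 "image": left half dark, right half bright.
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[:, 2:] = 200
print(get_mask(img)[0].tolist())  # → [0, 0, 255, 255]
```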
I noticed your aflow has two steps during training. In the 'raw data' step, you use 'stage1' as the second stage. But I observe that the hyperparameters in 'stage1' are the same...