Xingang Pan
@forkbabu Thanks :-)
@30qwq This is a little bit odd. Have you revised the code? I notice several differences from my results: 1) The viewpoint changes of the pseudo images are too large. I...
@30qwq It is normal that pseudo images look like 3D objects, as they are rendered via a 3D mesh renderer. The 3D effects look more obvious in your case because the...
@30qwq I see two possible reasons: 1) In my original code, the backgrounds of the pseudo samples (`im_rotate2`) should be gray, as the `grid_sample` mode pads zeros in the background....
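As a rough illustration of the padding behaviour mentioned above (the calls below are standard PyTorch; the [-1, 1] normalization convention is my assumption):

```python
import torch
import torch.nn.functional as F

# Toy image in [-1, 1], as GAN outputs typically are (assumption).
im = torch.rand(1, 3, 8, 8) * 2 - 1

# A sampling grid that falls partly outside the image; out-of-range
# locations are filled according to padding_mode.
ys, xs = torch.meshgrid(torch.linspace(-1.5, 1.5, 8),
                        torch.linspace(-1.5, 1.5, 8), indexing='ij')
grid = torch.stack([xs, ys], dim=-1).unsqueeze(0)  # (1, 8, 8, 2)

# padding_mode='zeros' fills the background with 0, which corresponds to
# mid-gray once the image is mapped back from [-1, 1] to [0, 1].
im_rotate2 = F.grid_sample(im, grid, padding_mode='zeros', align_corners=False)
gray = (im_rotate2 + 1) / 2  # background pixels become 0.5, i.e. gray
```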
@yzliu567 Hi, you need to revise the `rand_light: [-0.9,0.9,-0.3,0.8,-0.1,0.7,-0.4]` parameter in the config file. The corresponding light sampling code is at `gan2shape/model.py` Line 752-762. This is also described in the...
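In case it helps, here is a hedged sketch of how such ranges could be turned into sampled lighting parameters; the grouping of the seven numbers into ambient/diffuse/direction ranges is my assumption, so please check `gan2shape/model.py` lines 752-762 for the actual meaning:

```python
import torch

rand_light = [-0.9, 0.9, -0.3, 0.8, -0.1, 0.7, -0.4]  # from the config

def sample_light(rand_light, batch_size=1):
    # Assumed layout: (amb_min, amb_max, diff_min, diff_max, lx_min, lx_max, ly)
    amb_min, amb_max, diff_min, diff_max, lx_min, lx_max, ly = rand_light
    ambient = torch.rand(batch_size, 1) * (amb_max - amb_min) + amb_min
    diffuse = torch.rand(batch_size, 1) * (diff_max - diff_min) + diff_min
    lx = torch.rand(batch_size, 1) * (lx_max - lx_min) + lx_min
    ly_t = torch.full((batch_size, 1), ly)
    # (B, 4) lighting vector: ambient, diffuse, light direction x/y
    return torch.cat([ambient, diffuse, lx, ly_t], dim=1)
```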
@30qwq Our approach is by design an online approach, i.e., it is trained on each test image. You just need to perform GAN inversion to obtain the latent code, and then apply...
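For reference, GAN inversion here typically means optimizing a latent code so the generator reproduces the test image. The sketch below uses a hypothetical `generator` and a plain L2 loss; the losses and initialization used in this repo may differ:

```python
import torch
import torch.nn.functional as F

def invert(generator, target, num_steps=500, lr=0.1):
    """Optimize a latent code so generator(latent) matches the target image.
    `generator` is a hypothetical pretrained GAN; real pipelines often add
    perceptual losses or an encoder-based initialization."""
    latent = torch.zeros(1, 512, requires_grad=True)  # assumed latent size
    optimizer = torch.optim.Adam([latent], lr=lr)
    for _ in range(num_steps):
        optimizer.zero_grad()
        recon = generator(latent)
        loss = F.mse_loss(recon, target)
        loss.backward()
        optimizer.step()
    return latent.detach()
```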
@JiuTongBro We use an off-the-shelf scene parsing model, PSPNet, to parse the foreground and background. The code is in the `parse_mask` function in `gan2shape/model.py` (line 386).
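If PSPNet is inconvenient, any off-the-shelf segmentation network gives a similar mask. Below is a rough stand-in using torchvision's DeepLabV3 (not the model used in this repo), keeping only the pixels labeled as the assumed foreground class:

```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

# Stand-in for PSPNet: any pretrained semantic segmentation model can
# separate foreground from background.
model = deeplabv3_resnet50(weights="DEFAULT").eval()

@torch.no_grad()
def parse_mask(image, fg_class=15):  # 15 = 'person' in the VOC label set (assumption)
    """image: (1, 3, H, W) tensor, ImageNet-normalized. Returns a binary mask."""
    logits = model(image)['out']          # (1, num_classes, H, W)
    labels = logits.argmax(dim=1)         # (1, H, W)
    return (labels == fg_class).float()   # 1 = foreground, 0 = background
```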
@RainCoderJoe Hi, you can use the method in this repo https://github.com/elliottwu/unsup3d to estimate the mean and covariance of viewpoint and lighting variations.
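As a minimal sketch, assuming you already have per-image viewpoint (or lighting) vectors predicted by unsup3d, the statistics can be computed with numpy; the file names and the 6-dimensional viewpoint layout are placeholders:

```python
import numpy as np

# views: (N, 6) array of per-image viewpoint parameters predicted by unsup3d
# (rotation and translation); the exact dimensionality is an assumption.
views = np.load('views.npy')

view_mean = views.mean(axis=0)          # per-dimension mean
view_cov = np.cov(views, rowvar=False)  # (6, 6) covariance matrix

# How the repo packages its view_mvn files is not something I am sure of,
# so treat this save format as a placeholder.
np.savez('view_mvn.npz', mean=view_mean, cov=view_cov)
```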
Yes, that's what I mean. Besides, you may also try using the provided view_mvn files and see if they work. Usually, the view_mvn does not have to be very accurate.
@terrybelinda Hi, thanks for your interest. I evaluate on the Synface dataset from https://github.com/elliottwu/unsup3d, which has ground-truth depth. Apart from this, I think the ShapeNet (https://shapenet.org/) dataset also has...