yeeyang
Code from line 112 of `models/bicycle_gan_model.py`: `self.fake_B_random = self.netG(self.real_A_encoded, self.z_random)`. Why do we generate `fake_B_random` using `real_A_encoded` instead of `real_A_random`? I know that it does not really matter, since...
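To make the observation concrete, here is a minimal sketch of the two generator calls the question refers to. The `netG` below is a hypothetical stand-in (the real one is a conv net); the variable names follow `bicycle_gan_model.py`, where the input batch is split in half between the cVAE-GAN path (encoded z) and the cLR-GAN path (randomly sampled z):

```python
import numpy as np

def netG(real_A, z):
    # hypothetical stand-in generator: combines the input image with the latent code
    return real_A + z.mean()

# toy batch of two input images A; BicycleGAN splits the batch in half:
# one half drives the cVAE-GAN path, the other the cLR-GAN path
real_A = np.ones((2, 4, 4))
real_A_encoded, real_A_random = real_A[0], real_A[1]

z_encoded = np.full(8, 0.5)  # z obtained by encoding the paired B (cVAE-GAN)
z_random = np.zeros(8)       # z drawn from the prior (cLR-GAN)

fake_B_encoded = netG(real_A_encoded, z_encoded)
# the line in question: fake_B_random reuses real_A_encoded, not real_A_random
fake_B_random = netG(real_A_encoded, z_random)
```

Sharing `real_A_encoded` between both calls means the two fakes differ only in their latent code, which is what the diversity (z-reconstruction) loss compares.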
[`mmdeploy/csrc/mmdeploy/codebase/mmpose/topdown_affine.cpp`](https://github.com/open-mmlab/mmdeploy/blob/master/csrc/mmdeploy/codebase/mmpose/topdown_affine.cpp) I am trying to modify the file above to allow preprocessing on CUDA. The model I am using is [HRNet](https://github.com/open-mmlab/mmpose/blob/master/configs/wholebody/2d_kpt_sview_rgb_img/topdown_heatmap/coco-wholebody/hrnet_w48_coco_wholebody_256x192.py). `TopDownAffine` uses `cv2.warpAffine`, but at inference it only...
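For reference, the core of the `TopDownAffine` step is building a 2x3 affine matrix that maps the detected person box onto the fixed network input (192x256 for this config), which is then passed to the warp. Below is a minimal NumPy sketch of that matrix construction (rotation omitted for brevity; the `scale * 200` convention follows mmpose, and `get_affine_matrix` is a hypothetical helper name, not the library's API):

```python
import numpy as np

def get_affine_matrix(center, scale, out_w, out_h):
    """Build a 2x3 affine matrix mapping a person box (center, scale) onto
    an out_w x out_h crop. No rotation, for brevity."""
    src_w = scale * 200.0  # mmpose convention: scale is in units of 200 px
    # three corresponding points: box center, a point above it, and a third
    # point to the left, fixing translation, scale, and orientation
    src = np.float32([center,
                      [center[0], center[1] - src_w * 0.5],
                      [center[0] - src_w * 0.5, center[1] - src_w * 0.5]])
    dst = np.float32([[out_w * 0.5, out_h * 0.5],
                      [out_w * 0.5, out_h * 0.5 - out_w * 0.5],
                      [0.0, out_h * 0.5 - out_w * 0.5]])
    # solve dst = M @ [src; 1] for the 2x3 matrix M (exact: 3 points)
    A = np.hstack([src, np.ones((3, 1), np.float32)])
    M = np.linalg.lstsq(A, dst, rcond=None)[0].T
    return M

# example: map a box centered at (100, 120) with scale 1.28 onto 192x256
M = get_affine_matrix((100.0, 120.0), 1.28, 192, 256)
```

In OpenCV the same matrix can come from `cv2.getAffineTransform(src, dst)` and be applied with `cv2.warpAffine`; if OpenCV is built with CUDA support, `cv2.cuda.warpAffine` accepts the same matrix on a `GpuMat`, which is one route to moving this step off the CPU.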