Haowei Zhu
@hjxwhy A1: Your hypothesis may be right. I noticed that the self-occlusion regions shift slightly across frames, which means it is hard to pre-define an accurate self-occlusion mask. Images...
Updates: 1. I tried the spatial-wise constraint starting from a model pre-trained without the spatio-temporal constraints. It is indeed better than training without the pre-trained weights, but it is still worse than the baseline model. Besides, I am...
@abing222 No. Only the self-occlusion mask works. The STC and pose consistency losses do not work.
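Since only the self-occlusion mask helps, here is a minimal sketch of how such a mask might be applied when averaging a photometric loss. The `masked_photometric_loss` helper, the array shapes, and the mask convention (True where the ego-vehicle body occludes the view) are my own assumptions, not the repository's actual code.

```python
import numpy as np

def masked_photometric_loss(photo_err, self_occ_mask):
    """Average per-pixel photometric error over non-self-occluded pixels only.

    photo_err     : (H, W) float array of per-pixel photometric error
    self_occ_mask : (H, W) bool array, True where the ego-vehicle body
                    occludes the camera view (hypothetical convention)
    """
    valid = ~self_occ_mask
    if not valid.any():
        return np.float32(0.0)  # every pixel masked out; nothing to average
    return photo_err[valid].mean()

# toy example: mask out the bottom row where the hood would appear
err = np.ones((4, 6), dtype=np.float32)
mask = np.zeros((4, 6), dtype=bool)
mask[-1, :] = True
loss = masked_photometric_loss(err, mask)  # averages over the top 3 rows only
```

Because the mask shifts slightly between frames, a per-frame (or slightly dilated) mask would replace the fixed one here.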
> > @abing222 No. Only the self-occlusion mask works. The STC and pose consistency losses do not work.
>
> At present, I can obtain the absolute scale through spatio, the accuracy...
> > > @abing222 No. Only the self-occlusion mask works. The STC and pose consistency losses do not work.
> > >
> > > At present, I can obtain the absolute...
> Thanks for the reply.
>
> I checked out `9617e65ad351558636de5586a48db848eab578c6` and, with a few modifications, I'm able to run `train.py`.

@jiaqixuac Hi jiaqi, I met the same problem...
> Hi @LionRoarRoar, are you able to install it?

I just followed the Dockerfile to install this package. After that, I checked out 9617e65ad351558636de5586a48db848eab578c6. Thank you. It works now....
> How are you coloring the depth maps for visualization?

Thanks for your reply. The visualization code is similar to https://github.com/TRI-ML/DDAD/blob/master/notebooks/DDAD.ipynb. For the resize function, I also tried the `resize_depth_preserve` function...
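For reference, coloring a depth map typically boils down to normalizing it and passing it through a matplotlib colormap. A minimal sketch in that spirit; the `colorize_depth` helper and its defaults are my own, not the linked notebook's exact code:

```python
import numpy as np
import matplotlib.pyplot as plt

def colorize_depth(depth, vmin=None, vmax=None, cmap="plasma"):
    """Map an (H, W) depth array to an (H, W, 3) uint8 color image."""
    vmin = depth.min() if vmin is None else vmin
    vmax = depth.max() if vmax is None else vmax
    # normalize to [0, 1], guarding against a constant-depth map
    normed = np.clip((depth - vmin) / max(vmax - vmin, 1e-8), 0.0, 1.0)
    rgba = plt.get_cmap(cmap)(normed)  # (H, W, 4) floats in [0, 1]
    return (rgba[..., :3] * 255).astype(np.uint8)

depth = np.random.uniform(1.0, 80.0, size=(192, 640))
img = colorize_depth(depth)  # ready for imshow / image writing
```

Fixing `vmin`/`vmax` across frames keeps the coloring consistent when comparing depth maps from different images.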
> Hi,
>
> Could you advise how to compute the extrinsics between the cameras? I tried this way: `T_cam1tocam5 = np.linalg.inv(data_cam5["extrinsics"]) @ data_cam1["extrinsics"]`. Is this the right way?...
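That formula is consistent with the convention that `extrinsics` stores each camera's camera-to-world pose: `inv(T_cam5_to_world) @ T_cam1_to_world` then maps points from cam1's frame into cam5's frame. A quick numeric sanity check under that assumption (the pose values here are made up):

```python
import numpy as np

def pose(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def Rz(a):
    """Rotation about the z-axis by angle a (radians)."""
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

# made-up camera-to-world poses for two cameras
T_cam1_to_world = pose(Rz(0.3), [1.0, 0.0, 1.5])
T_cam5_to_world = pose(Rz(-0.7), [0.0, 2.0, 1.5])

# relative transform, as in the question
T_cam1_to_cam5 = np.linalg.inv(T_cam5_to_world) @ T_cam1_to_world

# check: cam1 point -> world directly must equal cam1 -> cam5 -> world
p_cam1 = np.array([0.5, -0.2, 3.0, 1.0])
p_world_direct = T_cam1_to_world @ p_cam1
p_world_via_cam5 = T_cam5_to_world @ (T_cam1_to_cam5 @ p_cam1)
assert np.allclose(p_world_direct, p_world_via_cam5)
```

If `extrinsics` were instead world-to-camera transforms, the inverse would sit on the other factor, so it is worth confirming the convention in the dataset's documentation.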
> Sorry, I did not explain that `ubuntu_train_vpt_vtab.sh` is a file equivalent to `slurm_train_vpt_vtab.sh`.
>
> I tried setting lr=1e-2 and weight_decay=1e-4; the result improved to 71.21. If changing...