kiui
@alex3dfan Hi, yes, you have to implement the `backward` for raymarching, so the gradients from `xyzs` and `dirs` can propagate to `rays_o` and `rays_d`, which finally reach your trainable...
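To illustrate the idea (not the actual CUDA kernel in this repo): a minimal sketch of a custom `torch.autograd.Function` whose forward composes sample positions `xyzs = rays_o + t * rays_d`, and whose backward routes the incoming gradient on `xyzs` back into `rays_o` and `rays_d`. The class name and shapes here are assumptions for illustration; the real raymarching backward would do the same accumulation inside a CUDA kernel.

```python
import torch

class MarchRays(torch.autograd.Function):
    """Hypothetical sketch: differentiable sampling along rays."""

    @staticmethod
    def forward(ctx, rays_o, rays_d, ts):
        # rays_o, rays_d: [N, 3]; ts: [N, S] sample depths along each ray.
        ctx.save_for_backward(ts)
        xyzs = rays_o.unsqueeze(1) + ts.unsqueeze(-1) * rays_d.unsqueeze(1)
        return xyzs  # [N, S, 3]

    @staticmethod
    def backward(ctx, grad_xyzs):
        (ts,) = ctx.saved_tensors
        # d(xyzs)/d(rays_o) = 1 and d(xyzs)/d(rays_d) = t,
        # so accumulate the per-sample gradients over the S dimension.
        grad_o = grad_xyzs.sum(dim=1)
        grad_d = (grad_xyzs * ts.unsqueeze(-1)).sum(dim=1)
        return grad_o, grad_d, None  # no gradient w.r.t. ts here
```

Once `rays_o`/`rays_d` receive gradients this way, any trainable parameters they are computed from (e.g. camera poses) get updated by the optimizer as usual.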
@brabbitdousha Hi, you are right. In that case, `rdx` will be [infinity](https://forums.developer.nvidia.com/t/divide-by-zero-handling/11644/4), and ruled out by the later `fminf`, so the behaviour is still correct. But the best way is...
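A NumPy sketch of why the slab-method ray/AABB test stays correct when a direction component is zero: IEEE-754 division gives `rdx = ±inf`, and the per-axis min/max over slab distances discards the infinite values (here `np.fmin`/`np.fmax`, which like CUDA's `fminf`/`fmaxf` ignore NaNs, stand in for the device intrinsics). The function name and box convention are assumptions for illustration.

```python
import numpy as np

def ray_aabb(o, d, lo, hi):
    """Hypothetical slab-method ray vs. axis-aligned-box intersection test."""
    with np.errstate(divide="ignore", invalid="ignore"):
        rd = 1.0 / d               # +/-inf on axes where d == 0
        t1 = (lo - o) * rd         # distance to the near slab plane
        t2 = (hi - o) * rd         # distance to the far slab plane
    # Per-axis entry/exit, then intersect the intervals across axes.
    tmin = np.fmin(t1, t2).max()
    tmax = np.fmax(t1, t2).min()
    return bool(tmin <= tmax and tmax >= 0.0)
```

For a zero component with the origin inside the slab, `t1`/`t2` become `-inf`/`+inf` and drop out of the interval intersection; with the origin outside the slab, both become the same infinity and the intervals fail to overlap, so the ray correctly misses.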
@fotfotfive @Tejas-Deo Could you try to reinstall the latest raymarching: `pip install ./raymarching` and try again?
@yongsiang-fb Hi, they are quite different in implementation (e.g., the occupancy grid). You may increase `num_steps` and `upsample_steps` in non-CUDA-ray mode for better quality.
@chl2 Hi, you could modify `provider.py` to load different camera intrinsics for each image.
@wacyfdyy Sorry that this is not implemented. Maybe you would like to check [ngp_pl](https://github.com/kwea123/ngp_pl), which supports parallel training.
@oculardegen Mesh export is already supported by default: https://github.com/ashawkey/torch-ngp/blob/main/main_nerf.py#L160
@shangchengPKU Hi, it seems the image resolution is incorrect for your dataset (which is assumed to be 800x800 [here](https://github.com/apchenstu/TensoRF/blob/main/dataLoader/blender.py#L20)). You could modify the codebase to adapt to your settings.
@ruanjiyang Hi, 1. It seems the eyes are not well learned. In this case, you could try to fix the eye movement using `--fix_eye 0.25`. 2. The lips sync for...
This is caused by too many Chinese character classes. I'm afraid this will be too large for the MLP to work well, but you could try. In fact, character label...