Ilya Basharov
@AdamRashid96 did you plan to implement the spatial continuity regularization from SparseNeRF?
@AdamRashid96 also, I have found a bug: `torch.nanmean` on its own does not help in the depth-ranking loss. If the mask is all `False`, `torch.nanmean` will return `nan`.
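A minimal sketch of the guard I have in mind around the depth-ranking reduction (the names `masked_mean`, `loss_per_ray`, and `mask` are mine, not from the repo):

```python3
import torch

def masked_mean(loss_per_ray: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Average a per-ray loss over valid rays only.

    If `mask` is all False, every entry is invalid and `torch.nanmean`
    over all-nan values returns nan, which then poisons the total loss.
    """
    if not mask.any():
        # no valid rays in this batch: contribute zero instead of nan
        return torch.zeros((), dtype=loss_per_ray.dtype, device=loss_per_ray.device)
    return loss_per_ray[mask].mean()

# demonstration of the failure mode:
vals = torch.full((4,), float("nan"))
print(torch.nanmean(vals))                                   # tensor(nan)
print(masked_mean(vals, torch.zeros(4, dtype=torch.bool)))   # tensor(0.)
```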
Hello, @programmeddeath1! Did you solve this problem?
@jmwang0117 did you solve this problem? How do you find this depth scaling parameter?
> can I apply segmentation in demo.py?

No, you cannot :( There are errors when simply loading the model with the `--seg` flag. I don't know what to do either.
@Kayzwer Hello, thanks for your effort! In your opinion, which is better to use: [onnx-simplifier](https://github.com/daquexian/onnx-simplifier.git) or [onnx-slim](https://github.com/tsingmicro-toolchain/OnnxSlim.git)?
@SuLvXiangXin do you mean this? (page 4 of the Mip-NeRF paper)
In Zip-NeRF the authors said that […]. So in your code, [base_matrix](https://github.com/SuLvXiangXin/zipnerf-pytorch/blob/b1cb42943d244301a013bd53f9cb964f576b0af4/internal/render.py#LL128C5-L128C17) should be orthonormal. But the first two vectors are perpendicular to `cam_dirs`, while the third is `directions`. Question: `cam_dirs`...
I think everything is fine in the implementation with `directions`, because you construct an orthonormal basis (the length of each vector equals 1) and then multiply it by the radius value defined...
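For what it's worth, a quick numerical sanity check of that claim (just a sketch; `base_matrix` stands for the stacked `[ortho1, ortho2, directions]`, and the helper name is mine):

```python3
import torch

def is_orthonormal(base_matrix: torch.Tensor, atol: float = 1e-5) -> bool:
    # for an orthonormal basis the Gram matrix B^T B is the identity:
    # unit-length columns and zero pairwise dot products
    gram = base_matrix.transpose(-1, -2) @ base_matrix            # [..., 3, 3]
    eye = torch.eye(3, dtype=base_matrix.dtype, device=base_matrix.device)
    return torch.allclose(gram, eye.expand_as(gram), atol=atol)
```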
@SuLvXiangXin here is the updated figure from the [paper](https://arxiv.org/pdf/2304.06706.pdf). Because of this, I'm leaning towards this implementation:
```python3
# two basis vectors parallel to the image plane
rand_vec = torch.randn_like(directions)
ortho1 = F.normalize(torch.cross(directions,...
```
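The preview cuts the snippet off, so here is my guess at how it continues (illustrative only, assuming `directions` is already unit-length; the `directions` definition below is just a stand-in, not the real ray directions):

```python3
import torch
import torch.nn.functional as F

# stand-in for the real per-ray directions, shape [N, 3], unit-length
directions = F.normalize(torch.randn(1024, 3), dim=-1)

# two unit vectors orthogonal to each ray direction and to each other,
# stacked with the (unit) direction itself into a per-ray basis
rand_vec = torch.randn_like(directions)                                  # [N, 3]
ortho1 = F.normalize(torch.cross(directions, rand_vec, dim=-1), dim=-1)  # [N, 3]
ortho2 = F.normalize(torch.cross(directions, ortho1, dim=-1), dim=-1)    # [N, 3]
base_matrix = torch.stack([ortho1, ortho2, directions], dim=-1)          # [N, 3, 3]
```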