
I can't find appearance matrix B in code

Open jhq1234 opened this issue 2 years ago • 2 comments

Hi, thanks for releasing this great code and paper! I really enjoyed it.

In the TensoRF paper, the appearance values A_c(x) are concatenated and then multiplied by the appearance matrix B, and the result is sent into the decoding function S for RGB color regression.

But in the code at https://github.com/apchenstu/TensoRF/blob/17deeedae5ab4106b30a3295709ec3a8a654c7b1/models/tensoRF.py#L223, I can't find the appearance matrix B. I understand that plane_coef_point is the matrix M and line_coef_point is the vector v.

At this line, https://github.com/apchenstu/TensoRF/blob/17deeedae5ab4106b30a3295709ec3a8a654c7b1/models/tensoRF.py#L239, M and v are multiplied and the result goes into basis_mat, which is nn.Linear(144, 27). The 27-dimensional output then goes through the positional-encoding block and the feature decoding function S.

During this process, I can't find the appearance matrix B mentioned in the paper anywhere. Is self.basis_mat the matrix B? If not, where is matrix B, and what is self.basis_mat?
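For reference, my understanding of the feature path can be sketched like this (a minimal, self-contained sketch with random tensors, not the repo's exact code; the shapes 144 and 27 come from the default config, and the variable names mirror the ones in tensoRF.py):

```python
import torch
import torch.nn as nn

n_pts = 5  # number of sampled points (arbitrary for this sketch)

# Stacked per-axis appearance features: 3 VM components x 48 channels = 144.
plane_coef_point = torch.rand(144, n_pts)  # sampled plane features (the "M" part)
line_coef_point = torch.rand(144, n_pts)   # sampled line features (the "v" part)

# The bias-free Linear that projects 144 -> 27 feature dims.
basis_mat = nn.Linear(144, 27, bias=False)

# Element-wise product evaluates the vector-matrix factorization per point;
# basis_mat then maps the concatenated 144-dim features to 27 dims.
app_features = basis_mat((plane_coef_point * line_coef_point).T)
print(app_features.shape)  # torch.Size([5, 27])
```

The 27-dim app_features are what subsequently feed the positional encoding and the rendering MLP.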

jhq1234 avatar Feb 09 '23 06:02 jhq1234

I printed the TensoRF model below. It shows that self.basis_mat is the matrix B: it has 144×27 weights, which the authors use to express B.

```
TensorVMSplit(
  (density_plane): ParameterList(
      (0): Parameter containing: [torch.cuda.FloatTensor of size 1x16x128x128 (GPU 0)]
      (1): Parameter containing: [torch.cuda.FloatTensor of size 1x16x128x128 (GPU 0)]
      (2): Parameter containing: [torch.cuda.FloatTensor of size 1x16x128x128 (GPU 0)]
  )
  (density_line): ParameterList(
      (0): Parameter containing: [torch.cuda.FloatTensor of size 1x16x128x1 (GPU 0)]
      (1): Parameter containing: [torch.cuda.FloatTensor of size 1x16x128x1 (GPU 0)]
      (2): Parameter containing: [torch.cuda.FloatTensor of size 1x16x128x1 (GPU 0)]
  )
  (app_plane): ParameterList(
      (0): Parameter containing: [torch.cuda.FloatTensor of size 1x48x128x128 (GPU 0)]
      (1): Parameter containing: [torch.cuda.FloatTensor of size 1x48x128x128 (GPU 0)]
      (2): Parameter containing: [torch.cuda.FloatTensor of size 1x48x128x128 (GPU 0)]
  )
  (app_line): ParameterList(
      (0): Parameter containing: [torch.cuda.FloatTensor of size 1x48x128x1 (GPU 0)]
      (1): Parameter containing: [torch.cuda.FloatTensor of size 1x48x128x1 (GPU 0)]
      (2): Parameter containing: [torch.cuda.FloatTensor of size 1x48x128x1 (GPU 0)]
  )
  (basis_mat): Linear(in_features=144, out_features=27, bias=False)
  (renderModule): MLPRender_Fea(
    (mlp): Sequential(
      (0): Linear(in_features=150, out_features=128, bias=True)
      (1): ReLU(inplace=True)
      (2): Linear(in_features=128, out_features=128, bias=True)
      (3): ReLU(inplace=True)
      (4): Linear(in_features=128, out_features=3, bias=True)
    )
  )
)
```
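To see why a bias-free nn.Linear is exactly multiplication by a learned matrix (and hence can play the role of B), here is a small sketch (shapes taken from the printout above; the batch size of 5 is arbitrary):

```python
import torch
import torch.nn as nn

# A Linear layer with bias=False computes x @ W.T, i.e. multiplication
# by a learned matrix W of shape (out_features, in_features).
basis_mat = nn.Linear(144, 27, bias=False)

x = torch.rand(5, 144)  # concatenated appearance features for 5 samples
out = basis_mat(x)

# Identical to an explicit matrix multiplication by the weight matrix:
assert torch.allclose(out, x @ basis_mat.weight.T)
print(out.shape)  # torch.Size([5, 27])
```

So basis_mat.weight (shape 27×144) holds the entries of B, learned jointly with the rest of the model.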

LiuTielong avatar Feb 14 '23 03:02 LiuTielong

@LiuTielong Oh! Now I understand this part! Thanks for your kind comment!

jhq1234 avatar Feb 23 '23 06:02 jhq1234