Scaffold-GS
How did you configure your test set?
Thanks for a great paper.
I was wondering, how did you evaluate the numbers like PSNR, SSIM, etc. in your paper?
Specifically, how many test views did you hold out from each dataset to compute these metrics?
I ask because it doesn't seem to be directly mentioned in the paper.
Thanks. For datasets without an official test split, we follow the common configuration: select 1 frame out of every 8 frames as the test set. For BungeeNeRF, we choose the first 30 frames as the test set. Details in https://github.com/city-super/Scaffold-GS/blob/da97ef8257b46d51c432df0df8b62f7c3a3c1079/scene/dataset_readers.py#L165-L178.
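The split described above can be sketched as follows (a minimal illustration of the convention, not the repo's actual code; see the linked `dataset_readers.py` for the real implementation):

```python
# Hold out every 8th frame as a test frame (the common "llffhold=8"
# convention for datasets without an official test split).
def split_every_nth(cam_infos, llffhold=8):
    train = [c for i, c in enumerate(cam_infos) if i % llffhold != 0]
    test = [c for i, c in enumerate(cam_infos) if i % llffhold == 0]
    return train, test

# BungeeNeRF: the first 30 frames form the test set.
def split_bungee(cam_infos, n_test=30):
    return cam_infos[n_test:], cam_infos[:n_test]
```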
Hi, I have a follow-up question: I see that the appearance embedding is constructed based on the number of training cameras, and when switching to eval mode, the uid of the test camera is used directly to query the learned embedding.
If I understand the appearance embedding correctly, it is set up so that view-dependent effects can be better encoded. But since the test cameras and train cameras correspond to different views, their uids carry different meanings in this respect. Wouldn't querying the same learned embedding with a test camera's uid lead to incorrect effects? Thanks.
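To make the concern concrete, here is a toy sketch of the pattern being described (all names are hypothetical and simplified for illustration; the actual embedding lives in the repo's model code):

```python
import numpy as np

# An embedding table sized by the number of *training* cameras.
rng = np.random.default_rng(0)
num_train_cams, dim = 100, 32
appearance_table = rng.normal(size=(num_train_cams, dim))

def query_appearance(uid):
    # At eval time the test camera's uid indexes a row that was
    # optimized for whichever *training* view happened to share that
    # uid -- this index reuse is the mismatch raised above.
    return appearance_table[uid % num_train_cams]
```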