AdversarialTexture
About Qianyi Zhou's data - the fountain model
This is great work, and thank you for sharing the code!
I tried running the code with your chair data, and it did show a significant improvement over the L1 result.
Then I edited render_scan.py to read data from Zhou's fountain model: I chose around 30 RGBD frames as keyframes and used the code from "Let there be color" to generate the obj and mtl files from those 30 keyframes. I then ran the code with the default parameters (λ=10.0, iter=4001).
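In case the frame selection matters, this is roughly how I pick the keyframes (a minimal sketch; the directory layout, file names, and stride come from my local copy of the fountain sequence, not from the AdversarialTexture repo):

```python
import os

# Hypothetical on-disk layout of the fountain sequence:
#   fountain/rgb/00000.jpg, ...  and  fountain/depth/00000.png, ...
SEQ_DIR = "fountain"
NUM_KEYFRAMES = 30

rgb_frames = sorted(os.listdir(os.path.join(SEQ_DIR, "rgb")))

# Take ~30 evenly spaced frames as keyframes.
stride = max(1, len(rgb_frames) // NUM_KEYFRAMES)
keyframes = rgb_frames[::stride][:NUM_KEYFRAMES]

for name in keyframes:
    rgb_path = os.path.join(SEQ_DIR, "rgb", name)
    depth_path = os.path.join(SEQ_DIR, "depth",
                              os.path.splitext(name)[0] + ".png")
    print(rgb_path, depth_path)
```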
However, I don't get results as good as those shown in the supplemental material.
Here's what I get:
Before (L1):
After Iterations:
Is there anything I'm missing to run such RGBD datasets?
Thanks again for the interesting work!
Hi @OneEyedEagle Where did you download the `obj` and `mtl` files for the fountain dataset? I remember the Let there be color project from here, but it seems to be offline now. Is there a mirror for the dataset? If so, could you drop the link here, please?
Update:
I was able to generate the `obj` with UVs (required for pre-processing) using xatlas here, and then pass it on to the pre-processing step and run the network on the pre-processed data. Hope this helps someone.
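For anyone who wants to reproduce this, here is roughly what that step looks like with the xatlas Python bindings (a minimal sketch assuming the `xatlas` and `trimesh` pip packages; the file names are placeholders):

```python
import trimesh
import xatlas

# Load the reconstructed fountain mesh (placeholder file name).
mesh = trimesh.load_mesh("fountain.obj")

# Compute a UV parametrization: vmapping maps each new vertex back to
# an original vertex, indices are the new triangles, uvs are per-vertex UVs.
vmapping, indices, uvs = xatlas.parametrize(mesh.vertices, mesh.faces)

# Write out an obj with UVs for the pre-processing step to consume.
xatlas.export("fountain_uv.obj", mesh.vertices[vmapping], indices, uvs)
```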
Hi @OneEyedEagle, I have run into you three times within a week. What a coincidence! (Patch-Based, G2LTex, AdversarialTexture)