StructDepth
What changes did you make in the skimage code?
Thanks for sharing the code! What changes did you make in the skimage code? Can you release the modified code and explain why the changes were needed?
Hello! I ran into a problem and want to ask if you have ever seen it before. I made some changes to the network; at first training ran fine, but after several epochs this problem occurred:
No. pred_depth is generated by `_, depth = disp_to_depth(disp, self.opt.min_depth, self.opt.max_depth)`. Maybe you could check this first and then keep tracing through the function call chain.
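For reference, a minimal sketch of the monodepth2-style `disp_to_depth` helper that this line appears to call (the exact implementation in the repo may differ), plus a finiteness check that is useful when a network change starts producing bad depths:

```python
import torch

def disp_to_depth(disp, min_depth, max_depth):
    """Convert a sigmoid disparity map in [0, 1] to depth.

    Depth is bounded to [min_depth, max_depth] via the reciprocal
    of a linearly scaled disparity (monodepth2-style).
    """
    min_disp = 1.0 / max_depth
    max_disp = 1.0 / min_depth
    scaled_disp = min_disp + (max_disp - min_disp) * disp
    depth = 1.0 / scaled_disp
    return scaled_disp, depth

# Quick sanity check while debugging a modified network:
disp = torch.rand(1, 1, 192, 256)  # stand-in for the decoder output
_, depth = disp_to_depth(disp, 0.1, 10.0)
assert torch.isfinite(depth).all(), "NaN/Inf in depth -- inspect the decoder output"
```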
OK, thanks! I'll try it. I also want to ask: this problem didn't occur before I changed the network, and it appeared only after the changes. Does that indicate the problem is in the network?
Yes.
OK!
I think I may have found the reason: it's a GPU memory problem. I can continue training from the previous model. I also want to ask: is it normal for accuracy to drop sharply for the first several epochs after I add a new sub-network to the original network?
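A minimal sketch of continuing training from previously saved weights after adding new layers; the ResNet-18 encoder and checkpoint path here are assumptions, not the repo's actual layout:

```python
import torch
import torchvision.models as models

# Rebuild the network, then load the last saved weights.
# `strict=False` skips keys for layers that were added after the
# checkpoint was written, so the new layers keep their random init.
encoder = models.resnet18()
state = torch.load("checkpoints/weights_19/encoder.pth", map_location="cpu")
encoder.load_state_dict(state, strict=False)
encoder.cuda()
```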
Yes, I think so. It is normal at first, since the newly added part is not yet compatible with the pretrained weights (it starts from random initialization).
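One common way to soften that initial drop is to give the new layers a larger learning rate than the pretrained backbone; a sketch with hypothetical module names `backbone` and `new_head`:

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Hypothetical split: `backbone` holds pretrained weights,
# `new_head` is the freshly added sub-network (random init).
backbone = models.resnet18(pretrained=True)
new_head = nn.Conv2d(512, 1, kernel_size=3, padding=1)

optimizer = torch.optim.Adam([
    {"params": backbone.parameters(), "lr": 1e-5},  # small LR protects pretrained weights
    {"params": new_head.parameters(), "lr": 1e-4},  # larger LR for the new layers
])
```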
Thanks! It's getting better now! 😃
Hello! I want to ask you some questions about StructDepth. I want to use it as my baseline (without the losses), and I'm not sure: is its result the DSO one in this picture?
And when I run StructDepth with the model pretrained on ImageNet, these problems occur:
It didn't change! So strange. At first I used P2Net's model as the pretrained model, as StructDepth does, and the result was normal. Since I want to use the variant without StructDepth's losses as my baseline, I wanted to switch the pretrained model to the ImageNet one. Have you ever met this problem?
No, I haven't. I think training StructDepth needs a good depth network at the start. That means if you change the pretrained model, the initial depth network will be far worse than P2Net's.
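To make the two initializations being compared concrete, a sketch (the P2Net checkpoint path and key layout are hypothetical; adapt them to the actually released files):

```python
import torch
import torchvision.models as models

# Option A: initialize the depth encoder from ImageNet weights only.
encoder = models.resnet18(pretrained=True)  # newer torchvision: weights=models.ResNet18_Weights.IMAGENET1K_V1

# Option B: warm-start from a P2Net-style depth checkpoint instead,
# keeping only the keys that match this encoder's state dict.
ckpt = torch.load("models/p2net_encoder.pth", map_location="cpu")
own_state = encoder.state_dict()
encoder.load_state_dict({k: v for k, v in ckpt.items() if k in own_state},
                        strict=False)
```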
Yes, it needs a good network when using the norm loss and the planar loss. But if I don't use those two losses at first, change the network, train a good network, and only then add the losses, is that OK? That is what I did, and the problem occurred. Thanks for your reply!
I didn't try it. I think it could work.
OK, thanks! I want to ask: is P2Net without its planar loss the same as StructDepth without its Manhattan normal loss and planar loss?
Sorry for the late reply. If "same" refers to accuracy, maybe not, since the batch size, training set, and pretrained model are different. If we don't take those into account, I think they are the same: both use the same network structure and training method, and both use sparse RGB points.
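In other words, the baseline discussed here amounts to toggling the structural regularizers off on top of a shared photometric objective; a sketch with placeholder weights (not the paper's values):

```python
def total_loss(photometric, smoothness, planar, manhattan_norm,
               w_planar=0.1, w_norm=0.05, use_struct_losses=True):
    """Hypothetical loss composition illustrating the baseline question.

    With use_struct_losses=False, a StructDepth-style objective reduces
    to a P2Net-style one (photometric + smoothness), which is the
    "same baseline" comparison in this thread.
    """
    loss = photometric + smoothness
    if use_struct_losses:
        loss = loss + w_planar * planar + w_norm * manhattan_norm
    return loss
```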
OK! Thanks!
Hello! I want to know how to get the ground-truth image for a picture in the NYUv2 dataset. I'm not clear about it; could you give me some idea? Thanks!
I downloaded it a long time ago; it should be in the evaluation section of this repo: https://github.com/svip-lab/Indoor-SfMLearner
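If that download is unavailable, one alternative (an assumption, not necessarily what that repo ships) is to read the ground-truth depths directly from the official labeled NYUv2 release with h5py:

```python
import h5py
import numpy as np

# nyu_depth_v2_labeled.mat is a MATLAB v7.3 file, readable with h5py.
# Its "depths" dataset holds 1449 metric depth maps stored as
# (1449, 640, 480); transpose each map to the usual (480, 640) layout.
with h5py.File("nyu_depth_v2_labeled.mat", "r") as f:
    depths = np.array(f["depths"])  # depth in meters
    gt = depths[0].T                # first frame, (480, 640)
print(gt.shape, gt.min(), gt.max())
```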
Thanks! I'll take a look.
Thanks so much! I can do it now! 😊
Happy to hear that!