foxkw
> > > Not averaging the loss is fine too, since you will adjust the learning rate or the balance weights between the losses anyway.

Hi, when training on the CULane dataset, I noticed that during the first 10 epochs the train_loss decreases while the val_loss stays around some fixed value (e.g. 0.5). Have you ever run into this? I am not sure whether this is overfitting.
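As a minimal sketch of the balance-weight idea mentioned above: an unaveraged loss can be compensated by scaling each task loss before summing. The task names and weight values below are purely illustrative, not taken from the project.

```python
# Hypothetical sketch: combining per-task losses with balance weights.
# Task names ("seg", "exist") and weights are illustrative assumptions.

def combine_losses(losses, weights):
    """Weighted sum of per-task scalar losses.

    losses:  dict mapping task name -> scalar loss value
    weights: dict mapping task name -> balance weight
    """
    return sum(weights[name] * value for name, value in losses.items())

total = combine_losses(
    {"seg": 0.8, "exist": 0.2},
    {"seg": 1.0, "exist": 0.1},
)
print(round(total, 4))  # 0.82
```

Rescaling the weights by a constant has the same effect as rescaling the learning rate, which is why not averaging is harmless as long as the two are tuned together.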
Hi, my lane detection project needs a labeling tool to generate labeled images from JSON files. Fortunately, at TuSimple/tusimple-benchmark#3 I saw that you have written your own labeling tool...
> Thanks for the information - so, to confirm, is it the case then, that when used with SuperPoint, SuperGlue doesn't offer many advantages over traditional algorithms for keypoint matching,...
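For context on the comparison above, the "traditional" keypoint-matching baseline usually means mutual nearest-neighbor matching on descriptor distances, with no learned matcher such as SuperGlue. A minimal pure-Python sketch (toy 2-D descriptors, for illustration only):

```python
# Mutual nearest-neighbor descriptor matching: a pair (i, j) is kept
# only if a[i]'s nearest neighbor in b is b[j] AND vice versa.

def l2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def mutual_nn_matches(desc_a, desc_b):
    """Return index pairs (i, j) that are each other's nearest neighbor."""
    nn_ab = [min(range(len(desc_b)), key=lambda j: l2(da, desc_b[j]))
             for da in desc_a]
    nn_ba = [min(range(len(desc_a)), key=lambda i: l2(db, desc_a[i]))
             for db in desc_b]
    return [(i, j) for i, j in enumerate(nn_ab) if nn_ba[j] == i]

a = [(0.0, 1.0), (1.0, 0.0)]
b = [(1.0, 0.1), (0.1, 1.0)]
print(mutual_nn_matches(a, b))  # [(0, 1), (1, 0)]
```

A learned matcher can outperform this baseline mainly when descriptors are ambiguous or viewpoint change is large; for easy scenes the two often agree.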
> The correspondences look fine. The issue is likely due to an incorrect use of `cv2.findHomography` - [I've used it in the past without any problem](https://github.com/ethz-asl/hfnet/blob/01577845583373470a0cf156a2acd972e9223bc7/hfnet/evaluation/local_descriptors.py#L16). Why are there so...
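To make the `cv2.findHomography` reference concrete, here is a hedged pure-Python sketch of what it computes in the exact four-point case: solving the 8x8 DLT system for H with the bottom-right entry fixed to 1. The real OpenCV call additionally runs robust estimation (e.g. RANSAC) over many noisy correspondences; this toy version assumes four exact matches.

```python
# Direct Linear Transform for a homography from 4 exact correspondences.
# Illustrative stand-in for cv2.findHomography; no robustness to outliers.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a square system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography_from_4_points(src, dst):
    # Each correspondence contributes two linear equations in the 8 unknowns.
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b)
    return [h[0:3], h[3:6], [h[6], h[7], 1.0]]

def apply_h(H, p):
    x, y = p
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

src = [(0, 0), (1, 0), (0, 1), (1, 1)]
dst = [(1, -2), (3, -2), (1, 1), (3, 1)]   # generated by u = 2x + 1, v = 3y - 2
H = homography_from_4_points(src, dst)
print(apply_h(H, (2, 2)))  # approximately (5.0, 4.0)
```

A common "incorrect use" is passing point arrays in the wrong shape or pixel convention, which makes the estimated H map points far from their matches even when the correspondences themselves are correct.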
> No, but I found that the number of points you detected is also very small; I encountered the same situation. I don't know whether this is the cause;...
> Thanks for your work and sharing! I think I found a bug while studying your paper and your code: in utils.losses.py, in the function 'extract_patch_from_points', it seems the patch extracted does...
> Hi, no I think you can just use the pre-trained model to export a pseudo ground truth on your data (Step 2), and then directly jump to the training...
> Having a higher number of labels does not necessarily mean that the ground truth is of higher quality. But if you prefer having more keypoints, you can always decrease the detection...
> One simple solution would be to keep always the same threshold, and not to enforce any topk. You will get more points on larger images, but since the threshold...
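The suggestion above (a fixed score threshold, no top-k cap) can be sketched as follows. The scores and the threshold value are illustrative assumptions, not the project's actual defaults.

```python
# Fixed-threshold keypoint selection: larger images naturally yield more
# detections above the threshold, all of comparable quality, instead of
# being truncated to a fixed top-k budget.

def select_keypoints(scores, threshold=0.015, topk=None):
    """Keep points whose detector score meets the threshold.

    scores: list of (point, score) pairs
    topk:   optional cap; None means no top-k, as suggested above.
    """
    kept = sorted((s for s in scores if s[1] >= threshold),
                  key=lambda s: -s[1])
    return kept if topk is None else kept[:topk]

small = [((1, 2), 0.9), ((3, 4), 0.02), ((5, 6), 0.001)]
large = small + [((7, 8), 0.5), ((9, 9), 0.3)]  # larger image, more detections
print(len(select_keypoints(small)), len(select_keypoints(large)))  # 2 4
```

With a top-k cap instead, both images would be clipped to the same count, discarding valid detections on the larger one.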
> Sorry for the late reply. I think you have to use magic_point_coco_train.yaml to train your MagicPoint; the training process is independent of the data. magic_point_coco_train.yaml is used to train...