DeepCrack
Question about F-measure result.
Hi, I have a question about the code you provided: the metrics shown during training and testing are 'accuracy', 'positive sample accuracy', and 'negative sample accuracy', and these metrics also decide which checkpoint gets saved, is that right? But the paper tells us your model achieves an F-measure over 0.87 on the three test datasets. How can the F-measure be calculated from those two or three metrics? Or is the code you provided not complete? Just curious.
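For context, the F-measure reported in the paper is derived from pixel-wise precision and recall, not from the accuracy metrics logged by the training code. A minimal sketch (the helper name `f_measure` and the thresholding are assumptions, not from the repo):

```python
import numpy as np

def f_measure(pred, gt, threshold=0.5):
    """Pixel-wise F-measure between a predicted crack probability map
    and a binary ground-truth mask (hypothetical helper, not from the repo)."""
    pred_bin = pred >= threshold       # binarize the probability map
    gt_bin = gt.astype(bool)
    tp = np.logical_and(pred_bin, gt_bin).sum()
    fp = np.logical_and(pred_bin, ~gt_bin).sum()
    fn = np.logical_and(~pred_bin, gt_bin).sum()
    precision = tp / (tp + fp) if tp + fp > 0 else 0.0
    recall = tp / (tp + fn) if tp + fn > 0 else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy example: one true-positive pixel, one false-positive pixel
pred = np.array([[0.9, 0.1], [0.8, 0.2]])
gt = np.array([[1, 0], [0, 0]])
print(f_measure(pred, gt))  # precision 0.5, recall 1.0 -> F = 2/3
```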
Hey, did you manage to get this code running? Could I ask you a few questions?
Go ahead. (For DeepCrack's code, I only used the model part.)
Do you know where to find the code for computing ODS, OIS, and AP?
@lian666-ch The precision-recall results can be calculated with the Berkeley edge detection benchmark. The code can be found at https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/bsds/
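For reference, ODS and OIS are both best-F-measure summaries over a sweep of binarization thresholds: ODS fixes one threshold for the whole dataset, while OIS picks the best threshold per image. A simplified pixel-wise sketch (the BSDS benchmark additionally matches boundaries with a small spatial tolerance, which this omits; the function name `ods_ois` is an assumption):

```python
import numpy as np

def ods_ois(preds, gts, thresholds=np.linspace(0.01, 0.99, 99)):
    """Simplified ODS/OIS F-measures for a list of probability maps.
    ODS: best single threshold over the dataset; OIS: best per image."""
    def f1(pred_bin, gt_bin):
        tp = np.logical_and(pred_bin, gt_bin).sum()
        fp = np.logical_and(pred_bin, ~gt_bin).sum()
        fn = np.logical_and(~pred_bin, gt_bin).sum()
        p = tp / (tp + fp) if tp + fp > 0 else 0.0
        r = tp / (tp + fn) if tp + fn > 0 else 0.0
        return 2 * p * r / (p + r) if p + r > 0 else 0.0

    # F-measure of every image at every threshold: shape (n_images, n_thresholds)
    scores = np.array([[f1(pred >= t, gt.astype(bool)) for t in thresholds]
                       for pred, gt in zip(preds, gts)])
    ods = scores.mean(axis=0).max()   # average over images, then best threshold
    ois = scores.max(axis=1).mean()   # best threshold per image, then average
    return ods, ois
```

By construction OIS is always at least as large as ODS, since the per-image optimum dominates any shared threshold.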