Sunflower7788

107 comments by Sunflower7788

Hello, are you training on your own dataset? Is the loss decreasing normally during training?

Hello, please check again whether the images are valid and not corrupted. It may also be a mismatch between the installed packages and your environment; please verify your CUDA and Paddle versions.
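For reference, a quick sanity check could look like the sketch below; it prints the installed Paddle/CUDA versions and scans a dataset folder for corrupted images (the `./train_data` path is a placeholder for your own dataset directory):

```python
import glob

import paddle
from PIL import Image

# Check the Paddle installation and the CUDA version it was built against.
print("Paddle version:", paddle.__version__)
print("CUDA version:", paddle.version.cuda())
paddle.utils.run_check()  # reports whether PaddlePaddle can use the GPU

# Scan the training images for corrupted or truncated files.
for path in glob.glob("./train_data/**/*.jpg", recursive=True):
    try:
        with Image.open(path) as img:
            img.verify()  # raises an exception if the file is damaged
    except Exception as err:
        print(f"corrupted image: {path} ({err})")
```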

Hello, the image read mode in the OCR detection model's configuration file is img_mode: BGR, so it is expected that only one of the two modes detects correctly.
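As a side note, OpenCV loads images in BGR channel order, which is what img_mode: BGR refers to; a minimal illustration (the file name `sample.jpg` is a placeholder):

```python
import cv2

# cv2.imread returns an H x W x 3 array with channels in B, G, R order,
# matching img_mode: BGR in the detection config.
img_bgr = cv2.imread("sample.jpg")
assert img_bgr is not None, "image not found or unreadable"

# Convert only if a downstream model expects RGB input instead.
img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)

print(img_bgr.shape, img_bgr[0, 0], img_rgb[0, 0])
```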

They are two different algorithms. SVTRNet has somewhat higher accuracy, but its inference does not support variable-length input, so we recommend using v4.
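If you are using the paddleocr wheel, a minimal sketch of running the default pipeline is shown below; recent releases default to the PP-OCRv4 detection and recognition models, and `doc.jpg` is a placeholder image path:

```python
from paddleocr import PaddleOCR

# Instantiate the default pipeline (recent paddleocr releases use PP-OCRv4
# models by default) with the angle classifier enabled.
ocr = PaddleOCR(use_angle_cls=True, lang="ch")

# Run detection + recognition on a single image.
result = ocr.ocr("doc.jpg", cls=True)
for box, (text, score) in result[0]:
    print(text, score)
```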

> > My recent commit should fix the problem. [81714a3](https://github.com/KaihuaTang/Scene-Graph-Benchmark.pytorch/commit/81714a337ada463f428b2d0a1521730c5c597571)
>
> Hi, I met the same problem, and I used the newest commit [1b955c6](https://github.com/KaihuaTang/Scene-Graph-Benchmark.pytorch/commit/1b955c608aa5c93ce25ee861bb60dea066f45f55#diff-a3e17e5e7b489553693b88e1f4f743564d8fcff78091374ad07df1b23c35aef9)
>
> File "/home/xxx/Project/Scene-Graph-Benchmark.pytorch/maskrcnn_benchmark/data/datasets/visual_genome.py", ...

Thanks. Will these filtered data be released?

Thanks. By the way, do you use the two datasets COCO Caption and VG to train stage 1, or just CC3M, CC12M, SBU, and LAION115M?

Is the code for the CapFilt method released, either in BLIP or BLIP-2?

Hello, sorry, we have not yet observed the issue you mentioned. Please provide the code you ran, and we will follow up and try to reproduce it.