SparseInst
(FIXED) Why is the FPS so low on my datasets?
Hi, I'm wondering why the FPS is so low. Is it because there are too many masks, due to my dataset? I have set in my config:

    cfg.MODEL.SPARSE_INST.DECODER.NUM_MASKS = 200
    cfg.MODEL.SPARSE_INST.DECODER.NUM_CLASSES = 1

I used the sparse_inst_r50_giam_fp16.yaml config to train my model. The results are bad, the FPS is very low, and my final model file is large (395,974.225 bytes).
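For reference, here is a minimal, self-contained sketch of those two overrides, written against a bare yacs CfgNode so it runs without the SparseInst project installed. In the real setup these keys already exist after loading sparse_inst_r50_giam_fp16.yaml, and the default values shown (100 masks, 80 classes) are only assumptions based on the usual COCO configuration.

    from yacs.config import CfgNode as CN

    cfg = CN()
    cfg.MODEL = CN()
    cfg.MODEL.SPARSE_INST = CN()
    cfg.MODEL.SPARSE_INST.DECODER = CN()
    cfg.MODEL.SPARSE_INST.DECODER.NUM_MASKS = 100    # assumed default
    cfg.MODEL.SPARSE_INST.DECODER.NUM_CLASSES = 80   # assumed default (COCO)

    # Overrides from the question: single-class dataset, more mask queries.
    cfg.MODEL.SPARSE_INST.DECODER.NUM_MASKS = 200
    cfg.MODEL.SPARSE_INST.DECODER.NUM_CLASSES = 1
    print(cfg)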
Test: since my val dataset has only 2 pictures, I changed test_net.py (durations[100:] -> durations[1:]). I wonder if that is correct? Here is the modified code and the result:
    def test_sparseinst_speed(cfg, fp16=False):
        ......
            if idx % 1 == 0:
                print("process: [{}/{}] fps: {:.3f}".format(
                    idx, len(data_loader), 1 / np.mean(durations[1:])))
            evaluator.process(inputs, [{"instances": output}])
        # evaluate
        results = evaluator.evaluate()
        print_csv_format(results)
        latency = np.mean(durations[1:])
        fps = 1 / latency
        print("speed: {:.4f}s FPS: {:.2f}".format(latency, fps))

Result: speed: 0.2337s, FPS: 4.09
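One way to make the slicing robust, sketched below: instead of hard-coding durations[100:] or durations[1:], skip up to a fixed number of warmup iterations but always keep at least one measurement, so the same code works for a 2-image val set and a large one. This is not the repo's test_net.py, just an illustration; model and data_loader stand for whatever the evaluation script builds internally.

    import time

    import numpy as np
    import torch


    @torch.no_grad()
    def measure_fps(model, data_loader, warmup=10):
        durations = []
        for inputs in data_loader:
            if torch.cuda.is_available():
                torch.cuda.synchronize()   # finish pending GPU work before timing
            start = time.perf_counter()
            model(inputs)
            if torch.cuda.is_available():
                torch.cuda.synchronize()   # wait for this forward pass to complete
            durations.append(time.perf_counter() - start)

        # Skip warmup iterations (CUDA context setup, cuDNN autotuning, caching),
        # but never skip everything when the dataset is tiny.
        skip = min(warmup, len(durations) - 1)
        latency = float(np.mean(durations[skip:]))
        return latency, 1.0 / latency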
Could you solve it, buddy? I think it's so hard.
I found out that the reason is that the first and second images always take a lot of inference time, so the FPS calculation should ignore them. But I don't know how to explain it, or why the code uses durations[100:] instead of durations[2:]. Perhaps the author could help explain? Thanks.
I printed the durations from the test_net.py code; you can see that all the inference times are low (0.0x) except the first and second images (0.29, 0.24):

    [0.2986535410163924, 0.243336574989371, 0.03537970897741616, 0.03437506000045687, 0.03500638995319605, 0.018007997889071703, 0.017738210037350655, 0.03445056697819382, 0.017488110926933587, .................. ]

So, if you calculate FPS including the first and second image durations, you will get a low FPS result.
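To see the effect, you can recompute the FPS from just the durations visible in the log above (only those values, so the numbers will not match the 4.09 reported for the full run; this is purely an illustration of how much the two warmup frames drag the mean down):

    import numpy as np

    durations = [
        0.2986535410163924, 0.243336574989371, 0.03537970897741616,
        0.03437506000045687, 0.03500638995319605, 0.018007997889071703,
        0.017738210037350655, 0.03445056697819382, 0.017488110926933587,
    ]

    fps_all = 1.0 / np.mean(durations)       # includes the two warmup frames
    fps_warm = 1.0 / np.mean(durations[2:])  # skips the first two frames

    print("FPS over all printed frames:  {:.2f}".format(fps_all))
    print("FPS excluding the first two:  {:.2f}".format(fps_warm))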