speed and memory
My inference size is 640 x 480, tested on a 3090. When I set if_local to False, the pipeline time is 1.22 s but memory usage climbs to 22 GB. However, when I set if_local to True, the pipeline time is 2 s and memory usage is 5.4 GB. https://github.com/zju3dv/pats/blob/98d2e03a80acb4cc94724117db41c17e09268d79/configs/test_demo.yaml#L9
The results are so different. Any suggestions for solving this problem?
The "if_local" choice try to pre-decide the matched pairs from multiple possible ones, and reduce the space cost. It just can provide a trade-off beween time and space.
But our pre-decision algorithm here is not efficient enough and seems to crop some of useful pairs incorrectly, maybe you can write a more efficient one?
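For anyone who wants to try: the underlying idea is candidate pruning. Below is a minimal sketch of one common approach, hard top-k pruning with `torch.topk`. The function and variable names (`preselect_pairs`, `scores`, `k`) are hypothetical and not taken from the repo's code.

```python
# Hypothetical sketch of candidate-pair pre-selection, NOT the PATS implementation.
# Instead of keeping the full dense (N, M) score matrix between patches,
# keep only the top-k candidates per source patch, bounding memory at
# O(N*k) instead of O(N*M).
import torch

def preselect_pairs(scores: torch.Tensor, k: int = 8):
    """scores: (N, M) similarity between N source and M target patches.
    Returns top-k candidate scores and target indices per source patch."""
    k = min(k, scores.shape[1])
    topk_scores, topk_idx = torch.topk(scores, k, dim=1)  # both (N, k)
    return topk_scores, topk_idx

# Example: 4096 source patches vs 4096 target patches.
scores = torch.randn(4096, 4096)
cand_scores, cand_idx = preselect_pairs(scores, k=8)
# Downstream matching now considers 4096 * 8 pairs instead of 4096**2.
```

A hard top-k like this is fast but can still drop a true match whose score falls just below the cutoff; a softer criterion (e.g. keeping every pair within some margin of the per-row maximum) trades a little memory for better recall.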
I'm having some trouble understanding the code implementation. Could you roughly explain what these functions do? Thanks.
https://github.com/zju3dv/pats/blob/98d2e03a80acb4cc94724117db41c17e09268d79/models/first_layer.py#L122
https://github.com/zju3dv/pats/blob/98d2e03a80acb4cc94724117db41c17e09268d79/models/first_layer.py#L135
https://github.com/zju3dv/pats/blob/98d2e03a80acb4cc94724117db41c17e09268d79/models/first_layer.py#L140