DKM
Can the inference speed be optimized?
Great job! I run inference on a 3090, and matching a pair of images takes about 0.7 seconds. Besides reducing the image resolution, what other operations can be used to accelerate inference?
You can try fp16; see my code in roma for some examples.
I haven't gotten torch.compile to work, but it could improve things.
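As a minimal sketch of the fp16 suggestion, you can wrap the matcher's forward pass in `torch.autocast` so most GPU ops run in half precision. The `DKMv3_outdoor` constructor and `match()` call below are assumptions about the DKM API and the image paths are placeholders; adjust them to your installed version.

```python
# Hedged sketch: fp16 inference via autocast (API names assumed, verify
# against your DKM version before use).
import torch
from dkm import DKMv3_outdoor  # assumed entry point

device = "cuda"
model = DKMv3_outdoor(device=device)
model.eval()

with torch.inference_mode():
    # autocast casts eligible ops to fp16 on the GPU, which usually gives a
    # noticeable speedup on a 3090 with little loss in matching accuracy
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        warp, certainty = model.match("im_A.jpg", "im_B.jpg", device=device)
```

Whether the small accuracy drop from fp16 is acceptable depends on your downstream use (e.g. pose estimation vs. dense warping), so it is worth checking results on a few pairs before switching it on everywhere.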