
Standalone version of integer kernels

Open · chart21 opened this issue 1 year ago · 0 comments

Hi, I would like to better understand the tradeoffs between using built-in floating-point kernels, as in CryptGPU, and using Piranha's custom integer kernels. I know the paper includes a comparison for plain matrix multiplications. I want to benchmark the performance and memory overhead of the two approaches on convolutions and matrix multiplications of the sizes that appear in popular neural networks.
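For context, the kind of computation such a benchmark would compare can be sketched on the CPU with NumPy: integer matmul over fixed-point encodings, followed by truncation. This is only an illustrative sketch, not Piranha's code; the 16 fractional bits are an assumed parameter choice.

```python
# Sketch (not Piranha's code): fixed-point matmul over 64-bit integers, as a
# CPU reference for the kind of work a custom integer kernel performs.
# FRAC_BITS = 16 is an illustrative choice, not Piranha's actual setting.
import numpy as np

FRAC_BITS = 16
SCALE = 1 << FRAC_BITS

def encode(x):
    """Embed floats into 64-bit fixed-point integers."""
    return np.round(x * SCALE).astype(np.int64)

def decode(x):
    """Map fixed-point integers back to floats."""
    return x.astype(np.float64) / SCALE

def fixed_matmul(a, b):
    """Integer matmul followed by truncation to restore the scale."""
    prod = a @ b              # result carries 2*FRAC_BITS fractional bits
    return prod >> FRAC_BITS  # arithmetic shift truncates back to FRAC_BITS

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, (8, 8))
y = rng.uniform(-1, 1, (8, 8))

approx = decode(fixed_matmul(encode(x), encode(y)))
exact = x @ y
print(np.max(np.abs(approx - exact)) < 1e-3)  # True: only small truncation error
```

Timing this path against `x @ y` in floating point (and the analogous convolution pair) on the GPU is essentially the comparison I'm after.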

Therefore, I wondered whether there is a standalone version of the integer kernels, or how best to extract them. That way, I could call the kernels from PyTorch and get a fair, platform- and protocol-independent comparison against the built-in implementations.

In the Piranha code, the files conv.cuh and convolution.cuh are templated and have multiple dependencies, so extracting them is tricky. If you have a standalone version to share, that would be superb. Otherwise, I would be grateful for any hints on how to extract the integer kernels and make them accessible to PyTorch or other frameworks.

chart21 · Sep 01 '23 18:09