asraa

448 comments by asraa

> I wonder to what extent CSE/canonicalization can give us (some of these) for "free"...

I would assume at a low enough level it would, but only at the point...

> Relevantly, OpenFHE includes APIs for the hoisted and optimized operations as EvalFastRotationPrecompute and EvalFastRotation.

So cool, thanks @eymay - it's great that there's OpenFHE support for this, we can...
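For context, here is a minimal OpenFHE sketch of how those two calls compose (parameter choices are placeholders, not recommendations): one EvalFastRotationPrecompute amortizes the expensive digit decomposition across several EvalFastRotation calls on the same ciphertext.

```cpp
#include "openfhe.h"

using namespace lbcrypto;

int main() {
  // Placeholder CKKS parameters, just enough to demonstrate the API.
  CCParams<CryptoContextCKKSRNS> params;
  params.SetMultiplicativeDepth(1);
  params.SetScalingModSize(50);
  CryptoContext<DCRTPoly> cc = GenCryptoContext(params);
  cc->Enable(PKE);
  cc->Enable(KEYSWITCH);
  cc->Enable(LEVELEDSHE);

  auto keys = cc->KeyGen();
  cc->EvalRotateKeyGen(keys.secretKey, {1, 2, 4});

  std::vector<double> x = {1.0, 2.0, 3.0, 4.0};
  auto ct = cc->Encrypt(keys.publicKey, cc->MakeCKKSPackedPlaintext(x));

  // Hoisting: precompute the digit decomposition once...
  auto digits = cc->EvalFastRotationPrecompute(ct);
  uint32_t m = cc->GetCyclotomicOrder();
  // ...then reuse it across several rotations of the same ciphertext.
  auto rot1 = cc->EvalFastRotation(ct, 1, m, digits);
  auto rot2 = cc->EvalFastRotation(ct, 2, m, digits);
  auto rot4 = cc->EvalFastRotation(ct, 4, m, digits);
  return 0;
}
```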

Should we close this, given that we can do this (at least for the openfhe dialect), and file an issue for lattigo / others?

I agree about the packing. I was drafting this out with Lawrence the other day, and I think we probably want some kind of operation that will eventually lower down to...

> Do arithmetic FHE approaches to ML use quantization?

We did the quantization in those passes for CGGI. Here's one complication with our CGGI quantization approach. These quantize weights to...
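As a point of reference only (the specific scheme used by the CGGI passes is truncated above and may differ), "quantizing weights" typically means something like symmetric per-tensor quantization, mapping each float weight to a small integer plus a single scale:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <utility>
#include <vector>

// Symmetric per-tensor quantization: w ~ q * scale with q in [-127, 127].
// (Illustrative sketch only; not the scheme from the CGGI passes.)
std::pair<std::vector<int8_t>, float> quantizeWeights(
    const std::vector<float>& w) {
  float maxAbs = 0.0f;
  for (float v : w) maxAbs = std::max(maxAbs, std::fabs(v));
  float scale = maxAbs > 0.0f ? maxAbs / 127.0f : 1.0f;
  std::vector<int8_t> q;
  q.reserve(w.size());
  for (float v : w)
    q.push_back(static_cast<int8_t>(std::lround(v / scale)));
  return {q, scale};
}
```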

> I know @j2kun mentioned lowering from linalg.matmul (or linalg.quantized_matmul).

Another weird complication came up: it turns out that the TFL -> TOSA path now lowers `tfl.fully_connected` to `tosa.conv2d`, which now...

A stablehlo fully connected layer looks like

```
%1 = stablehlo.dot_general %0, %cst_12, contracting_dims = [1] x [0], precision = [DEFAULT, DEFAULT] : (tensor, tensor) -> tensor
%2 = stablehlo.add...
```

Looking at the above IR, maybe you can target `linalg.matmul` and we can write rewrite patterns converting some of the other linalg ops we're noticing into linalg.matmuls?
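Roughly, such a rewrite pattern could look like the skeleton below. This is only a sketch, not HEIR code: it matches the 1x1, unit-stride case of `linalg.conv_2d_nhwc_fhwc` (which is a matmul in disguise) and leaves the actual collapse-to-matmul rewrite as comments; accessor names follow the current upstream linalg named-op interface and may differ across MLIR versions.

```cpp
#include "mlir/Dialect/Linalg/IR/Linalg.h"
#include "mlir/IR/BuiltinTypes.h"
#include "mlir/IR/PatternMatch.h"

using namespace mlir;

// Skeleton: recognize a 1x1, unit-stride/dilation linalg.conv_2d_nhwc_fhwc
// so it can be rewritten into linalg.matmul on collapsed shapes.
struct UnitConvToMatmul : OpRewritePattern<linalg::Conv2DNhwcFhwcOp> {
  using OpRewritePattern::OpRewritePattern;

  LogicalResult matchAndRewrite(linalg::Conv2DNhwcFhwcOp op,
                                PatternRewriter &rewriter) const override {
    // Filter layout is FHWC; require a 1x1 spatial window.
    auto filterType = dyn_cast<RankedTensorType>(op.getInputs()[1].getType());
    if (!filterType || filterType.getDimSize(1) != 1 ||
        filterType.getDimSize(2) != 1)
      return failure();
    // Require unit strides and dilations.
    if (!op.getStrides().isSplat() ||
        op.getStrides().getSplatValue<int64_t>() != 1 ||
        !op.getDilations().isSplat() ||
        op.getDilations().getSplatValue<int64_t>() != 1)
      return failure();

    // Rewrite body elided: collapse the NxHxWxC input to (N*H*W)xC and the
    // Fx1x1xC filter to FxC (tensor.collapse_shape), emit the matmul, and
    // expand the result back to NxHxWxF (tensor.expand_shape).
    return failure();
  }
};
```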

> Yeah, I'm starting to think that's a better option, but would linalg.conv_2d_nhwc_fhwc also be transformed to linalg.matmul?

I found a new pathway. Given a TOSA model containing the new...