Stella Laurenzo
Yeah, I was figuring it needed to be optional at some point, which is why I have the C guard -- it is just unconditionally true now and could use...
No, I was just going to dlopen the system one at runtime. I haven't had my coffee yet either, so I may be missing something, but this seems all set up...
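For context, a minimal sketch of what "dlopen the system one at runtime" could look like; the library name and the resolved symbol here are illustrative assumptions, not the actual IREE code:

```c
// Hypothetical sketch: load the system CUDA driver at runtime instead of
// linking against it at build time. Names are illustrative assumptions.
#include <dlfcn.h>
#include <stdio.h>

int main(void) {
  // Open the installed driver library lazily; fail cleanly if it is absent.
  void* handle = dlopen("libcuda.so.1", RTLD_LAZY | RTLD_LOCAL);
  if (!handle) {
    fprintf(stderr, "CUDA driver not available: %s\n", dlerror());
    return 1;
  }
  // Resolve a single entry point through dlsym and call it.
  typedef int (*cuInit_fn)(unsigned int);
  cuInit_fn cu_init = (cuInit_fn)dlsym(handle, "cuInit");
  if (cu_init) {
    cu_init(0);  // Initialize the driver API.
  }
  dlclose(handle);
  return 0;
}
```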
Sorry - I could have mentioned that. I was thinking the better way may be to just let external_project pull it in. The pinning is more so that builds don't...
I was debating that point myself: convinced. I'll keep the CUDA configuration goo on this patch as it is an improvement over what we had before, which I couldn't get...
We would just add the handful of enums/values inline with the function declarations that use them. Given the PyTorch precedent, this is pretty straightforward, and the simplicity of this library...
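As a rough sketch of that shape: the enum names and values below follow the public CUDA driver API, but the exact subset that would be declared is an assumption for illustration.

```c
// Sketch: declare only the enums/values actually needed, inline with the
// function declarations that use them, instead of pulling in the full
// vendor header. The subset shown here is illustrative.
typedef int CUdevice;

typedef enum {
  CUDA_SUCCESS = 0,
  CUDA_ERROR_NOT_INITIALIZED = 3,
} CUresult;

typedef enum {
  CU_DEVICE_ATTRIBUTE_MAX_THREADS_PER_BLOCK = 1,
} CUdevice_attribute;

// Function declarations that consume the enums declared above.
CUresult cuInit(unsigned int flags);
CUresult cuDeviceGetAttribute(int* value, CUdevice_attribute attrib, CUdevice dev);
```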
> Hey @stellaraccident, the final piece of #8971 is to enable CUDA by default.
>
> Given this change, is it still relevant, since this is changing the behavior of CUDA...
Does this only repro on the M2? I don't think that anyone on the IREE core team has that hardware.
Yes, I have a patch for this but it needs reviving.
Done as part of the DeviceArray work.