Tanyo Kwok
> I am happy to do that, but probably won't be able to until after next week. Thanks very much, @silvasean. > Out of curiosity can I ask why this...
Thanks, @silvasean @qedawkins. After rebasing and making some modifications to the backend configurations, all the unit tests passed. cc @ZihengJiang @Vremold, we can go on to upstream the resnet18 example :D
> I also learned that @fortianyou is working on the similar thing. Discussion and collaboration are more than welcome. That's exciting! @ZihengJiang @byronyi
It's nice that @silvasean has created the meeting invitation here: https://discourse.llvm.org/t/asia-friendly-developer-hour-mhlo-support/63625. FYI
> > Since this proposal is suggesting MHLO as a unified IR, the ownership of MHLO dialect should be considered as well. Is it going to be MLIR-community owned? >...
@ZihengJiang Congrats!!
`torch_blade.optimize` tries to export the `nn.Module` to TorchScript, but a `torch.autograd.Function` is not supported by TorchScript. Perhaps you could register a TorchScript custom operator for the function.
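A minimal sketch of the limitation described above (the `Clamp` and `Net` names are illustrative, not from BladeDISC): scripting a module that calls `Function.apply` typically fails, while tracing records the forward ops into the graph. A full fix would register a proper TorchScript custom operator (e.g. via `TORCH_LIBRARY` in C++), which is beyond this sketch.

```python
import torch

# Hypothetical autograd.Function with a custom backward.
class Clamp(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x.clamp(min=0.0)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * (x > 0).to(grad_out.dtype)

class Net(torch.nn.Module):
    def forward(self, x):
        return Clamp.apply(x)

x = torch.randn(4)
# torch.jit.script(Net()) generally cannot compile Function.apply.
# Tracing records the forward ops instead; the custom backward is
# lost in the traced graph, which is acceptable for inference export.
traced = torch.jit.trace(Net(), x)
out = traced(x)
```

Tracing sidesteps the scripting limitation only for inference; if the custom backward must survive export, a registered custom operator is the way to go.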
> Yes, DISC did do that. But `DPadToPad` will convert `PadOp` back to `DynamicPadOp`.
> > TorchDynamo is a Python-level JIT compiler designed to make unmodified PyTorch programs faster. It does performance measurements based on TorchBench. We are doing a survey on enabling TorchBlade...
> A docker image installed with pre-built PyTorch (or some stable version)

@zzpmiracle Please try this image: bladedisc/bladedisc:latest-runtime-torch1.12.0-cu113