Scott Roy
@adonnini can you try running your code from a directory that does not contain the cloned executorch repo?
@helloguo do you have any testing on the accuracy of this method? I ask because I think Sleef guarantees a certain accuracy with their polynomial approximations, and so this might...
I took a closer look. When dim order is enabled (now the default), this model has `executorch.exir.dialects.edge._ops.dim_order_ops._to_dim_order_copy.default` ops that return floats, and this op is not recognized by CoreML (https://github.com/apple/coremltools/blob/main/coremltools/converters/mil/frontend/torch/ops.py),...
> [@digantdesai](https://github.com/digantdesai) [@Gasoonjia](https://github.com/Gasoonjia)
>
> [@shoumikhin](https://github.com/shoumikhin) had to disable dim order https://github.com/pytorch-labs/executorch-examples/pull/23/files when exporting

In terms of why CI did not catch this when dim order was enabled by default,...
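For anyone hitting this in the meantime, here is roughly what disabling dim order at export time looks like. This is a minimal sketch, not the exact change in that PR: I'm assuming the `_skip_dim_order` flag on `EdgeCompileConfig` and the default `CoreMLPartitioner()` constructor, and the toy model/cast are only there to show where `_to_dim_order_copy` would otherwise appear.

```python
import torch
import torch.nn as nn
from executorch.exir import EdgeCompileConfig, to_edge_transform_and_lower
from executorch.backends.apple.coreml.partition import CoreMLPartitioner


class TinyModel(nn.Module):
    def forward(self, x):
        # A dtype cast like this is what lowers to _to_dim_order_copy
        # when dim order is enabled.
        return (x + 1.0).to(torch.float32)


example_inputs = (torch.randn(1, 4, dtype=torch.float16),)
ep = torch.export.export(TinyModel().eval(), example_inputs)

edge = to_edge_transform_and_lower(
    ep,
    partitioner=[CoreMLPartitioner()],
    # Keep the old contiguous-layout behavior so no dim order ops are emitted
    # and the CoreML converter doesn't see an op it can't handle.
    compile_config=EdgeCompileConfig(_skip_dim_order=True),
)
et_program = edge.to_executorch()
```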
> > 1. add support for _to_dim_order_copy to coremltools
>
> [Yep](https://github.com/apple/coremltools/blob/8.1/coremltools/converters/mil/frontend/torch/test/test_torch_export_conversion_api.py#L74-L96)

But that doesn't support the dim order op, so the partitioner will still skip it.
> Ready for another look @metascroy, thanks!

Changes look good! Did you rerun the tests after making the changes?
@WeiMa01 can you provide more details? Maybe @swolchok knows?
> Looks like the imports aren't happy. I wonder if we need a torchao pin bump? Wanna give that a try?

Yeah, you will need to update the torchao pin...
The main concern I have is that shared embedding quantization must be done first, and I'm not sure how to enforce that ordering in torchchat (toy illustration of the constraint below). cc @Jack-Khuu
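To make the ordering constraint concrete, here is a toy sketch. The two passes are hypothetical placeholders, not the actual torchao/torchchat quantizers; my understanding is that the shared-embedding pass needs to see the original tied output projection so it can reuse the embedding's quantized weight.

```python
import torch
import torch.nn as nn


class TinyLM(nn.Module):
    """Toy model with a tied embedding / output projection, llama-style."""

    def __init__(self, vocab=100, dim=16):
        super().__init__()
        self.tok_embeddings = nn.Embedding(vocab, dim)
        self.layer = nn.Linear(dim, dim)
        self.output = nn.Linear(dim, vocab, bias=False)
        self.output.weight = self.tok_embeddings.weight  # shared weight

    def forward(self, idx):
        return self.output(self.layer(self.tok_embeddings(idx)))


def quantize_shared_embedding(model: nn.Module) -> None:
    # Hypothetical placeholder: quantizes tok_embeddings and rewires
    # `output` to reuse that quantized weight.
    ...


def quantize_linears(model: nn.Module) -> None:
    # Hypothetical placeholder: ordinary linear quantization pass over
    # the remaining nn.Linear modules.
    ...


model = TinyLM()
# Order matters: if quantize_linears ran first, `output` would already be
# replaced/quantized on its own, and the shared-embedding pass could no
# longer tie it to the embedding's quantized weight.
quantize_shared_embedding(model)
quantize_linears(model)
```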
> Last little bit: can you add the new quants into the CI https://github.com/pytorch/torchchat/blob/main/.github/workflows/pull.yml
>
> Essentially wherever you see `embedding:wx` in `pull.yml`, just add another call in that same...