cccclai
Closing stale PR
Hi @cymbalrush, sorry, we're reverting this PR because it breaks the CI. Do you mind sending the PR out again and addressing the CI failure? Thanks!
It seems like the test is failing:
```
AssertionError: False is not true : ref_output: tensor([[ 0.5914, -0.8622,  0.0214, -0.3349,  0.0285, -0.9833, -1.2256,  0.3349, -0.7197,  0.3278]]) model_output: tensor([[ 0.5914,  0.5914, ...
```
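To make the failure mode concrete, here is a small illustrative sketch. The tensor values are copied from the log above; the allclose-style comparison is an assumption about how the test checks outputs:
```python
import torch

# Values copied from the failure log: the reference output is a normal
# logit vector, while the delegate output appears to repeat its first element.
ref_output = torch.tensor([[0.5914, -0.8622, 0.0214, -0.3349, 0.0285,
                            -0.9833, -1.2256, 0.3349, -0.7197, 0.3278]])
model_output = torch.full((1, 10), 0.5914)

# An allclose-style comparison like the one the test presumably performs;
# this prints False, which is what trips the assertTrue in the log above.
print(torch.allclose(ref_output, model_output, atol=1e-3))
```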
Actually, the failure might be due to this change: https://github.com/pytorch/executorch/pull/10362. Can you help check whether the test still passes with https://github.com/pytorch/pytorch/pull/151436?
> Actually, the failure might be due to this change: #10362. Can you help check whether the test still passes with [pytorch/pytorch#151436](https://github.com/pytorch/pytorch/pull/151436)?

Any update on this change? https://github.com/pytorch/executorch/pull/10362 is currently...
Given that CI is green again, we can resume merging PRs now. Mind rebasing? This seems to be the oldest PR that needs to land.
After you add these ops to the skip list, were you able to export the model, or is it still failing?
```
"aten.linear.default",  # Causes NPE (weight_tensor is None when calling define_tensor)...
```
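In case it helps, here's a minimal sketch of wiring those entries into the partitioner. The `skip_ops_for_coreml_delegation` parameter name and the import path are assumptions based on the partitioner flow; only the first op from the list above is shown:
```python
from executorch.backends.apple.coreml.partition import CoreMLPartitioner

# Ops named in the skip list fall back to CPU instead of being delegated.
partitioner = CoreMLPartitioner(
    skip_ops_for_coreml_delegation=[
        "aten.linear.default",  # Causes NPE (weight_tensor is None when calling define_tensor)
        # ...plus the other ops from the skip list above
    ]
)
```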
@haowhsu-quic @shewu-quic @winskuo-quic @chenweng-quic @DannyYuyang-quic let's see how to help on this
Hey, thank you for trying it out. Can you try [the partitioner flow](https://github.com/pytorch/executorch/blob/main/backends/apple/coreml/test/test_coreml_partitioner.py#L65-L69)? It will partially lower the model, and the failing parts will fall back to CPU.
Try these lines?
```python
from executorch.exir import to_edge
from executorch.backends.apple.coreml.partition import CoreMLPartitioner

edge_program = to_edge(aten_dialect)
edge_program = edge_program.to_backend(
    CoreMLPartitioner(
        skip_ops_for_coreml_delegation=["aten.convolution.default"]
    )
)
et_program = edge_program.to_executorch()  # note: was edge_manager, which is undefined here
with open("models/yolo_executorch_coreml.pte", "wb") as file:
    file.write(et_program.buffer)
```
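And for a quick sanity check after export, a hedged sketch of running the saved `.pte` with the Python runtime API. This assumes a build that includes the Core ML backend, and the input shape is a placeholder:
```python
import torch
from executorch.runtime import Runtime

# Load the exported program and run one forward pass. The input shape below
# is a placeholder; substitute the model's real input.
runtime = Runtime.get()
program = runtime.load_program("models/yolo_executorch_coreml.pte")
method = program.load_method("forward")
outputs = method.execute([torch.randn(1, 3, 640, 640)])
print(outputs)
```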