cccclai
Closing stale PR
> Hmm @cymbalrush @guangy10 something seems off here. Can you share the etdump here? Profiling has overhead but it shouldn't be that much. For the more prominent overhead (small models...
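For reference, a minimal sketch of pulling the profiling numbers out of an etdump with the devtools `Inspector` (file names here are placeholders, and the module path may differ across ExecuTorch versions):

```python
# Hedged sketch: load an ETDump and dump per-event latency data.
# "model.etdump" and "etrecord.bin" are placeholder file names.
from executorch.devtools import Inspector

inspector = Inspector(
    etdump_path="model.etdump",
    etrecord="etrecord.bin",  # optional; links runtime events back to source ops
)
# Print the profiling table so the overhead can be compared per operator.
inspector.print_data_tabular()
```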
Should we close this one?
@YifanShenSZ any chance we know the ETA for this feature?
From this error log:

```
File "/Volumes/Models/LLM/Framework/CoreMLTools-Dev_ExecuTorch-0.2/envs/llama-py310/lib/python3.10/site-packages/executorch/exir/backend/backend_api.py", line 113, in _
    copied_edge_program = copy.deepcopy(edge_program)
```

Looks like the program is not copyable. Can you try running `copy.deepcopy(exported_program)` before running `to_backend`...
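A minimal way to run that check in isolation (the model and input shape below are illustrative, not from the original report):

```python
# Hedged sketch: verify the exported program is deep-copyable before
# calling to_backend, which deepcopies the edge program internally.
import copy

import torch
from executorch.exir import to_edge


class TinyModel(torch.nn.Module):
    def forward(self, x):
        return x + 1


exported_program = torch.export.export(TinyModel(), (torch.randn(2),))

# If the program holds non-copyable state, this raises the same error
# that to_backend hits inside backend_api.py.
copy.deepcopy(exported_program)

edge_program_manager = to_edge(exported_program)
```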
I was able to repro, looking into it.

```
Traceback (most recent call last):
  File "/Users/chenlai/coreml_debug/executorch/backends/apple/coreml/test/test_coreml_partitioner.py", line 118, in <module>
    test_runner.test_buffer()
  File "/Users/chenlai/coreml_debug/executorch/backends/apple/coreml/test/test_coreml_partitioner.py", line 102, in test_buffer
    delegated_program_manager = edge_program_manager.to_backend(CoreMLPartitioner())
  File "/opt/homebrew/anaconda3/envs/coreml_debug/lib/python3.10/site-packages/executorch/exir/program/_program.py", line...
```
After checking, some changes need to be added to the delegate infra to support consuming the mutable buffer. This is the first case where a backend consumes the...
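For context, this is the kind of module that exercises the path, roughly what the failing `test_buffer` above does (this sketch is illustrative, not the actual test):

```python
# Hedged sketch: a module whose forward mutates a registered buffer in place.
# Export records this as a buffer mutation, which the delegate infra must
# learn to hand over when a backend consumes the buffer.
import torch


class BufferModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.register_buffer("state", torch.zeros(3))

    def forward(self, x):
        self.state.add_(x)  # in-place update of the mutable buffer
        return self.state
```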
> @cccclai can you describe the nature of the change?

We'd need to add some changes in `lowered_backend_module.py` and `backend_api.py` to support backends consuming in-place ops. Since...
There is some progress from @angelayi on this: https://github.com/pytorch/executorch/pull/4566 is the first PR, and there will be one more PR to resolve it.