cccclai
> sym_size oh, you may need this change: https://github.com/pytorch/executorch/pull/2934. In the meantime, [this line](https://github.com/pytorch/executorch/blob/main/examples/models/llama2/builder.py#L278) probably needs to be updated, because there is a bug in the constant prop pass... ``` m...
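For context, the change on that line amounts to running the constant propagation pass over the exported program before lowering. A minimal sketch, assuming the import path from the executorch repo layout at the time (check your checkout):

```python
# Sketch only: run constant propagation on the exported program before
# lowering, as the linked builder.py line does. Import path assumed.
from executorch.exir.passes.constant_prop_pass import constant_prop_pass

# `exported_program` is the torch.export.ExportedProgram being lowered.
exported_program = constant_prop_pass(exported_program)
```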
Also, ideally `qnn_executorch_backend` doesn't need to depend on the whole executorch library, just these targets: https://github.com/pytorch/executorch/blob/main/runtime/backend/targets.bzl#L13-L32
> > Also, ideally `qnn_executorch_backend` doesn't need to depend on the whole executorch library, just these targets: https://github.com/pytorch/executorch/blob/main/runtime/backend/targets.bzl#L13-L32 > > That's great. We will try to refine our dependency....
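As a rough illustration of the suggestion, the backend's Buck target would list only the backend-interface targets from that range instead of the full library. A hypothetical targets.bzl fragment; the dep labels and source name here are illustrative, not the actual QNN build definition:

```python
# Hypothetical Buck (Starlark) fragment; `runtime` comes from
# executorch's runtime_wrapper.bzl. Labels below are illustrative.
runtime.cxx_library(
    name = "qnn_executorch_backend",
    srcs = ["QnnExecuTorchBackend.cpp"],  # placeholder source name
    deps = [
        # Narrow dependency: just the backend interface targets linked
        # above, rather than the whole executorch library.
        "//executorch/runtime/backend:interface",
    ],
    visibility = ["PUBLIC"],
)
```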
> It seems to convert the scalar node of the binary op into a tensor node and allows the node to be quantized. Maybe the following two passes are also...
Thank you for trying it out! Seems like a setup issue - did you follow https://pytorch.org/executorch/0.2/build-run-qualcomm-ai-engine-direct-backend.html to install the required dependencies?
The log seems expected - is there any log that looks confusing?
Oh, that was completed - "Required memory for activation in bytes: [0, 19002368]" means that, in addition to the model's weights, we need 19002368 extra bytes of memory for the activations when...
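A short sketch of where that number comes from, with the field names assumed from the exir program schema: the list holds one planned buffer size per memory id (id 0 is reserved), so 19002368 bytes is roughly an 18 MiB activation arena on top of the weights.

```python
# Sketch, schema field names assumed: inspect the memory-planned
# activation buffer sizes after to_executorch().
exec_prog = edge_prog.to_executorch()
plan = exec_prog.executorch_program.execution_plan[0]
# One entry per memory id; id 0 is reserved, so this prints
# [0, 19002368] for a single ~18 MiB activation arena.
print(plan.non_const_buffer_sizes)
```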
Off the top of my head, we can have a pass to convert all scalars to tensors before lowering to Core ML
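A minimal sketch of such a pass over a torch.fx graph, restricted to a known set of binary ops; the pass name and op list are illustrative, not an existing ExecuTorch pass:

```python
import torch
import torch.fx

# Illustrative set of binary ops whose Python-scalar operands we want
# to materialize as tensor constants so the backend sees tensor inputs.
_BINARY_OPS = {
    torch.ops.aten.add.Tensor,
    torch.ops.aten.sub.Tensor,
    torch.ops.aten.mul.Tensor,
    torch.ops.aten.div.Tensor,
}

def scalars_to_tensors(gm: torch.fx.GraphModule) -> torch.fx.GraphModule:
    """Rewrite int/float args of binary ops into rank-0 tensor nodes."""
    for node in gm.graph.nodes:
        if node.op != "call_function" or node.target not in _BINARY_OPS:
            continue
        new_args = []
        for arg in node.args:
            if isinstance(arg, (int, float)) and not isinstance(arg, bool):
                with gm.graph.inserting_before(node):
                    # Rank-0 tensor constant in place of the Python scalar.
                    arg = gm.graph.call_function(torch.tensor, args=(arg,))
            new_args.append(arg)
        node.args = tuple(new_args)
    gm.graph.lint()
    gm.recompile()
    return gm
```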
> No longer urgent. > > Concretely, the rank-0 "inputs" we encountered were constants that ExecuTorch decided to keep and feed to the backend at runtime. By letting the Core ML backend...
Put up https://github.com/pytorch/executorch/pull/4482, as it should actually be a warning instead of an error