cccclai

Results 217 comments of cccclai

Discussed a bit; there are two action items: 1. Convert symint etc. to tensors so they can be consumed by the CoreML IR. 2. Remove the assert in the model definition,...
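
A minimal sketch of what action item 1 could look like, keeping the position as a 0-dim tensor instead of materializing a Python int / SymInt via `.item()`; the module and names below are hypothetical, not the actual ExecuTorch code:

```
import torch

class SliceSketch(torch.nn.Module):
    # Hypothetical example: index with the position tensor directly so the
    # delegate IR only ever sees tensor inputs, never a SymInt.
    def forward(self, x: torch.Tensor, input_pos: torch.Tensor):
        # SymInt path we want to avoid:
        #   pos = input_pos.item()
        #   return x[:, pos]
        # Tensor path: index_select with the 0-dim position reshaped to 1-D.
        return torch.index_select(x, 1, input_pos.reshape(1))
```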

It's probably better to have CoreML consume these assert ops. For llama specifically, are those checks [from the separate branch that is only for batch prefill](https://github.com/pytorch/executorch/blob/main/examples/models/llama2/llama_transformer.py#L485-L500)? If yes, can we...

@angelayi yeah, I meant removing those `.item` calls (i.e. removing the `if self.enable_dynamic_shape` branch) and just using the else branch, [these lines](https://github.com/pytorch/executorch/blob/main/examples/models/llama2/llama_transformer.py#L187-L192). @kimishpatel added the lines under `if self.enable_dynamic_shape` because...
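
A simplified, hypothetical sketch of the two branches being discussed (names and shapes are illustrative, not the actual `llama_transformer.py` code): the dynamic-shape branch materializes a SymInt via `.item()` and guards it with `torch._check`, while the else branch uses plain indexing.

```
import torch

class KVCacheSketch(torch.nn.Module):
    def __init__(self, max_seq_len: int, dim: int, enable_dynamic_shape: bool):
        super().__init__()
        self.max_seq_len = max_seq_len
        self.enable_dynamic_shape = enable_dynamic_shape
        self.register_buffer("k_cache", torch.zeros(1, max_seq_len, dim))

    def forward(self, input_pos: torch.Tensor, k_val: torch.Tensor):
        if self.enable_dynamic_shape:
            # Data-dependent branch: .item() produces a SymInt that has to be
            # guarded with torch._check, which some backends cannot consume.
            start = input_pos[-1].item()
            torch._check_is_size(start)
            torch._check(start + k_val.size(1) <= self.max_seq_len)
            self.k_cache.narrow(1, start, k_val.size(1)).copy_(k_val)
        else:
            # Static branch: plain indexing, no .item() call.
            self.k_cache[:, input_pos] = k_val
        return self.k_cache
```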

> Hi @cccclai, as per previous discussions, versions 2.30 and 2.31 have a regression in Llama. So please stick with QNN 2.28 for internal Llama CI until we sort this...

> > > Hi @cccclai, as per previous discussions, versions 2.30 and 2.31 have a regression in Llama. So please stick with QNN 2.28 for internal Llama CI until we...

As discussed in the meetings, let's only bump the version in open source and error out when users try to run online prepare with versions older than 2.30
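
A rough sketch of the kind of guard this implies, assuming the QNN SDK version string is available at export time; the function and constant names below are made up for illustration and are not the actual Qualcomm backend API:

```
MIN_ONLINE_PREPARE_QNN = (2, 30)

def check_online_prepare_supported(qnn_sdk_version: str) -> None:
    # Hypothetical guard: reject online prepare on QNN SDKs older than 2.30.
    major, minor = (int(v) for v in qnn_sdk_version.split(".")[:2])
    if (major, minor) < MIN_ONLINE_PREPARE_QNN:
        raise RuntimeError(
            f"Online prepare requires QNN SDK >= 2.30, got {qnn_sdk_version}"
        )
```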

Looks like the version upgrade is no longer part of the PR; let me run the CI again. If everything is green, we can merge it.

Lots of CI failures... need to fix them and send a patch.

Can you add this change?

```
--- a/fbcode/executorch/backends/qualcomm/runtime/targets.bzl
+++ b/fbcode/executorch/backends/qualcomm/runtime/targets.bzl
@@ -43,14 +43,18 @@
         [
             "*.cpp",
             "backends/*.cpp",
+            "backends/irbackend/*.cpp",
             "backends/htpbackend/*.cpp",
-        ] + (["backends/htpbackend/x86_64/*.cpp"] if include_aot_qnn_lib else ["backends/htpbackend/aarch64/*.cpp"]),
+        ]...
```