Gregory Comer

25 issues authored by Gregory Comer

Summary: Update XNNPACK library version. Test Plan: Combined diff CI is clean: D61586079 (all changes, has to be split out for export). Differential Revision: D61822610

fb-exported
ciflow/binaries
ciflow/trunk
ciflow/periodic

This change extends the xnn_define_blockwise_quantized_tensor_value API to accept flags controlling the block scale format, though only bf16 is currently supported. The intent of this change is to allow for other...

### 🐛 Describe the bug

The execute function in module.cpp will cause silent memory corruption if too many input EValues are passed.

https://github.com/pytorch/executorch/blob/976fe484c811277252756a39a9b6c76fd8c6e3cb/extension/module/module.cpp#L233-L244

If input_values.size() > inputs.size(), the indexing operation...

good first issue
module: extension

Summary: Currently, ExecuTorch will serialize any parameters in the exported program, regardless of whether they are actually used. Exporting with strict=True will remove unused parameters, but strict=False will not. Export...
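A minimal sketch of the contrast described above, assuming a toy module with a parameter that `forward` never reads; the module, shapes, and the final print are illustrative and not taken from the diff:

```python
import torch
from torch.export import export


class ToyModel(torch.nn.Module):
    """Hypothetical module with one used and one unused parameter."""

    def __init__(self):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(4, 4))
        self.unused = torch.nn.Parameter(torch.randn(1024, 1024))  # never read in forward

    def forward(self, x):
        return x @ self.weight


example_inputs = (torch.randn(2, 4),)

# Strict export traces with TorchDynamo and, per the summary, drops parameters
# that forward never touches.
strict_ep = export(ToyModel(), example_inputs, strict=True)

# Non-strict export keeps every registered parameter in the exported program,
# so ExecuTorch currently serializes `unused` as well.
nonstrict_ep = export(ToyModel(), example_inputs, strict=False)

print(len(strict_ep.state_dict), len(nonstrict_ep.state_dict))
```

The exact counts depend on the torch version; the point is only the strict vs. non-strict difference that the summary describes.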

CLA Signed
fb-exported
module: exir
release notes: exir

### Summary

Add support for delegating view_copy in the XNNPACK delegate via the XNN static_reshape operator. This includes support for up to one dynamic dimension. It also includes conditional support...
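As a hedged illustration of the lowering path this enables, the sketch below exports a module whose forward is just a reshape and hands it to the XNNPACK partitioner; the module, shapes, and exact import paths are assumptions rather than code from the PR:

```python
import torch
from torch.export import export

# Import paths assumed from the current ExecuTorch source layout.
from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner
from executorch.exir import to_edge_transform_and_lower


class Reshape(torch.nn.Module):
    def forward(self, x):
        # Exports as view_copy in the Edge dialect; with this change the XNNPACK
        # delegate can map it onto the XNN static_reshape operator.
        return x.view(x.shape[0], -1)


ep = export(Reshape(), (torch.randn(2, 3, 4),))

# Partition and lower. view_copy nodes that satisfy the delegate's constraints
# (e.g. at most one dynamic dimension, per the summary) end up inside the
# XNNPACK delegate payload instead of running as portable ops.
edge = to_edge_transform_and_lower(ep, partitioner=[XnnpackPartitioner()])
executorch_program = edge.to_executorch()
```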

CLA Signed
fb-exported
release notes: xnnpack
stale
meta-exported

Differential Revision: D86588171

CLA Signed
fb-exported
release notes: exir
meta-exported

### Summary

Add asserts to ensure that the backend option key field is large enough to contain the xnnpack workspace sharing key in test_workspace_sharing.cpp.

CLA Signed
release notes: none

The v1.0.1 release will be cut from the "[release/1.0](https://github.com/pytorch/executorch/tree/release/1.0)" branch for critical fixes to the [v1.0.0](https://github.com/pytorch/executorch/releases/tag/v1.0.0) release.

Intended Release Date: 11/20/2025
Cherry-Pick Submission Cutoff: 11/17/2025

This issue is for tracking...

release tracker

Summary: When non-memory-planned outputs are unset, method execution will crash when writing to the output tensor. This manifests as a native crash with a deep stack trace that is both...

CLA Signed
fb-exported
release notes: none
stale

Clean up _clone_dim_order ops in the graph that don't change the dim order. cc @JacobSzwejbka @angelayi
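A minimal sketch of this kind of cleanup as a generic torch.fx pass, assuming the op can be matched by name; the `_is_noop_clone_dim_order` predicate is a hypothetical placeholder, not the actual ExecuTorch pass:

```python
import torch
from torch.fx import GraphModule, Node


def _is_noop_clone_dim_order(node: Node) -> bool:
    """Assumed helper: true when the clone requests the dim order the input
    already has. The real pass would consult ExecuTorch's dim-order metadata;
    this placeholder only matches the default (contiguous) order."""
    requested = node.kwargs.get("dim_order")
    arg = node.args[0] if node.args and isinstance(node.args[0], Node) else None
    val = arg.meta.get("val") if arg is not None else None
    if requested is None or val is None:
        return False
    return list(requested) == list(range(val.dim()))


def remove_noop_clone_dim_order(gm: GraphModule) -> GraphModule:
    """Drop _clone_dim_order calls that do not change the dim order."""
    for node in list(gm.graph.nodes):
        if node.op != "call_function":
            continue
        # Hypothetical match on the op name; the real pass would compare the
        # node's target against the registered _clone_dim_order overload.
        if "_clone_dim_order" not in str(node.target):
            continue
        if _is_noop_clone_dim_order(node):
            # Rewire consumers to the clone's input, then delete the node.
            node.replace_all_uses_with(node.args[0])
            gm.graph.erase_node(node)
    gm.graph.lint()
    gm.recompile()
    return gm
```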

module: exir