
Slow eval performance for .pte models

Open · vmpuri opened this issue on Aug 27, 2024 · 0 comments

🐛 Describe the bug

Eval is very slow for .pte models compared to non-exported models. The opposite should be true, and in fact is what we observe with generate. I suspect this has to do with improper setup of the KV cache or prefill in the eval script.
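To see why eval is hit so much harder than generate: eval scores entire sequences, so it is essentially all prefill, while generate prefills once and then amortizes that cost over many decode steps. A back-of-envelope cost model (every number below is hypothetical, for illustration only):

```python
# Illustrative cost model, not measurements: eval scores whole sequences,
# so prefill cost dominates; generate amortizes one prefill over many decode steps.
prompt_len = 512        # tokens scored per eval example (hypothetical)
t_step = 0.05           # hypothetical seconds per single-token forward pass

sequential_prefill = prompt_len * t_step   # one forward pass per token
parallel_prefill = t_step * 4              # one wide pass, assumed ~4x a single step

print(f"per example: sequential {sequential_prefill:.1f}s vs parallel {parallel_prefill:.1f}s")
# per example: sequential 25.6s vs parallel 0.2s
```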

I landed https://github.com/pytorch/torchchat/pull/1053 to implement sequential prefill so that .pte files could complete the eval script successfully. We might be able to resolve this issue by porting the parallel prefill implementation from ExecuTorch (the two strategies are sketched below).
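For reference, here is the difference between the two prefill strategies in plain PyTorch. This is a minimal sketch: the `model(tokens, input_pos)` calling convention is an assumption mirroring torchchat's gpt-fast-style forward, not the exact eval-script API.

```python
import torch

def sequential_prefill(model, tokens: torch.Tensor, start_pos: int = 0):
    """Feed the prompt one token at a time: len(tokens) forward passes.
    This matches what a .pte exported for a fixed [1, 1] input shape can run,
    but it makes eval (which scores whole sequences) very slow."""
    logits = None
    for i in range(tokens.numel()):
        x = tokens[i].view(1, 1)                      # [batch=1, seq=1]
        input_pos = torch.tensor([start_pos + i])
        logits = model(x, input_pos)                  # fills the KV cache one step at a time
    return logits

def parallel_prefill(model, tokens: torch.Tensor, start_pos: int = 0):
    """Feed the whole prompt in a single forward pass, populating the
    KV cache for every position at once."""
    x = tokens.view(1, -1)                            # [batch=1, seq=len(tokens)]
    input_pos = torch.arange(start_pos, start_pos + tokens.numel())
    return model(x, input_pos)
```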

Versions

Collecting environment information...
PyTorch version: 2.5.0.dev20240716
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: macOS 14.6.1 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.3.9.4)
CMake version: version 3.30.2
Libc version: N/A

Python version: 3.11.9 (v3.11.9:de54cf5be3, Apr 2 2024, 07:12:50) [Clang 13.0.0 (clang-1300.0.29.30)] (64-bit runtime)
Python platform: macOS-14.6.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU: Apple M3 Max

Versions of relevant libraries:
[pip3] executorch==0.4.0a0+9129892
[pip3] flake8==6.0.0
[pip3] flake8-breakpoint==1.1.0
[pip3] flake8-bugbear==23.6.5
[pip3] flake8-comprehensions==3.12.0
[pip3] flake8-plugin-utils==1.3.3
[pip3] flake8-pyi==23.5.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] torch==2.5.0.dev20240716
[pip3] torchao==0.4.0+gite11201a
[pip3] torchaudio==2.4.0.dev20240716
[pip3] torchsr==1.0.4
[pip3] torchvision==0.20.0
