torchchat
Memory usage is wrong (reporting 0) for non-CUDA commands
🐛 Describe the bug
For example, the following MPS run reports "Memory used: 0.00 GB":
> python3 torchchat.py generate llama3.1 --dso-path exportedModels/llama3.1.so --prompt "Hello my name is"
NumExpr defaulting to 10 threads.
PyTorch version 2.5.0.dev20240710 available.
Warning: checkpoint path ignored because an exported DSO or PTE path specified
Warning: checkpoint path ignored because an exported DSO or PTE path specified
Using device=mps
Loading model...
Cannot load specified DSO to mps. Attempting to load model to CPU instead
Time to load model: 0.20 seconds
-----------------------------------------------------------
Hello my name is Julia and I am a Junior at the University of Washington studying Communications with a focus in Public Relations. I am also a part of the University’s Public Relations Student Society of America (PRSSA), where I currently hold the position of Secretary.
In my free time, I love to stay active whether it’s hiking, running, or trying out new workout classes. I am also passionate about photography and capturing life’s precious moments. Some of my favorite places to visit are the beaches of Half Moon Bay in California and the mountains of Whistler, BC.
This is my blog where I will be sharing my thoughts on PR, advertising, and other marketing related topics. I hope you enjoy reading and will also consider sharing your thoughts with me! Feel free to follow me for more updates on my adventures and musings.
I look forward to connecting with you and learning more about the PR world! – Julia
Hi Julia! I think your blog is a great idea! As a fellow UW student
Time for inference 1: 92.83 sec total, time to first token 4.04 sec with sequential prefill, 199 tokens, 2.14 tokens/sec, 466.49 ms/token
Bandwidth achieved: 34.43 GB/s
*** This first iteration will include cold start effects for dynamic import, hardware caches. ***
========================================
Average tokens/sec: 2.14
Memory used: 0.00 GB
Versions
Collecting environment information...
PyTorch version: 2.5.0.dev20240710
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: macOS 14.5 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.1.0.2.5)
CMake version: version 3.30.1
Libc version: N/A

Python version: 3.11.9 (v3.11.9:de54cf5be3, Apr 2 2024, 07:12:50) [Clang 13.0.0 (clang-1300.0.29.30)] (64-bit runtime)
Python platform: macOS-14.5-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU: Apple M1 Max

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.5.0.dev20240710
[pip3] torchao==0.3.1
[conda] Could not collect
This field seems to be populated from torch.cuda.max_memory_reserved(), so it is only populated when using CUDA:
https://github.com/pytorch/torchchat/blob/a3bf37d0dbac56c8c747e0610c1e2403cd386dc6/generate.py#L830
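A minimal device-aware replacement could dispatch on the device string instead of always reading the CUDA counter. This is only a sketch of the idea, not torchchat's implementation: the function name `get_memory_used_gb` is hypothetical, and note that torch.mps exposes a current (not peak) allocation figure, so the MPS number is not an exact equivalent of torch.cuda.max_memory_reserved().

```python
import resource
import sys

def get_memory_used_gb(device: str) -> float:
    """Best-effort memory figure in GB for the reported 'Memory used' line.

    Hypothetical helper sketching option (b) from this issue.
    """
    try:
        import torch
    except ImportError:
        torch = None
    if torch is not None and device == "cuda" and torch.cuda.is_available():
        # What generate.py reports today: peak reserved CUDA memory
        return torch.cuda.max_memory_reserved() / 1e9
    if torch is not None and device == "mps" and torch.backends.mps.is_available():
        # MPS exposes current (not peak) driver-allocated bytes
        return torch.mps.driver_allocated_memory() / 1e9
    # CPU fallback: peak resident set size of this process
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    # ru_maxrss is bytes on macOS, kilobytes on Linux
    return peak / 1e9 if sys.platform == "darwin" else peak * 1024 / 1e9
```

With a fallback like this, a CPU or MPS run would at least report a nonzero, meaningful number instead of 0.00 GB.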
I am using the PyTorch 2.5 ROCm nightly with Python 3.10, installed via: pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm6.1
(py_3.10) root@ae03f6964c19:~/torchchat# python3 torchchat.py generate llama3.1 --prompt "write me a story about a boy and his bear"
NumExpr defaulting to 12 threads.
PyTorch version 2.5.0.dev20240801+rocm6.1 available.
Using device=cuda AMD Radeon VII
Loading model...
Memory access fault by GPU node-1 (Agent handle: 0x8d96110) on address (nil). Reason: Page not present or supervisor privilege.
Aborted (core dumped)
@g2david Seems like either GPU access is not enabled on your machine or memory is being over-utilized.
One other thing to check is whether the base README instructions get you the same error (to eliminate a dependency bug). Can you spin up a new GitHub issue?
Marking this as actionable with either of the following options:
a) Show the field only when CUDA is available
b) Populate the field with the non-CUDA equivalents