[lang] Free ndarray memory on the base class
Classes inheriting from `Ndarray` should have their memory released
Issue: #8763
Brief Summary
`ScalarNdarray` implements `__del__()` to let Python release the allocated memory, but the other subclasses of `Ndarray` (such as `VectorNdarray` or `MatrixNdarray`) don't implement the same behaviour. It seems to me the `__del__()` method should be moved up to the `Ndarray` base class. I tested this simple change and it does fix the memory leak.
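A minimal sketch of that change, assuming (as the review summary further below indicates) that the device handle lives in `self.arr` and the current program exposes `delete_ndarray`; this is illustrative, not the exact patch:

```python
# Illustrative sketch: hoist __del__ from ScalarNdarray into the
# Ndarray base class so every subclass releases its device memory.
from taichi.lang import impl  # assumed import, matching taichi's module layout

class Ndarray:
    ...

    def __del__(self):
        # Any subclass (ScalarNdarray, VectorNdarray, MatrixNdarray, ...)
        # now frees its underlying device allocation on garbage collection.
        impl.get_runtime().prog.delete_ndarray(self.arr)
```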
Walkthrough
```python
import taichi as ti
import taichi.types as tt

ti.init(ti.gpu)

while True:
    arr = ti.ndarray(tt.vector(3, ti.f32), (1000, 1000))
    arr = arr.to_numpy()
    # watch GPU memory skyrocket
```
If, instead, we allocate a `ScalarNdarray` (which does implement `__del__`):
```python
import taichi as ti
import taichi.types as tt

ti.init(ti.gpu)

while True:
    arr = ti.ndarray(ti.f32, (1000, 1000, 3))
    arr = arr.to_numpy()
    # watch GPU memory stay basically flat
```
> [!NOTE]
> Move deletion logic to `Ndarray.__del__` so all ndarray variants free device memory; remove the subclass-specific `__del__` from `ScalarNdarray`.

- Memory management:
  - Add `Ndarray.__del__` to delete the underlying `arr` via `rt.prog.delete_ndarray(...)` when the runtime/program exist.
  - Remove `ScalarNdarray.__del__`; destruction is now handled centrally by the base class for all ndarray subclasses.
- Files:
  - Updated `python/taichi/lang/_ndarray.py` to centralize the deletion logic and add safety checks (`impl`/`arr`/runtime/`prog` presence).
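Those presence checks matter because `__del__` can run during interpreter shutdown, when module globals such as `impl` may already have been cleared, or on an ndarray whose construction failed before `arr` was assigned. A sketch of the guarded deletion under those assumptions (the exact guard conditions here are illustrative, not taken from the patch):

```python
def __del__(self):
    # Guard each step: at interpreter teardown `impl` may already be
    # None, and a half-constructed ndarray may have no `arr` attribute.
    if impl is None or getattr(self, "arr", None) is None:
        return
    rt = impl.get_runtime()
    if rt is not None and rt.prog is not None:
        # Only release the allocation while a live program still owns it.
        rt.prog.delete_ndarray(self.arr)
```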