Removing a node in PinSage
Hey,
Thanks very much for implementing the library. I am experimenting on a large dataset with PinSageDGL and found that, over time, GPU memory creeps up even though the batch size stays the same.
Do you know why this may happen? I am wondering whether it's because the graph grows over time, so the model needs to hold more weights, user identifiers, etc. If so, is there a way to delete an inactive user or item node without a full retraining?
Hi,
I don't think the growing graph is causing the problem, since the graph remains on CPU and only the sampled batches and subgraphs are transferred to GPU.
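To illustrate the point, here is a minimal PyTorch-style sketch (not the library's actual code; the sizes and variable names are made up) of the pattern described above: the full feature storage stays on CPU, and only the rows for a sampled batch are copied to the GPU.

```python
import torch

# Hypothetical sizes for illustration only.
num_nodes, feat_dim, batch_size = 10_000, 128, 256

# The full graph's node features live in CPU memory.
features = torch.randn(num_nodes, feat_dim)

# A sampler picks a small batch of node IDs...
batch_nodes = torch.randint(0, num_nodes, (batch_size,))

# ...and only that slice is transferred to the GPU (if one is available).
device = "cuda" if torch.cuda.is_available() else "cpu"
batch_feats = features[batch_nodes].to(device)
```

So the GPU footprint per step is bounded by the batch, not by the full graph.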
Given that you mentioned using a large dataset, are you using incremental training?
Hello,
Thanks for the quick reply. Yes, I am using incremental training: I load my data in batches with merge_dataset, so my n_users and n_items keep increasing. That's why I am wondering whether the memory growth has to do with loading user identifiers, etc.
Each user and item requires an embedding vector in the model. With incremental training, the growing number of users and items will increase memory usage.
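To put rough numbers on this, here is a back-of-the-envelope calculation (the 64-dimensional embedding size is an assumption, not the library's default): with float32 parameters, an embedding table's memory grows linearly with the number of users and items.

```python
EMB_DIM = 64  # hypothetical embedding dimension

def embedding_mib(n_rows: int, dim: int = EMB_DIM) -> float:
    """Approximate float32 parameter memory of an embedding table, in MiB."""
    return n_rows * dim * 4 / 2**20

# As users and items accumulate during incremental training, the tables grow:
for n in (100_000, 1_000_000, 10_000_000):
    print(f"{n:>10} rows -> {embedding_mib(n):8.1f} MiB")
```

At 10 million rows the table alone is already in the gigabyte range once optimizer state (e.g. Adam's two extra buffers per parameter) is counted as well.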
As those numbers increase, more and more GPU memory is needed. Can we remove inactive users and items and reindex the rest so that the user and item embedding tables stay roughly the same size? Or do you recommend doing a full retraining from time to time?
This library does not support the removal of non-active users and items, as it is designed for scenarios where the entire model fits into memory.
For larger, real-world applications with memory constraints, the "hashing trick" is a common solution: every user and item ID is hashed into a fixed number of buckets, so the embedding tables have a fixed size no matter how many IDs arrive. This is implemented in a new library I am developing.
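A minimal sketch of the hashing trick (the bucket count is an arbitrary choice, and this is not code from either library): raw IDs are hashed into a fixed range, and that range indexes a fixed-size embedding table, so new users never grow the table.

```python
import hashlib

NUM_BUCKETS = 1 << 20  # fixed embedding-table size; a hypothetical choice

def bucket(raw_id: str, num_buckets: int = NUM_BUCKETS) -> int:
    """Stable hash of an arbitrary ID into a fixed range of embedding rows.

    Python's built-in hash() is salted per process, so a cryptographic
    digest is used here to get the same bucket across runs and machines.
    """
    digest = hashlib.md5(raw_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_buckets

# New users map into the same fixed-size table; no reindexing is needed,
# and inactive users simply stop being looked up.
rows = [bucket(f"user_{i}") for i in range(5)]
assert all(0 <= r < NUM_BUCKETS for r in rows)
assert bucket("user_42") == bucket("user_42")  # deterministic across calls
```

The trade-off is that distinct IDs can collide into the same row and share an embedding; the bucket count is chosen to keep the collision rate acceptable for the workload.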
Do you have any papers or blog posts on the "hashing trick" to share? Also, why a new library rather than extending this one? Just curious :)
Just ask Gemini 2.5 Pro, "What is hashing trick in recommender system?"
This library uses TensorFlow, which is hard to refactor and almost dead nowadays. The new library will use PyTorch and include a number of new features.