cumulative-reasoning
Question about CUDA memory requirements to run code
When I run the file "folio-direct-llm.py" with a single RTX 4090 and LLaMA-7B, I often hit CUDA out-of-memory errors. After adding code to clear the CUDA cache on each iteration of the case loop and monitoring memory usage, I found that CUDA memory still grows cumulatively at a specific point. What is going on? What hardware environment is required to run this project?
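For reference, the per-case cleanup I added looks roughly like this (a minimal sketch; `cases` and `run_case` are placeholders standing in for the actual loop in folio-direct-llm.py):

```python
import gc
import torch

# Hypothetical per-case loop; `cases` and `run_case` stand in for the
# actual iteration in folio-direct-llm.py.
for case in cases:
    result = run_case(case)
    # Drop Python references to per-case tensors before clearing the cache;
    # otherwise empty_cache() cannot release the blocks they still occupy.
    del result
    gc.collect()
    torch.cuda.empty_cache()
```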
The guidance library may have a caching mechanism for repeated queries over the same context. We suggest running this project on an A100-80GB.
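To check whether the growth comes from live references (e.g. cached guidance state) rather than from PyTorch's allocator, you could log allocated vs. reserved memory per case. A minimal sketch using standard PyTorch APIs; `report_cuda_memory` is a hypothetical helper name:

```python
import torch

# Hypothetical diagnostic helper. A steadily rising "allocated" figure means
# live tensors are still being referenced (e.g. by a cache); a rising
# "reserved" figure alone is usually just PyTorch's caching allocator
# holding freed blocks for reuse.
def report_cuda_memory(tag: str) -> None:
    allocated = torch.cuda.memory_allocated() / 1024**2
    reserved = torch.cuda.memory_reserved() / 1024**2
    print(f"[{tag}] allocated: {allocated:.1f} MiB | reserved: {reserved:.1f} MiB")

# Example: call once per case to pinpoint where memory accumulates,
# e.g. report_cuda_memory(f"case {i}")
```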
Thank you for your reply. What are the particular advantages of using the guidance library in this project? Can other libraries be used as alternatives?
You may try alternative libraries, though the prompts may need adjusting for compatibility with different models and libraries, and the results may vary.
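As a rough illustration of such a swap, a plain Hugging Face transformers call could stand in for a guidance program. This is a sketch only: the model path and prompt below are placeholders, not the project's actual prompt template.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model path; substitute the LLaMA-7B checkpoint you use.
model_name = "huggyllama/llama-7b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)

# Placeholder prompt: the guidance templates would need to be flattened
# into plain strings like this, and few-shot formatting re-tuned per model.
prompt = (
    "Premises: ...\nConclusion: ...\n"
    "Is the conclusion True, False, or Unknown?\nAnswer:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=8, do_sample=False)
# Decode only the newly generated tokens after the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```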