
Question about CUDA memory requirements to run the code

Open frostfox661 opened this issue 1 year ago • 3 comments

When I run the file "folio-direct-llm.py" on a single 4090 with llama-7b, I often hit CUDA out-of-memory errors. After adding code to clear the CUDA cache in each case loop and monitoring memory usage, I found that CUDA memory still grows cumulatively at a specific point. What is going on? What hardware environment is required to run this project?
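For reference, the per-case cleanup and monitoring I added looks roughly like this (a minimal sketch; `run_with_memory_tracking`, `run_case`, and the case iterable are placeholders standing in for the script's actual loop):

```python
import gc
from typing import Callable, Iterable, List

import torch

def run_with_memory_tracking(cases: Iterable, run_case: Callable) -> List:
    """Run each case, clear cached GPU memory afterwards, and log usage."""
    results = []
    for i, case in enumerate(cases):
        results.append(run_case(case))
        gc.collect()               # collect unreachable Python objects still holding tensors
        torch.cuda.empty_cache()   # return unused cached blocks to the driver
        allocated = torch.cuda.memory_allocated() / 2**30
        reserved = torch.cuda.memory_reserved() / 2**30
        print(f"case {i}: allocated {allocated:.2f} GiB, reserved {reserved:.2f} GiB")
    return results
```

Even with this, the allocated memory keeps climbing after certain cases.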

frostfox661 avatar Oct 30 '24 01:10 frostfox661

The guidance library may cache results for multiple queries over the same context, which could explain why memory accumulates across cases. We suggest running it on an A100-80GB.
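If an A100 is not available, one thing you could try on a 24 GB card is loading the checkpoint in half precision before handing it to guidance. A minimal sketch with transformers (the checkpoint path is a placeholder, and `device_map="auto"` assumes accelerate is installed):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "path/to/llama-7b"  # placeholder: local path to the llama-7b checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    torch_dtype=torch.float16,  # fp16 weights take roughly 13 GiB for a 7B model
    device_map="auto",          # requires accelerate; places layers on the available GPU
)
```

If the guidance version you have installed accepts a preloaded model and tokenizer in its Transformers wrapper, you can pass these in instead of a model name, but whether that avoids the accumulation you observe depends on guidance's internal caching.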

yifanzhang-pro avatar Oct 30 '24 01:10 yifanzhang-pro

Thank you for your reply. What are the specific advantages of using the guidance library in this project? Can other libraries be used instead?

frostfox661 avatar Oct 30 '24 07:10 frostfox661

You may try using alternative libraries, though the prompt may need adjustment for compatibility with different models and libraries, and the results may vary.
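For instance, a plain transformers version of the direct-prompting query might look like the sketch below. The prompt template and answer parsing here are placeholders, not the repo's actual ones; the guidance templates also encode how the answer option is selected, so that logic would need to be ported as well.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "path/to/llama-7b"  # placeholder path, as above

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH, torch_dtype=torch.float16, device_map="auto"
)

def direct_answer(premises: str, conclusion: str) -> str:
    # Placeholder prompt; the actual template from folio-direct-llm.py would need
    # to be copied here for comparable results.
    prompt = (
        f"Premises: {premises}\n"
        f"Conclusion: {conclusion}\n"
        "Is the conclusion True, False, or Unknown? Answer:"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        output_ids = model.generate(**inputs, max_new_tokens=8, do_sample=False)
    new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True).strip()
```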

yifanzhang-pro avatar Oct 30 '24 18:10 yifanzhang-pro