Simon Zhang

15 comments by Simon Zhang

This looks like an issue you should raise with the HAL cluster maintainers. In my experience, it is worth trying `module load cmake`. You can also try setting up a fresh conda environment...

Most likely an out-of-memory error on the CPU side. What kind of data are you running on?

You might want to sparsify the matrix: https://ripser.scikit-tda.org/en/latest/notebooks/Approximate%20Sparse%20Filtrations.html. After forming the COO matrix, you should also try the `--sparse` option.
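As a rough illustration of what "forming the COO matrix" means, here is a minimal sketch that thresholds a dense distance matrix into a `scipy.sparse.coo_matrix`. Note this is a much simpler scheme than the greedy-permutation sparsification in the linked notebook; the threshold value and the toy point cloud are arbitrary choices for the sketch, and whether you keep the upper triangle or the full symmetric matrix may depend on the consumer's expectations.

```python
import numpy as np
from scipy import sparse
from scipy.spatial.distance import pdist, squareform

# Toy point cloud; in practice this is your own data.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
D = squareform(pdist(X))  # dense pairwise distance matrix

# Keep only edges shorter than a threshold (simpler than the notebook's
# greedy-permutation algorithm, but it yields the same kind of COO input).
thresh = 1.5  # arbitrary choice for this sketch
I, J = np.nonzero((D > 0) & (D <= thresh))
keep = I < J  # upper triangle suffices for a symmetric matrix
DSparse = sparse.coo_matrix(
    (D[I[keep], J[keep]], (I[keep], J[keep])), shape=D.shape)
print(DSparse.nnz, "edges kept of", D.shape[0] * (D.shape[0] - 1) // 2)
```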

Please see the following gist that shows how to use the distance matrix sparsification algorithm with ripserplusplus: https://colab.research.google.com/gist/simonzhang00/5b34155b41edc27aa5e47100bda1b2a5/ripserplusplus-distancematrix-sparsification.ipynb This shouldn't be that hard to do yourself ;)

Please look at the gist and notice the line `resultsparse= rpp_py.run("--format sparse", DSparse)`. This is how you can read COO matrices into ripserplusplus. `resultsparse= rpp_py.run("--format sparse --sparse", DSparse)` should run...
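For context, a self-contained sketch of that call pattern: it builds a tiny symmetric distance matrix in COO form and feeds it to `rpp_py.run` with the flags quoted above. The 4-point distance matrix is a made-up example, and the `rpp_py` call is wrapped in a broad try/except because ripserplusplus may not be installed (or no CUDA device may be available) on the machine running this.

```python
import numpy as np
from scipy import sparse

# Tiny symmetric distance matrix (4 points on a line), then its COO form.
D = np.array([[0., 1., 2., 3.],
              [1., 0., 1., 2.],
              [2., 1., 0., 1.],
              [3., 2., 1., 0.]])
I, J = np.nonzero(np.triu(D))  # strictly-upper entries (diagonal is zero)
DSparse = sparse.coo_matrix((D[I, J], (I, J)), shape=D.shape)

try:
    import ripserplusplus as rpp_py
    # "--format sparse" reads the COO matrix; "--sparse" selects the
    # sparse computation path, as suggested above.
    resultsparse = rpp_py.run("--format sparse --sparse", DSparse)
    print(resultsparse)
except Exception:  # ImportError, or no usable GPU
    print("ripserplusplus unavailable; built COO matrix with",
          DSparse.nnz, "entries")
```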

What does that have to do with ripserplusplus? It appears that you are having system trouble with your cluster, which I am not responsible for. Why not just work in...

Go to the menu bar and click Runtime -> Change runtime type -> set Runtime shape to Standard. You shouldn't need Colab Pro unless you need to train for longer...

After enough sparsification (a large enough epsilon), there should rarely be memory issues. Usually you would only run out of RAM after hours of computation. Do not forget to use the --sparse...

Hi, thank you for your interest in using GPUs for PH computation. Our program was designed with one GPU per dataset in mind. So if you wanted to use...
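One common pattern consistent with "one GPU per dataset" is to launch one process per dataset and pin each process to a different GPU via the standard `CUDA_VISIBLE_DEVICES` environment variable. A minimal sketch, assuming hypothetical dataset files; the real worker would load its dataset and call `rpp_py.run` on it, but here each worker just reports which GPU it was pinned to:

```python
import os
import subprocess
import sys

# Hypothetical dataset files, one per GPU.
datasets = ["cloud0.npy", "cloud1.npy"]

procs = []
for gpu_id, _data in enumerate(datasets):
    # Each child process sees only one GPU (device 0 from its point of view).
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu_id))
    # Stand-in for a real worker script that would call rpp_py.run(...):
    procs.append(subprocess.Popen(
        [sys.executable, "-c",
         "import os; print(os.environ['CUDA_VISIBLE_DEVICES'])"],
        env=env, stdout=subprocess.PIPE, text=True))

outputs = [p.communicate()[0].strip() for p in procs]
print(outputs)
```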