
Figuring out how to reduce memory burden when processing large datasets

Open JacobGlennAyers opened this issue 1 year ago • 1 comment

Currently, PyHa appears to crash when generating automated labels for particularly large datasets. I suspect this is because the automated-label DataFrame grows too large to fit in memory.

Potential fixes:

  1. Convert the floats being stored to a smaller dtype such as float32 (4 bytes) or float16 (2 bytes). By default, Pandas stores floats as float64, which takes 8 bytes per value (see the first sketch below).
  2. Try Python's built-in csv library, or an equivalent approach: create the individual DataFrame for each clip, then append it to a master CSV file. This would hopefully shift the burden from memory onto storage (second sketch below).
  3. Look into parallelization with Dask. This may speed things up, but I am skeptical that it addresses the memory problem (third sketch below).
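A minimal sketch of idea 1, assuming the labels live in an ordinary pandas DataFrame. The column names below are placeholders, not PyHa's actual schema; downcasting float64 to float32 halves the memory used by each float column:

```python
import numpy as np
import pandas as pd

def downcast_floats(df: pd.DataFrame) -> pd.DataFrame:
    """Convert every float64 column to float32, halving its memory footprint."""
    for col in df.select_dtypes(include="float64").columns:
        df[col] = df[col].astype(np.float32)
    return df

# Placeholder columns standing in for PyHa's offset/duration fields.
df = pd.DataFrame({"OFFSET": np.random.rand(1_000_000),
                   "DURATION": np.random.rand(1_000_000)})
print(df.memory_usage(deep=True).sum() / 1e6, "MB before")
df = downcast_floats(df)
print(df.memory_usage(deep=True).sum() / 1e6, "MB after")
```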
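A minimal sketch of idea 2. Rather than the csv module itself, pandas' own `to_csv` in append mode gives the same effect with less code: each clip's small DataFrame is flushed to the master CSV as soon as it exists, so no master DataFrame ever accumulates in memory. `generate_labels_for_clip` is a hypothetical stand-in for whatever PyHa function produces the per-clip labels:

```python
import os
import pandas as pd

def append_to_master_csv(clip_df: pd.DataFrame, master_path: str) -> None:
    """Append one clip's labels; write the header only on the first call."""
    write_header = not os.path.exists(master_path)
    clip_df.to_csv(master_path, mode="a", header=write_header, index=False)

# for clip_path in clip_paths:
#     clip_df = generate_labels_for_clip(clip_path)  # hypothetical PyHa step
#     append_to_master_csv(clip_df, "master_labels.csv")
#     del clip_df  # the per-clip frame can be freed immediately
```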
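A minimal sketch of idea 3 using `dask.delayed`. As noted above, parallelism alone does not shrink memory, so this sketch writes each clip's result to disk inside the task instead of returning the DataFrame. `generate_labels_for_clip` is again a hypothetical stand-in for the real per-clip labeling step:

```python
import os

import dask
import numpy as np
import pandas as pd

def generate_labels_for_clip(clip_path: str) -> pd.DataFrame:
    # Hypothetical stand-in for PyHa's per-clip labeling step.
    return pd.DataFrame({"OFFSET": np.random.rand(100)})

@dask.delayed
def process_clip(clip_path: str, out_dir: str) -> str:
    clip_df = generate_labels_for_clip(clip_path)
    out_path = os.path.join(out_dir, os.path.basename(clip_path) + ".csv")
    clip_df.to_csv(out_path, index=False)  # flush to disk, not back to RAM
    return out_path

# tasks = [process_clip(p, "labels") for p in clip_paths]
# dask.compute(*tasks)  # runs the tasks in parallel
```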

JacobGlennAyers, Oct 28 '22 19:10