
[Discussion] I made a graph that allows me to estimate about how big a cubes_n.npy file will get (in bytes) when given n cubes.

Open TheoCGaming opened this issue 1 year ago • 4 comments

https://www.desmos.com/calculator/fea4uymhix According to this graph, file sizes will get ridiculously large even by the 12th iteration. Perhaps the storage format should be optimized?

TheoCGaming · Jul 13 '23 04:07

Here is how to optimize the current storage code, but this still doesn't solve the fast growth of the sheer number of cubes:

# pack 8 voxel values into each output byte along the last axis; there are no object arrays, so pickling can be disabled
np.save(cache_path, np.packbits(np.asarray(polycubes, dtype=np.int8), axis=-1), allow_pickle=False)

notes:

  • it uses np.packbits
  • allow_pickle by default is True. But we don't have object arrays, so we don't need pickles.
  • np.load should also pass allow_pickle=False. Its output should then be processed with np.unpackbits() with axis=-1, and you'll also have to undo the padding that packbits adds when the packed axis isn't a multiple of 8: either with the count parameter, or by manually cropping the trailing zero bits (see the sketch after this list)
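
A minimal sketch of the matching load path, assuming the cubes were saved with the line above; last_dim is a placeholder for the original length of the packed axis on the save side, not a name from the repo:

import numpy as np

packed = np.load(cache_path, allow_pickle=False)
# count=last_dim crops the zero bits that np.packbits appended to reach a multiple of 8
polycubes = np.unpackbits(packed, axis=-1, count=last_dim).astype(np.int8)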

VladimirFokow · Jul 14 '23 14:07

About storing the cubes

Note: Here I've ignored

  • lossless compression
  • information theory - no idea whether it could help encode the polycubes more compactly (i.e. store them using less space), or how that would work

For n=16 (the current record):

The theoretical minimum to store 50 billion DIFFERENT THINGS (not even polycubes, just their ids): to uniquely identify each of the 50 billion things we need at least ceil(log2(50e9)) = 36 bits per id. Needed storage space: ~ 50e9 * 36 bits = 225 GB
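
The same back-of-the-envelope arithmetic in code (nothing new, just the numbers above, in the style of the snippets below):

import numpy as np

n_cubes = 50e9  # ~number of polycubes for n=16
n_bits = np.ceil(np.log2(n_cubes))  # 36 bits per unique id
need_bytes = n_cubes * n_bits / 8

need_bytes / 1e9  # gigabytes

~ 225 GB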

For n=20:

Let's say we want to make progress until n=20.

Assuming the number of polycubes grows by a factor of 7 with each n.

n_cubes = 50e9 * 7**4  # approx. number of polycubes for n=20
n_bits = np.log2(n_cubes)  # bits per unique id
need_bytes = n_cubes * n_bits / 8

need_bytes / 1e12  # terabytes

~ 700 TB

For n=30:

n_cubes = 50e9 * 7**14  # approx. number of polycubes for n=30
n_bits = np.log2(n_cubes)  # bits per unique id
need_bytes = n_cubes * n_bits / 8

need_bytes / 1e21  # zettabytes

~ 317 ZB

This number is on the scale of the whole internet.

  • So it's safe to assume that nobody will ever want to store all the cubes for large n.

VladimirFokow · Jul 14 '23 19:07

Not to mention that you have to store all of that in RAM before you actually write it, meaning that if it stays uncompressed, your computer (or the program) will crash once n gets too big. It's not a matter of "if", it's a matter of "when". Compressing it will only make it crash earlier and may make it run slower.

I'm not against the idea of compression, these are just things to consider.

TheoCGaming · Jul 15 '23 01:07

"you have to store all of that in RAM before you actually write it"

You don't actually have to store it all in RAM at the same time.

Polycubes can be processed and counted separately from each other (it will just take longer), and the work can be distributed across multiple machines.
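
As a toy illustration of that kind of split (this is not the algorithm from the comment linked below; canonical_bytes is just a stand-in for whatever canonical byte representation a polycube gets):

import hashlib

def partition_of(canonical_bytes, n_partitions):
    # stable hash, so every machine agrees on which partition a polycube belongs to
    digest = hashlib.sha256(canonical_bytes).digest()
    return int.from_bytes(digest[:8], "big") % n_partitions

# worker k keeps and deduplicates only the polycubes with partition_of(...) == k,
# and the total count for a given n is the sum of the per-worker counts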

See an example algorithm I wrote here: https://github.com/mikepound/opencubes/pull/7#issuecomment-1636539509 (maybe an even better approach exists). That comment also links to the paper that describes useful ideas for reaching n=16.

VladimirFokow · Jul 15 '23 02:07