
Load Balancer in h5spark

valiantljk opened this issue Apr 20 '16 • 2 comments

When loading multiple files, the file sizes can follow a long-tailed distribution (see the figure below) or an even one. With an even distribution, we don't need to balance the load. But with a long-tailed or otherwise skewed distribution, we do need to design a proper load balancer, which will involve at least two major steps:

  1. Profile the file sizes and represent the distribution in an RDD
  2. Invoke H5Spark's load-balancer option when performing the h5read
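A minimal sketch of the profiling step in plain Python (hypothetical helper names; in H5Spark the resulting list would presumably be parallelized into an RDD, e.g. with `sc.parallelize`):

```python
import os

def profile_sizes(paths):
    """Map each file path to its on-disk size in bytes."""
    return [(p, os.path.getsize(p)) for p in paths]

def is_skewed(sized_files, threshold=2.0):
    """Heuristic: treat the distribution as long-tailed when the
    largest file exceeds `threshold` times the mean file size."""
    sizes = [s for _, s in sized_files]
    return max(sizes) > threshold * (sum(sizes) / len(sizes))
```

A check like `is_skewed` could decide at runtime whether to skip balancing entirely for the even-distribution case; the `threshold` value is an assumption, not anything H5Spark defines.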

The load balancer can be size-oriented or metadata-oriented. For now we want to implement the disk-size-oriented load balancer, in which each executor receives roughly the same number of bytes from disk. In the future we may consider a locality-based or metadata-oriented load balancer.

Take the figure below as a motivating real case:

[Figure: dayabay-muon-pre1 — file size distribution]

valiantljk avatar Apr 20 '16 08:04 valiantljk

Yeah, interesting. I think we could sort the files by size and then progressively assign the next-biggest file to the partition with the smallest current load. This is sort of like the LPT (Longest Processing Time first) algorithm, but using disk size instead of processing time. That looks like it would work for this case, but if there are only a few really big files then we might want a mixture of multi-file reading and single-file chunked reading.
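The greedy scheme described above can be sketched with a min-heap of partition loads (a hypothetical illustration, not H5Spark code; function and parameter names are made up):

```python
import heapq

def lpt_assign(sized_files, num_partitions):
    """Assign (path, size) pairs to partitions, largest file first,
    always placing the next file on the currently lightest partition."""
    # Min-heap of (current_load_in_bytes, partition_index)
    heap = [(0, i) for i in range(num_partitions)]
    heapq.heapify(heap)
    partitions = [[] for _ in range(num_partitions)]
    for path, size in sorted(sized_files, key=lambda f: f[1], reverse=True):
        load, idx = heapq.heappop(heap)
        partitions[idx].append(path)
        heapq.heappush(heap, (load + size, idx))
    return partitions
```

With the heap, each assignment costs O(log p) for p partitions, so the whole pass is O(n log n) dominated by the initial sort.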

eracah avatar Apr 20 '16 18:04 eracah

I like the idea of a mixture of multi-file and single-file reading. That seems to handle more complex file-size distribution patterns.
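One way to realize that mixture (again a hypothetical sketch, not the H5Spark API): split any file larger than a cutoff into fixed-size chunk tasks, then balance over the resulting (path, offset, length) read tasks instead of whole files. The `max_chunk` cutoff is an assumed tuning parameter.

```python
def chunk_tasks(sized_files, max_chunk):
    """Expand (path, size) pairs into (path, offset, length) read tasks,
    splitting files larger than `max_chunk` into multiple chunks.
    Small files become a single whole-file task."""
    tasks = []
    for path, size in sized_files:
        offset = 0
        while offset < size:
            length = min(max_chunk, size - offset)
            tasks.append((path, offset, length))
            offset += length
    return tasks
```

The chunk tasks could then be fed to the same largest-first balancing pass, so a single huge file no longer forces one partition to carry it alone.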

valiantljk avatar Apr 20 '16 19:04 valiantljk