
Select deals for Publish Storage Deals batch based on deal sizes

LexLuthr opened this issue 2 years ago · 1 comment

Checklist

  • [X] This is not a new feature or an enhancement to the Filecoin protocol. If it is, please open an FIP issue.
  • [X] This is not brainstorming ideas. If you have an idea you'd like to discuss, please open a new discussion on the Boost forum and select the category as Ideas.
  • [X] I have a specific, actionable, and well motivated feature request to propose.

Boost component

  • [X] boost daemon - storage providers
  • [ ] boost client
  • [ ] boost UI
  • [ ] boost data-transfer
  • [ ] boost index-provider
  • [ ] Other

What is the motivation behind this feature request? Is your feature request related to a problem? Please describe.

  • ClientA sends 2 GiB car files
  • ClientB sends 8 GiB car files
  • ClientC sends 32 GiB car files

With deal sizes varying like this, batching based on the number of deals alone can be inefficient for the sealing pipeline. Imagine I am waiting for 8 deals to publish and all 8 deals are 32 GiB, but my sealing pipeline can only handle 3 PC1 jobs at once. It will be overwhelmed by 8 PC1 requests. On the other hand, if I have 8 deals of 2 GiB each, my pipeline is sitting mostly idle while the batch fills up.

Describe the solution you'd like

Allowing PSD to trigger based on total deal size, in addition to the number of deals, would strike a finer balance for deal processing. This would let SPs with smaller sealing pipelines streamline their workload more efficiently.
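As a rough illustration of the request, the batcher could track cumulative deal size alongside deal count and fire the publish when either limit is reached. This is a hypothetical sketch, not Boost's actual implementation; the `Deal` type and the `maxDeals`/`maxBytes` knobs are made-up stand-ins (analogous in spirit to a `MaxDealsPerPublishMsg`-style setting):

```go
package main

import "fmt"

const GiB = int64(1) << 30

// Deal is a minimal stand-in for a queued storage deal awaiting publish.
type Deal struct {
	ID        int
	SizeBytes int64
}

// Batcher accumulates deals and reports when a PublishStorageDeals
// message should go out: either the deal count or the cumulative
// payload size has reached its limit.
type Batcher struct {
	maxDeals    int
	maxBytes    int64
	pending     []Deal
	pendingSize int64
}

func NewBatcher(maxDeals int, maxBytes int64) *Batcher {
	return &Batcher{maxDeals: maxDeals, maxBytes: maxBytes}
}

// Add queues a deal and returns a batch to publish if either limit
// was reached, or nil if we should keep waiting.
func (b *Batcher) Add(d Deal) []Deal {
	b.pending = append(b.pending, d)
	b.pendingSize += d.SizeBytes
	if len(b.pending) >= b.maxDeals || b.pendingSize >= b.maxBytes {
		batch := b.pending
		b.pending = nil
		b.pendingSize = 0
		return batch
	}
	return nil
}

func main() {
	// Size cap of 64 GiB (e.g. room for two 32 GiB PC1s), count cap of 8.
	b := NewBatcher(8, 64*GiB)
	// With 32 GiB deals, the size cap fires on the second deal,
	// long before 8 deals ever accumulate.
	fmt.Println(len(b.Add(Deal{1, 32 * GiB}))) // 0 (still waiting)
	fmt.Println(len(b.Add(Deal{2, 32 * GiB}))) // 2 (size cap hit)
}
```

With 2 GiB deals the same config would instead publish on the 8th deal via the count limit, so small-deal clients are not starved either.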

Describe alternatives you've considered

No response

Additional context

No response

LexLuthr · Mar 20 '23

Something we should think about when this is picked up is if/how we want to handle publishing on overflow.

For example, let's say I can handle 2 PC1's at 32GiB each for a total of 64GiB:

  • If I have 30 GiB worth of deals queued and the next deal is 16 GiB, should all of those deals get published immediately, or should we attempt to "pack" as close to 32 GiB as possible and hold the remainder for the next batch? This adds complexity but could make the pipeline more efficient.
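One possible overflow policy for the case above, sketched under the same assumptions (hypothetical names, not Boost code): when the incoming deal would push the batch past the byte cap, publish the current batch first and carry the incoming deal into the next one, so no published batch overshoots the cap.

```go
package main

import "fmt"

const GiB = int64(1) << 30

// splitOnOverflow groups deal sizes into batches that never exceed
// maxBytes: if adding the next deal would overflow the cap, the
// current batch is flushed and the deal starts a new batch.
// (A single deal larger than maxBytes still gets its own batch.)
func splitOnOverflow(sizes []int64, maxBytes int64) [][]int64 {
	var batches [][]int64
	var cur []int64
	var curSize int64
	for _, s := range sizes {
		if curSize+s > maxBytes && len(cur) > 0 {
			batches = append(batches, cur)
			cur, curSize = nil, 0
		}
		cur = append(cur, s)
		curSize += s
	}
	if len(cur) > 0 {
		batches = append(batches, cur)
	}
	return batches
}

func main() {
	// 30 GiB already queued, next deal is 16 GiB, cap is 32 GiB:
	// the 16 GiB deal would overflow, so the 30 GiB batch is
	// published alone and the 16 GiB deal waits for the next batch.
	batches := splitOnOverflow([]int64{30 * GiB, 16 * GiB}, 32*GiB)
	fmt.Println(len(batches), len(batches[0]), len(batches[1])) // 2 1 1
}
```

The trade-off is exactly the one raised here: this keeps PC1 load bounded by the cap, at the cost of the deferred deal waiting longer to publish.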

jacobheun · Mar 22 '23