Allow joining over `FlaggedStorage` in parallel

Open torkleyy opened this issue 6 years ago • 16 comments

Problem

We cannot join over a FlaggedStorage as usual because it does not fulfill the DistinctStorage guarantee (since it pushes events when accessed).
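
To make the problem concrete, here is a minimal sketch (hypothetical component and system names; the Transform below is just an illustrative wrapper around an f32, not Amethyst's): the serial join compiles, but the parallel join over the flagged storage is rejected because, at the time of this issue, FlaggedStorage does not satisfy the DistinctStorage bound required by ParJoin.

```rust
// Minimal sketch, assuming specs with the "parallel" feature enabled.
use specs::{prelude::*, FlaggedStorage};

struct Transform(f32);

impl Component for Transform {
    // FlaggedStorage wraps another storage and records modification events.
    type Storage = FlaggedStorage<Self, DenseVecStorage<Self>>;
}

struct MoveSystem;

impl<'a> System<'a> for MoveSystem {
    type SystemData = WriteStorage<'a, Transform>;

    fn run(&mut self, mut transforms: Self::SystemData) {
        // Serial iteration works today.
        for t in (&mut transforms).join() {
            t.0 += 1.0;
        }

        // What we would like to write, but cannot: the flagged storage does
        // not implement DistinctStorage (it pushes events when accessed), so
        // the ParJoin bound is not satisfied and this line does not compile.
        // (&mut transforms).par_join().for_each(|t| t.0 += 1.0);
    }
}
```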

Potential solution

We can push the events before or after the actual access, keeping the event pushing out of any parallel logic. This seems like the most technically sound solution, but it might be very hard to get right, especially with the API we have.

torkleyy avatar Jan 04 '19 16:01 torkleyy

cc @Moxinilian Would you be interested in taking a look?

torkleyy avatar Jan 04 '19 16:01 torkleyy

I think the easiest solution would be to extend the parallel iteration process to call a storage-specific method prior to parallel iteration. This method would pass the joined mask bitset to the storages. We could push all the edit events at that time.

I think the best way to implement this is by extending the Join trait with a pre-process method (a breaking change should be avoidable by defaulting it to a no-op). The FlaggedStorage would implement that method by doing the event push logic (and forwarding the call to the wrapped storage).

The Join trait would therefore get a new safe method, fn prepare(&mut self, mask: Self::Mask), to which the iteration bitset is passed. For joinable tuples, it would be implemented by calling prepare on each of the joinables with the mask. The default implementation would be a no-op. As this is statically determined, LLVM should be able to eliminate those no-op calls, keeping this zero-cost for non-flagged storages.
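
To illustrate, here is a rough, standalone model of that extension (hypothetical code, not the actual specs Join trait, which has more machinery and uses hibitset for masks): the default prepare is a no-op, and tuples forward it to each member with the same mask.

```rust
use std::collections::BTreeSet;

/// Stand-in for the joined entity mask (specs would use a bitset type).
type Mask = BTreeSet<u32>;

trait Join {
    type Type;

    /// New safe hook, called once with the joined mask before iteration.
    /// Defaulting it to a no-op keeps the change non-breaking, and the
    /// empty call can be optimized away for storages that don't need it.
    fn prepare(&mut self, _mask: &Mask) {}

    /// Access one joined element by its index (simplified).
    fn get(&mut self, id: u32) -> Self::Type;
}

/// Tuples forward prepare to each member with the same mask.
impl<A: Join, B: Join> Join for (A, B) {
    type Type = (A::Type, B::Type);

    fn prepare(&mut self, mask: &Mask) {
        self.0.prepare(mask);
        self.1.prepare(mask);
    }

    fn get(&mut self, id: u32) -> Self::Type {
        (self.0.get(id), self.1.get(id))
    }
}
```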

The FlaggedStorage wrapper type would flag entities there using the mask (this could even make flagging more efficient, considering it would be an optimizable bitset AND, maybe?). It would implement Join and ParJoin manually. In the immutable implementation of Join, prepare would only forward to the wrapped storage, but in the mutable one, it would first register the masked entities as dirty and then forward to the wrapped storage.
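
Continuing the simplified model above (still hypothetical; the real FlaggedStorage emits events through an event channel rather than keeping a set), a flagged wrapper could implement the mutable case by recording every masked entity as modified up front, outside of any parallel code, and then behaving like its inner storage:

```rust
struct Flagged<S> {
    inner: S,
    /// Entities flagged as modified this frame.
    modified: Mask,
}

impl<S: Join> Join for Flagged<S> {
    type Type = S::Type;

    // Mutable case: every entity in the joined mask may be written to, so
    // flag them all here, before iteration, then forward to the inner storage.
    // The immutable case would skip the flagging and only forward.
    fn prepare(&mut self, mask: &Mask) {
        self.modified.extend(mask.iter().copied());
        self.inner.prepare(mask);
    }

    fn get(&mut self, id: u32) -> Self::Type {
        self.inner.get(id)
    }
}
```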

This prepare method would be called just before iteration starts, in both the serial and the parallel join implementations.

Notice that this has the drawback of being somewhat inefficient, as the final mutable iteration might skip some of the entities that were already flagged. It seems like the current FlaggedStorage doesn't account for that either, so at least for now I believe it is acceptable.

Do you think this would be a reasonable fix?

Moxinilian avatar Jan 04 '19 22:01 Moxinilian

I realize this might come off as flippant, but serious question: has anyone ever seriously used a parallel join?

Xaeroxe avatar Jan 04 '19 23:01 Xaeroxe

Yes, in the simulation I'm making for my finals.

Moxinilian avatar Jan 04 '19 23:01 Moxinilian

Hi @Xaeroxe, I randomly ran across this issue just now, and yes, I have been using parallel joins fairly extensively. Here are a couple of examples from a side project of mine:

https://github.com/michaelmelanson/spiking-neural-net/blob/master/src/simulation/models/izhikevich.rs#L44 https://github.com/michaelmelanson/spiking-neural-net/blob/master/src/simulation/systems/synaptic_transmission.rs#L22 https://github.com/michaelmelanson/spiking-neural-net/blob/master/src/simulation/learning/stdp.rs#L33
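
For context, a parallel join in a specs system looks roughly like the sketch below. This is a simplified, hypothetical example (made-up Position/Velocity components, not code from the linked project), and it assumes specs is built with the "parallel" feature so that par_join and rayon's ParallelIterator are available through the prelude.

```rust
use specs::prelude::*;

struct Position(f32);
impl Component for Position {
    type Storage = VecStorage<Self>;
}

struct Velocity(f32);
impl Component for Velocity {
    type Storage = VecStorage<Self>;
}

struct Integrate;

impl<'a> System<'a> for Integrate {
    type SystemData = (WriteStorage<'a, Position>, ReadStorage<'a, Velocity>);

    fn run(&mut self, (mut pos, vel): Self::SystemData) {
        // Each joined (Position, Velocity) pair is processed on rayon's
        // thread pool instead of sequentially.
        (&mut pos, &vel).par_join().for_each(|(p, v)| {
            p.0 += v.0;
        });
    }
}
```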

michaelmelanson avatar Jan 10 '19 16:01 michaelmelanson

That's great to hear, thank you!

Xaeroxe avatar Jan 10 '19 16:01 Xaeroxe

No problem at all. I think it's a bit of a weird use case for Specs, but it's worked surprisingly well for me on this project.

michaelmelanson avatar Jan 10 '19 16:01 michaelmelanson

I ran into this issue while trying to parallelize some systems using amethyst's Transform component, which happens to be backed by a FlaggedStorage.

It'd be nice to support this, but it's definitely not required.

VictorKoenders avatar Jan 30 '19 09:01 VictorKoenders

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Feel free to comment if this is still relevant.

stale[bot] avatar Mar 31 '19 10:03 stale[bot]

Nope nope nope

Moxinilian avatar Mar 31 '19 20:03 Moxinilian

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Feel free to comment if this is still relevant.

stale[bot] avatar May 30 '19 21:05 stale[bot]

Another nope. ^.^

OvermindDL1 avatar May 30 '19 21:05 OvermindDL1

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Feel free to comment if this is still relevant.

stale[bot] avatar Jul 29 '19 22:07 stale[bot]

This should probably use a non-stale label or whatever it was called?

OvermindDL1 avatar Jul 29 '19 22:07 OvermindDL1

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Feel free to comment if this is still relevant.

stale[bot] avatar Sep 27 '19 23:09 stale[bot]

Still relevant.

caelunshun avatar Sep 28 '19 03:09 caelunshun