Allow joining over `FlaggedStorage` in parallel
Problem
We cannot join over a FlaggedStorage in parallel the way we can with other storages, because it does not fulfill the DistinctStorage guarantee (it pushes events when accessed).
Potential solution
We can push the events before or after the actual access, keeping the event pushing out of any parallel logic. This seems to be the most technically reasonable solution, but it might be very hard to get right, especially with the API we have.
cc @Moxinilian Would you be interested in taking a look?
I think the easiest solution would be to extend the parallel iteration process to call a storage-specific method before iteration begins. This method would pass the joined mask bitset to the storages, and we could push all the edit events at that time.
I think the best way to implement this is by extending the Join trait with a pre-process method (a breaking change should be avoidable by defaulting it to a no-op). The FlaggedStorage would implement that method by doing the event push logic (and forwarding the call to the wrapped storage).
The Join trait would therefore get a new safe method, fn prepare(&mut self, mask: Self::Mask), which receives the iteration bitset. For joinable tuples it would be implemented by calling prepare on each joinable with the mask, and the default implementation would be a no-op. As this is statically determined, LLVM should be able to optimize the no-op calls away, ensuring this stays zero-cost for non-flagged storages.
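Roughly, something like this minimal sketch (the real Join trait has more associated items and unsafe methods; the sketch uses a generic BitSetLike bound instead of the associated Mask type just to keep it short):

```rust
// Minimal model of the proposed hook, not the actual specs trait.
use hibitset::BitSetLike;

trait Join {
    /// Called once with the final joined mask before iteration begins.
    /// Defaults to a no-op, so the change is non-breaking and existing
    /// storages pay nothing for it.
    fn prepare<M: BitSetLike>(&mut self, _mask: &M) {}
}

// Joinable tuples just forward `prepare` to every member with the same mask.
impl<A: Join, B: Join> Join for (A, B) {
    fn prepare<M: BitSetLike>(&mut self, mask: &M) {
        self.0.prepare(mask);
        self.1.prepare(mask);
    }
}
```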
The FlaggedStorage wrapper type would do the flagging of entities there using the mask (this could even make flagging more efficient, considering it would be an optimizable bitset AND, maybe?). It would implement Join and ParJoin manually. In the immutable implementation of Join, prepare would only forward the call to the wrapped storage, but in the mutable one it would first register the masked entities as dirty, then forward the call.
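For the mutable case, the flagging part could look roughly like this (a sketch only: FlaggedJoinMut, inner and modified are made-up names, not the actual specs internals, and it reuses the simplified Join model from above):

```rust
use hibitset::{BitSet, BitSetLike};

// Same minimal Join model as in the previous sketch.
trait Join {
    fn prepare<M: BitSetLike>(&mut self, _mask: &M) {}
}

// Hypothetical joinable returned by a flagged wrapper for *mutable* access.
struct FlaggedJoinMut<J> {
    inner: J,         // the wrapped storage's joinable
    modified: BitSet, // the dirty/event bitset
}

impl<J: Join> Join for FlaggedJoinMut<J> {
    fn prepare<M: BitSetLike>(&mut self, mask: &M) {
        // Flag every entity in the iteration mask as modified up front,
        // outside of any parallel code, then forward to the wrapped storage.
        // The immutable variant would skip the flagging and only forward.
        for id in mask.iter() {
            self.modified.add(id);
        }
        self.inner.prepare(mask);
    }
}
```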
This prepare method would be called right before serial iteration and right before parallel iteration.
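The intended ordering is just this (start_join is a made-up stand-in for the actual iterator constructors):

```rust
use hibitset::BitSetLike;

// Same minimal Join model as in the sketches above.
trait Join {
    fn prepare<M: BitSetLike>(&mut self, _mask: &M) {}
}

fn start_join<J: Join, M: BitSetLike>(joinable: &mut J, joined_mask: &M) {
    // Push events / flag entities exactly once, outside any parallel region...
    joinable.prepare(joined_mask);
    // ...and only then hand `joinable` off to the serial or parallel iterator.
}
```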
Notice that this has the drawback of being somewhat inefficient, as the final mutable iteration might filter out some of the entities that were already flagged. It seems like the current FlaggedStorage doesn't account for that either, so at least for now I believe it is acceptable.
Do you think this would be a reasonable fix?
I realize this might come off as flippant, but serious question: has anyone ever seriously used a parallel join?
Yes, in the simulation I'm making for my finals.
Hi @Xaeroxe, I randomly ran across this issue just now, and yes I have been using parallel joins fairly extensively. Here's a couple examples from a side project of mine:
https://github.com/michaelmelanson/spiking-neural-net/blob/master/src/simulation/models/izhikevich.rs#L44 https://github.com/michaelmelanson/spiking-neural-net/blob/master/src/simulation/systems/synaptic_transmission.rs#L22 https://github.com/michaelmelanson/spiking-neural-net/blob/master/src/simulation/learning/stdp.rs#L33
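(For anyone skimming, a typical parallel join looks roughly like the snippet below. The components are made up, and it assumes specs' default "parallel" feature, whose prelude re-exports rayon's ParallelIterator.)

```rust
use specs::prelude::*;

// Made-up components for illustration.
struct Position(f32, f32);
impl Component for Position {
    type Storage = VecStorage<Self>;
}

struct Velocity(f32, f32);
impl Component for Velocity {
    type Storage = VecStorage<Self>;
}

struct Movement;

impl<'a> System<'a> for Movement {
    type SystemData = (WriteStorage<'a, Position>, ReadStorage<'a, Velocity>);

    fn run(&mut self, (mut pos, vel): Self::SystemData) {
        // Entities are processed on rayon's thread pool. This is exactly what
        // breaks for FlaggedStorage today, because flagging mutates shared
        // event state from inside the parallel loop.
        (&mut pos, &vel).par_join().for_each(|(p, v)| {
            p.0 += v.0;
            p.1 += v.1;
        });
    }
}
```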
That's great to hear, thank you!
No problem at all. I think it's a bit of a weird use case for Specs, but it's worked surprisingly well for me on this project.
I ran into this issue while trying to parallelize some systems with amethyst's Transform component, which happens to use FlaggedStorage.
It'd be nice to support this, but definitely not required
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Feel free to comment if this is still relevant.
Nope nope nope
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Feel free to comment if this is still relevant.
Another nope. ^.^
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Feel free to comment if this is still relevant.
This should probably use a non-stale label or whatever it was called?
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Feel free to comment if this is still relevant.
Still relevant.