FLAMEGPU2
Message input/output to same list from multiple functions in same layer.
This should be possible.
For shared message output, it requires tracking a shared offset into the message output buffer (and sizing that buffer correctly up front).
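The shared-offset idea can be illustrated with a small toy model: the buffer is pre-sized for the combined worst-case output of all functions in the layer, and each function claims a contiguous block by atomically advancing a shared offset. This is a hypothetical sketch (class and function names are invented, and a lock stands in for a GPU atomic), not the FLAMEGPU2 implementation.

```python
import threading

class SharedMessageList:
    """Toy model of one message list written by multiple output functions."""
    def __init__(self, capacity):
        # Capacity must cover the combined output of every function in the layer.
        self.buffer = [None] * capacity
        self.offset = 0
        self._lock = threading.Lock()

    def claim(self, count):
        # Atomically reserve `count` slots; return the start index of the block.
        with self._lock:
            start = self.offset
            self.offset += count
            return start

def output_messages(shared, messages):
    # Each "agent function" writes into its own claimed block, so writes
    # from concurrent functions never overlap.
    start = shared.claim(len(messages))
    for i, m in enumerate(messages):
        shared.buffer[start + i] = m

# Two "agent functions" in the same layer output to one shared list.
shared = SharedMessageList(capacity=4)
t1 = threading.Thread(target=output_messages, args=(shared, ["a1", "a2"]))
t2 = threading.Thread(target=output_messages, args=(shared, ["b1", "b2"]))
t1.start(); t2.start(); t1.join(); t2.join()
```

The block order in the buffer depends on which function claims first, but every message lands in exactly one slot and the final offset equals the total message count.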
For shared message input, it requires only triggering buildIndex()
once per layer, rather than once per agent function.
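Triggering the index build at most once per layer can be done with a dirty flag that is set when messages are output and cleared by the first build. Again a hypothetical sketch with invented names, not the FLAMEGPU2 internals:

```python
class MessageList:
    """Toy message list whose index is rebuilt lazily, at most once per layer."""
    def __init__(self):
        self.messages = []
        self.index_builds = 0   # counts real rebuilds, for demonstration
        self._dirty = True

    def append(self, m):
        self.messages.append(m)
        self._dirty = True      # new output invalidates the index

    def build_index(self):
        # Every reader requests the index, but only the first request
        # after an output actually rebuilds it.
        if self._dirty:
            self.index_builds += 1
            self._dirty = False

ml = MessageList()
ml.append("m0")
# Three agent functions in the same layer each request the index.
for _ in range(3):
    ml.build_index()
```

With this pattern, `ml.index_builds` is 1 after the layer, however many functions read the list.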
This would be useful for Primage, where multiple cell types share identical messages for cell-cell forces.
Not worth implementing until concurrent execution of functions within a layer is operational.
Related: #194 (Throwing exceptions if user attempts to do this)
If implemented, this may interact with the sorting of message lists used to improve performance.
At least one user has a use-case for this: the absence of concurrent message output within a layer harms performance, due to the overhead and serialisation imposed by using multiple layers instead.
See Discussion #1124