What is the intended way to run <basic swarm>.add_particles_with_global_coordinates() in parallel?
Suppose we add three particles using add_particles_with_global_coordinates():

```python
import numpy as np

xy_st = np.array([[0.5, 0.33], [1.5, 0.33], [1.5, 0.50]])
basic_swarm.add_particles_with_global_coordinates(xy_st)
```
In serial, this is not a problem.
In parallel, say on N ranks, should we split xy_st into N groups before calling add_particles_with_global_coordinates()?
Currently, passing the full xy_st on every rank results in the particles being duplicated on each rank.
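For concreteness, here is a minimal sketch of what we are running. The mesh/swarm setup and the particle-count query are assumptions in an underworld3 style (uw.meshing.StructuredQuadBox, uw.swarm.Swarm, uw.mpi, and the PETSc DMSwarm getSize() call), so treat the exact names as illustrative:

```python
import numpy as np
import underworld3 as uw  # assumed underworld3-style API throughout

# assumed mesh/swarm setup on a 2 x 1 box that covers the points above
mesh = uw.meshing.StructuredQuadBox(
    elementRes=(8, 4), minCoords=(0.0, 0.0), maxCoords=(2.0, 1.0)
)
basic_swarm = uw.swarm.Swarm(mesh=mesh)

xy_st = np.array([[0.5, 0.33], [1.5, 0.33], [1.5, 0.50]])
basic_swarm.add_particles_with_global_coordinates(xy_st)  # every rank adds all 3 points

# assumed: swarm.dm is the underlying PETSc DMSwarm; getSize() is collective
n_global = basic_swarm.dm.getSize()
if uw.mpi.rank == 0:
    print("global particle count:", n_global)  # e.g. 12 under mpirun -np 4, not 3
```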
This method is designed to add particles without checking that the points are local. If we have pre-computed points across the whole domain, add_particles_with_coordinates() would be the better choice because it ensures no unintended duplication.
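Continuing the sketch above, the pattern recommended here for pre-computed, domain-wide points would be to hand the same full array to every rank and let the locality check sort it out (again, hedged against the same assumed API):

```python
# fresh swarm; same full array on every rank. The locality check described
# above should leave each point on exactly one rank.
swarm2 = uw.swarm.Swarm(mesh=mesh)
swarm2.add_particles_with_coordinates(xy_st)

n_global = swarm2.dm.getSize()  # expected: 3, independent of the rank count
if uw.mpi.rank == 0:
    print("global particle count:", n_global)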
We can still use add_particles_with_global_coordinates for things like the global evaluation of values anywhere in the domain. That particular use case might legitimately create duplicate points, and we would need to honour them.
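To make that use case concrete, here is a self-contained toy of the evaluate-locally-then-reduce pattern, using only mpi4py and numpy; the 1-D domain split and the field function are stand-ins for a real mesh, not underworld3 calls:

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD

def field(xy):
    return xy[:, 0] + 10.0 * xy[:, 1]  # stand-in for a mesh variable

# every rank holds the same probe points; the duplication is deliberate
probe_xy = np.array([[0.25, 0.75], [1.75, 0.25]])

# toy ownership: split the x in [0, 2] domain evenly across ranks
width = 2.0 / comm.size
owned = (probe_xy[:, 0] >= comm.rank * width) & (probe_xy[:, 0] < (comm.rank + 1) * width)

values = np.zeros(len(probe_xy))
values[owned] = field(probe_xy[owned])  # evaluate only the locally owned points
values = comm.allreduce(values)         # elementwise sum: full answer on every rank
```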
We might also use this for transferring mesh values from one mesh to a differently-distributed, adapted mesh. In this case we don't expect any duplicate points, because of the way they are derived, but it's exactly the same global-evaluation pattern as above.
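Continuing the same toy, the mesh-transfer case looks like this: each rank's query points are the locally owned node coordinates of the new mesh, gathered into one global set (so every rank evaluates the same points, exactly as above), after which each rank keeps its own slice. The random coordinates stand in for the nodes of a re-distributed mesh:

```python
rng = np.random.default_rng(comm.rank)
new_local_xy = rng.uniform([0.0, 0.0], [2.0, 1.0], (4, 2))  # stand-in: this rank's new-mesh nodes

# gather everyone's query points: identical global set on every rank
all_xy = np.concatenate(comm.allgather(new_local_xy))

values = np.zeros(len(all_xy))
owned = (all_xy[:, 0] >= comm.rank * width) & (all_xy[:, 0] < (comm.rank + 1) * width)
values[owned] = field(all_xy[owned])
values = comm.allreduce(values)  # same evaluate-then-reduce step as before

# each rank keeps the slice corresponding to its own new-mesh nodes
counts = comm.allgather(len(new_local_xy))
start = sum(counts[:comm.rank])
new_local_values = values[start : start + len(new_local_xy)]
```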
If this is super-confusing, then perhaps it should be a hidden method that is only used internally by things like the global-evaluation tools.