Should flow_tools.GlobalArrayReducer take the dtype from the problem domain, rather than always np.float64?
Hi all,
I'm extremely unsure about this, so I hope this is the appropriate venue for gently pointing this out (a pull request seems to imply more confidence than I currently have). I'm working with an IVP on an np.complex128 domain, and I believe I'm running into issues that relate to flow_tools assuming that I'm working on an np.float64 domain. Would it make sense for GlobalArrayReducer, when it is called by GlobalFlowProperty.__init__(), to be given the dtype of the domain?
Best, Adrian
Yes, good catch, I think you're exactly right -- solver.domain.grid_dtype should be passed when the GlobalArrayReducer is built. Would you like to put in a PR for that fix?
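For reference, here's a rough sketch of the idea outside of Dedalus -- a stand-in reducer whose scalar buffer is allocated with a fixed dtype, which is where the mismatch with a complex grid comes from. The real flow_tools signatures may differ slightly; this is just to illustrate where the dtype would be threaded through.

import numpy as np
from mpi4py import MPI

class GlobalArrayReducer:
    """Stand-in mirroring the shape of the reducer in dedalus.extras.flow_tools."""

    def __init__(self, comm, dtype=np.float64):
        self.comm = comm
        # The buffer used for scalar MPI reductions is allocated once, with this dtype:
        self._scalar_buffer = np.zeros(1, dtype=dtype)

    def reduce_scalar(self, local_scalar, mpi_reduce_op):
        self._scalar_buffer[0] = local_scalar
        self.comm.Allreduce(MPI.IN_PLACE, self._scalar_buffer, op=mpi_reduce_op)
        return self._scalar_buffer[0]

# Proposed change, in spirit: build the reducer with the domain's grid dtype
# rather than relying on the float64 default, e.g.
#     reducer = GlobalArrayReducer(comm, dtype=solver.domain.grid_dtype)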
Thanks for confirming. The last time I tried to install from source/github on my laptop, I eventually figured it out but ran into a lot of hangups. I'll give it a shot -- I've been meaning to learn how to contribute to open source projects via git so this seems like a good opportunity. But if I can't figure it out soon I'll let you know in case it's much easier for you to implement this.
I've tested this, but now calling GlobalFlowProperty.max throws the following error when it ends up calling GlobalArrayReducer.reduce_scalar:
Traceback (most recent call last):
  File "/Users/adfraser/PycharmProjects/dedalus_dev/planar_Kolmogorov_linear.py", line 132, in KH_EVP
    last_KE = flow.max('KE')
  File "/Users/adfraser/PycharmProjects/dedalus_dev/dedalus/dedalus/extras/flow_tools.py", line 112, in max
    return self.reducer.global_max(gdata)
  File "/Users/adfraser/PycharmProjects/dedalus_dev/dedalus/dedalus/extras/flow_tools.py", line 55, in global_max
    return self.reduce_scalar(local_max, MPI.MAX)
  File "/Users/adfraser/PycharmProjects/dedalus_dev/dedalus/dedalus/extras/flow_tools.py", line 38, in reduce_scalar
    self.comm.Allreduce(MPI.IN_PLACE, self._scalar_buffer, op=mpi_reduce_op)
  File "mpi4py/MPI/Comm.pyx", line 714, in mpi4py.MPI.Comm.Allreduce
mpi4py.MPI.Exception: MPI_ERR_OP: invalid reduce operation
This error is being thrown for an IVP with an np.complex128 domain, where my FlowProperty is an energy, so its values should be real (np.float64): flow.add_property("integ(abs(u*u) + abs(v*v) + abs(w*w), 'x')", name='KE'). (It's a 1D domain, so x is the only dimension -- I'm happy to clean up a minimal example if others would like.)
I suspect this stems from MPI_MAX not supporting complex types (which makes sense, of course, since a > b isn't defined when a and b are complex), combined with GlobalArrayReducer._scalar_buffer now being complex even when the particular flow property being maximized is real. I only have a very rough idea of what comm.Allreduce does, and am not sure what _scalar_buffer is, so please correct me if I'm mis-diagnosing the issue.
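For what it's worth, here's a minimal sketch of what I think the failure mode is, using just numpy and mpi4py (run under mpiexec; no Dedalus involved): MPI.MAX is fine on a float64 buffer but rejected on a complex128 one.

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD

# Real-valued buffer: MAX is a valid reduction operation.
real_buf = np.array([float(comm.rank)], dtype=np.float64)
comm.Allreduce(MPI.IN_PLACE, real_buf, op=MPI.MAX)

# Complex-valued buffer: MAX is not defined for complex types,
# so MPI raises MPI_ERR_OP ("invalid reduce operation").
complex_buf = np.array([comm.rank + 0j], dtype=np.complex128)
try:
    comm.Allreduce(MPI.IN_PLACE, complex_buf, op=MPI.MAX)
except MPI.Exception as exc:
    print("Allreduce with MPI.MAX on complex128 failed:", exc)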
I think there are a few possible solutions to this, so I wanted to touch base here before tinkering further.
Previously, I had been creating a single instance of GlobalFlowProperty, flow = flow_tools.GlobalFlowProperty, for a given problem, and then calling flow.add_property for each property I'm interested in evaluating over time. I wonder if one way forward would be to instead build multiple instances of GlobalFlowProperty and allow the user to pass a dtype, so that you could separate your FlowProperties into complex128 ones, matching your grid, and float64 ones, which could then be reduced with MPI_MAX. Of course, this is also probably a very narrow edge case.
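Something like this is what I have in mind -- purely hypothetical, since GlobalFlowProperty doesn't currently accept a dtype argument, and the property names and cadence value below are just placeholders:

import numpy as np
from dedalus.extras import flow_tools

# Assumes `solver` has already been built for the complex-grid problem.
# Hypothetical usage, if GlobalFlowProperty grew a dtype keyword (it doesn't today):
flow_real = flow_tools.GlobalFlowProperty(solver, cadence=10, dtype=np.float64)
flow_real.add_property("integ(abs(u*u) + abs(v*v) + abs(w*w), 'x')", name='KE')

flow_complex = flow_tools.GlobalFlowProperty(solver, cadence=10, dtype=np.complex128)
flow_complex.add_property("u", name='u_mode')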
Sorry I missed this update! Hmm, this is tricky -- in a complex domain, Dedalus will treat everything (including fields that are the output of the abs() operator) as complex. I think the best approach might be to have the min and max methods on GlobalFlowProperty simply fail if the grid dtype is complex, and then add a new max_abs method that takes the absolute value of the property's gdata before reducing as a float. Let me know if you'd like to give this a shot, or I can try to get it in soon, too.
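Roughly, I'm picturing something along these lines -- written here as a standalone helper rather than the actual method, and assuming (as in the current flow_tools) that the local grid data lives in flow.properties[name]['g']:

import numpy as np
from mpi4py import MPI

def max_abs(flow, name, comm=MPI.COMM_WORLD):
    """Global max of |property|, reduced with a real-typed buffer so MPI.MAX is valid."""
    gdata = flow.properties[name]['g']                # local grid data, possibly complex
    # |.| >= 0, so 0.0 is a safe fill value for processes with no local data.
    local_max = np.abs(gdata).max() if gdata.size else 0.0
    buf = np.array([local_max], dtype=np.float64)     # real buffer, independent of grid dtype
    comm.Allreduce(MPI.IN_PLACE, buf, op=MPI.MAX)
    return buf[0]

The guard on min/max could then be as simple as checking np.iscomplexobj on the grid data before reducing.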