Valentin Churavy
I am not sure what should happen here: `SharedArray` only works when all processes are on the same machine, something that is not guaranteed with SLURM.
As I said, it is not something that `SharedArray` can really support, but maybe it could throw an error.
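A sketch of what such a check could look like (a hypothetical `checked_sharedarray` helper, not the actual SharedArrays implementation): verify that every participating process reports the same hostname before constructing the array, and throw a readable error otherwise.

```julia
using Distributed, SharedArrays

# Hypothetical helper: refuse to build a SharedArray unless every
# participating worker reports the same hostname.
function checked_sharedarray(::Type{T}, dims...; pids=procs()) where {T}
    hosts = unique(remotecall_fetch(gethostname, p) for p in pids)
    length(hosts) == 1 ||
        error("SharedArray needs all processes on one machine, got hosts: ", join(hosts, ", "))
    return SharedArray{T}(dims...; pids=pids)
end
```

Under a multi-node SLURM allocation, something like `checked_sharedarray(Float64, 1000; pids=workers())` would then fail early with an explicit message instead of producing a broken array.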
This should hopefully be fixed by #175, but what is the end goal? It would be lovely to have a larger test case; I suspect the OffsetArrays will shake out a couple of...
Yes, that sounds like a use case I would love to eventually support. We have a project right now that will require doing something similar.
Cleanup is normally performed when a process shuts down on the compute node, so you are right that we could and should do a better job with error handling here.
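As a sketch of more defensive cleanup (a hypothetical `with_workers` helper, not the package's actual teardown path): remove the workers even when the wrapped computation throws, instead of relying solely on the automatic cleanup at process shutdown.

```julia
using Distributed

# Hypothetical helper: add workers for the duration of `f` and always
# remove them afterwards, even on error.
function with_workers(f, n::Integer)
    pids = addprocs(n)
    try
        return f(pids)
    finally
        rmprocs(pids; waitfor=30)  # best-effort removal; raises its own error if it fails
    end
end
```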
I strongly favor the single-job-with-multiple-processes strategy; see the sketch below. If feasible, one should allocate the resources before starting Julia. Most cluster systems have an associated queue, so multiple jobs...
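A minimal sketch of that workflow, assuming ClusterManagers.jl is available and Julia is launched inside an existing `salloc`/`sbatch` allocation, so `srun` attaches workers to the already-granted resources rather than queueing additional jobs:

```julia
using Distributed, ClusterManagers

# Attach one worker per task of the surrounding SLURM allocation.
ntasks = parse(Int, get(ENV, "SLURM_NTASKS", "1"))
addprocs(SlurmManager(ntasks))

# Every worker now runs inside the single job that was queued up front.
@everywhere println("worker $(myid()) running on $(gethostname())")
```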
> Getting all resources before starting any computation does not always seem feasible to me.

That is a very different job model from the one Distributed.jl expects to use. Right...
> But if it works right now for this type of workflow, should we then take steps to explicitly forbid it?

I didn't catch that during the original review of...
Yeah, a fix would be very nice, because that makes it quite hard to work with nested data. Is there a workaround?
When I do not import Map at all, I run into this compile-time error:
```
scala:
     while compiling: /home/wallnuss/Evo2DSim/src/main/scala/org/vastness/evo2dsim/evolution/SUSEvolution.scala
        during phase: jvm
     library version: version 2.10.3
    compiler version: version ...
```