Tom Close
Lazy fields that point to a specific Job (i.e. instead of a Node), to allow execution of a node with partially resolved state arrays when the upstream hasn't completed yet,...
This will allow much finer-grained control over which nodes of a workflow need to be run than `propagate_rerun`, which is kind of useless, as if you set it to...
Currently the hashing of generic objects is a bit brittle; for example, torch.Tensor objects have an empty `__dict__`, so they all get hashed to the same value. An option...
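To make the failure mode concrete, here is a minimal sketch. `naive_object_hash` and `FakeTensor` are hypothetical stand-ins (not Pydra's actual hasher or torch): they show how hashing an object solely by its instance `__dict__` collapses objects with different data to the same hash.

```python
import hashlib
import pickle


def naive_object_hash(obj):
    """Brittle generic fallback: hash only the instance __dict__."""
    return hashlib.sha256(pickle.dumps(obj.__dict__)).hexdigest()


class FakeTensor(list):
    """Stand-in for torch.Tensor: its data lives outside __dict__,
    so __dict__ is empty and the naive hash never sees the values."""


a = FakeTensor([1, 2, 3])
b = FakeTensor([4, 5, 6])

# Different contents, yet identical hashes -- the brittleness described above.
assert a != b
assert naive_object_hash(a) == naive_object_hash(b)
```

A more robust scheme would need per-type handlers (e.g. hashing the underlying buffer for array-like objects) rather than one generic `__dict__`-based path.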
Hashing of functions and methods is currently done on their AST as a first pass, falling back to the byte-code if the source isn't available. However, this doesn't pick...
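A minimal sketch of that two-stage scheme (names are illustrative, not Pydra's actual implementation): hash the parsed AST of the source when it can be retrieved, otherwise fall back to hashing the compiled bytecode.

```python
import ast
import hashlib
import inspect
import textwrap


def hash_function(fn):
    """First pass: hash the function's source AST (which normalises
    away formatting and comments). Fallback: hash the raw bytecode
    when the source cannot be retrieved (e.g. exec-built functions)."""
    try:
        src = textwrap.dedent(inspect.getsource(fn))
        payload = ast.dump(ast.parse(src)).encode()
    except (OSError, TypeError, SyntaxError):
        payload = fn.__code__.co_code
    return hashlib.sha256(payload).hexdigest()


# Fallback path: functions built via exec have no retrievable source,
# so inspect.getsource raises OSError and the bytecode is hashed instead.
ns = {}
exec("def g(x):\n    return x + 1", ns)
digest = hash_function(ns["g"])
assert len(digest) == 64
```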
There is currently no mention of auditing and messaging in the docs (only a stub section). It would be good to have (especially as I don't know how it works!)
A detailed how-to on using the nipype2pydra tool to convert existing Nipype interfaces to Pydra. This could live in the separate nipype2pydra docs but might make more sense...
### What would you like changed/added and why?

The `container_ndim` parameter (formerly `cont_dim`) should be replaced by `split_dims`. Whereas `container_ndim` only allows you to specify the number of outer dimensions to...
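To illustrate the limitation being described, here is a hypothetical helper (a sketch of the concept, not Pydra's API) showing what a `container_ndim`-style count buys you: you can only peel off the first N container dimensions, counted from the outside in, not choose arbitrary dimensions.

```python
from itertools import chain


def split_outer(value, container_ndim):
    """Flatten the outermost `container_ndim` dimensions of a nested
    list into one flat list of elements to split over.
    Illustrative only -- not Pydra's actual implementation."""
    for _ in range(container_ndim - 1):
        value = list(chain.from_iterable(value))
    return value


nested = [[1, 2], [3, 4]]
assert split_outer(nested, 1) == [[1, 2], [3, 4]]  # split over outer dim only
assert split_outer(nested, 2) == [1, 2, 3, 4]      # both dims flattened
```

The sketch shows only the current counting-from-the-outside behaviour, which is what the proposed `split_dims` aims to generalise.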
In addition to the `_error.pklz` that is created when a task/workflow errors, it would improve Pydra's user-friendliness if a Jupyter notebook were generated (along with a bash/batch launcher that...
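As a sketch of how such a notebook could be emitted: nbformat v4 is plain JSON, so no extra dependency is needed. File names and the cell contents are hypothetical, and the sketch assumes the `.pklz` is a gzipped pickle.

```python
import json


def write_debug_notebook(path, error_pickle="_error.pklz"):
    """Write a minimal nbformat-v4 notebook whose single code cell
    reloads the pickled error state for interactive debugging.
    Assumes the .pklz is a gzipped pickle (an assumption here)."""
    cell_source = [
        "import gzip, pickle\n",
        f"with gzip.open({error_pickle!r}) as f:\n",
        "    state = pickle.load(f)\n",
        "state\n",
    ]
    nb = {
        "nbformat": 4,
        "nbformat_minor": 5,
        "metadata": {},
        "cells": [
            {
                "cell_type": "code",
                "metadata": {},
                "execution_count": None,
                "outputs": [],
                "source": cell_source,
            }
        ],
    }
    with open(path, "w") as f:
        json.dump(nb, f, indent=1)


write_debug_notebook("debug_error.ipynb")
```

The companion bash/batch launcher mentioned above would then just need to start `jupyter notebook debug_error.ipynb` in the node's working directory.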
Need to update the nipype2pydra package to produce the new syntax
Not only will this help when debugging crash dumps, by providing a way to link the workflow directory with the node caches, but we can also use this as a fallback...