Ryan C
I'm going to start working on a branch for this. Is there any other prerequisite work that should be done before I start developing this?
Dope, will do. I think I'm gonna work on having it available in the back end first, and then tackle wiring it up in the front end, so the back end...
My first step for adding jobs-in-jobs is going to be adjusting/defining the data model, so I was wondering what you thought might be the best way to support a "job"...
also things like stderr and stdout, etc.
I think I will make it a `job_name` column, because although jobs have IDs, they are not readily available from the dagobah object, as jobs are stored by name.
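Something like this is what I'm picturing for resolving an embedded job from that column; this is only a sketch, and the `get_job()` call plus the dict layout are assumptions about the API, not the actual dagobah code:

```python
# Hypothetical sketch: a task row references an embedded job by name.
# get_job() and the dict layout are assumptions, not dagobah's actual API.

def resolve_embedded_job(dagobah, task_record):
    """Return the job referenced by a task's job_name column, if any."""
    job_name = task_record.get('job_name')
    if job_name is None:
        return None  # ordinary command task, nothing to expand
    return dagobah.get_job(job_name)  # jobs are keyed by name, not ID

# Example task rows as they might come out of the back end:
tasks = [
    {'name': 'extract', 'command': 'python extract.py', 'job_name': None},
    {'name': 'nightly-load', 'command': None, 'job_name': 'load_warehouse'},
]
```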
so #160 is in, and I'm starting on "expanding" jobs into one giant graph (when the "snapshot" is initialized). I'm thinking I'll need to add a `predecessors(node)` to py-dag in...
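Roughly what I have in mind for `predecessors(node)`, assuming the graph is kept as a dict mapping each node to the set of its downstream nodes (a sketch against that assumption, not py-dag's actual code):

```python
from collections import OrderedDict

# Assumed layout: node -> set of downstream node names.
graph = OrderedDict([
    ('a', set(['b', 'c'])),
    ('b', set(['d'])),
    ('c', set(['d'])),
    ('d', set()),
])

def predecessors(graph, node):
    """All nodes with an edge pointing directly at `node`."""
    return [upstream for upstream, downstreams in graph.items()
            if node in downstreams]

print(predecessors(graph, 'd'))  # ['b', 'c']
```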
let me revise that `downstreams_final` idea. What's actually needed is all nodes in the DAG that have no downstreams. Not sure what a good name for this is, or if...
Here's my first pass at coding it. This is untested, and will not run because I haven't implemented some of the aforementioned functions (I decided to call that last one...
Also this is probably obvious, but I did a breadth-first traversal; not sure if there's any advantage over depth-first, but it's just what my brain defaulted to.
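For reference, here's the shape of the traversal I defaulted to, as a standalone sketch over a plain node -> downstreams dict (the layout and the function name are my own placeholders, not the actual first-pass code):

```python
from collections import deque

def bfs_order(graph, start_nodes):
    """Breadth-first walk from the given start nodes, visiting each node once."""
    seen = set(start_nodes)
    queue = deque(start_nodes)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for downstream in graph.get(node, set()):
            if downstream not in seen:
                seen.add(downstream)
                queue.append(downstream)
    return order

graph = {'a': {'b', 'c'}, 'b': {'d'}, 'c': {'d'}, 'd': set()}
print(bfs_order(graph, ['a']))  # 'a' first, 'd' last; b/c order depends on set iteration
```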
maybe it should be called `end_nodes()`, which goes nicely with `ind_nodes()`; they're basically opposite concepts (nodes with no incoming edges vs. nodes with no outgoing edges)
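A quick sketch of the pair side by side, again assuming the node -> set-of-downstreams layout (`end_nodes()` is the proposed addition; the `ind_nodes()` here is just a stand-in for what py-dag already provides):

```python
graph = {'a': {'b', 'c'}, 'b': {'d'}, 'c': {'d'}, 'd': set()}

def ind_nodes(graph):
    """Nodes with no incoming edges (py-dag's independent nodes)."""
    all_downstreams = set()
    for downstreams in graph.values():
        all_downstreams.update(downstreams)
    return [node for node in graph if node not in all_downstreams]

def end_nodes(graph):
    """Nodes with no outgoing edges -- the proposed opposite of ind_nodes()."""
    return [node for node in graph if not graph[node]]

print(ind_nodes(graph))  # ['a']
print(end_nodes(graph))  # ['d']
```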