seml
Feature Request: Pipelines
Desired Behavior
I want to create relationships between jobs. To begin with, it should suffice to support `for_each_of` and `once_for_all_of` relationships. The connected jobs should form a Directed Acyclic Graph (DAG) so that jobs can be executed from the root nodes down to the leaf nodes. If I trigger job A, the framework should check whether all preceding jobs exist and, if not, queue them first. The execution order of jobs (e.g. A should start only after all preceding jobs have finished) can be defined in SLURM via the `sbatch` command (Example). Jobs that can be executed in parallel should use the parallelism determined by the SLURM scheduler.
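The SLURM side of this can be sketched with `sbatch`'s `--dependency` flag. The helper names below are hypothetical (not part of SEML); only the `sbatch` flags `--parsable` and `--dependency=afterok:<jobid>[:<jobid>...]` are actual SLURM features:

```python
import subprocess

def dependency_flags(parent_job_ids):
    """Translate parent SLURM job ids into the sbatch dependency flag.

    `afterok` means: start only once every listed job finished with
    exit code 0. Root jobs (no parents) get no extra flag.
    """
    if not parent_job_ids:
        return []
    return ["--dependency=afterok:" + ":".join(map(str, parent_job_ids))]

def submit(script, parent_job_ids=()):
    """Submit `script` so that it waits for all of its parents.

    `--parsable` makes sbatch print just the job id, so the return
    value can directly be passed as a parent id for downstream jobs.
    """
    cmd = ["sbatch", "--parsable", *dependency_flags(parent_job_ids), script]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    return int(out.strip().split(";")[0])
```

Submitting root jobs first and threading the returned ids into their children reproduces the DAG order, while SLURM is free to run independent branches in parallel.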
Implementation Suggestion
It would be easiest to use an existing framework to capture these features; such frameworks often also provide a nice frontend, etc. However, I fear that the current yaml configuration files are hardly compatible with existing solutions.
If we decide to extend SEML, I suggest:
- Each job is defined in a separate yaml file (specifying its own running time estimate)
- In the `seml` block, dependencies on other jobs can be defined as follows:
  - If the yaml of the current job defines `n` jobs, and `path/to/a.yaml` defines `m` jobs, then this results in a total of `m * n` jobs.
  - If the yaml of the current job defines `n` jobs, and `path/to/b.yaml` defines `k` jobs, then this results in a total of `n` jobs.
  - If the yaml of the current job defines `n` jobs, `path/to/a.yaml` defines `m` jobs, and `path/to/b.yaml` defines `k` jobs (with a relation type of `for_each_of`), then this results in a total of `k * m * n` jobs.
- The run method shall receive a dictionary describing its lineage. Based on this information, the user is responsible for loading the right artifacts.
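The counting rules above can be written down as a small helper; the function name and the `(relation, count)` pair representation are assumptions for illustration:

```python
def total_jobs(n_own, dependencies):
    """Compute how many jobs a config expands to.

    `dependencies` is a list of (relation_type, n_dependency_jobs)
    pairs: `for_each_of` runs the current jobs once per upstream job
    and therefore multiplies the count, while `once_for_all_of` runs
    the dependency a single time, shared by all current jobs.
    """
    total = n_own
    for relation, n_dep in dependencies:
        if relation == "for_each_of":
            total *= n_dep
        elif relation != "once_for_all_of":
            raise ValueError(f"unknown relation type: {relation}")
    return total

# The three cases from the list above:
total_jobs(2, [("for_each_of", 3)])                         # m * n = 6
total_jobs(2, [("once_for_all_of", 5)])                     # n     = 2
total_jobs(2, [("for_each_of", 3), ("for_each_of", 4)])     # k*m*n = 24
```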
Example:
```yaml
seml:
  ...
  dependencies:
    - for_each_of: path/to/a.yaml
    - once_for_all_of: path/to/b.yaml
```
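The lineage dictionary handed to the run method could then look as follows; the key layout, job-id values, and artifact path scheme are assumptions, not an existing SEML API:

```python
def run(lineage):
    """Hypothetical experiment entry point.

    `lineage` is assumed to map each config listed under
    `dependencies:` to the id of the concrete upstream job this run
    descends from, e.g. {"path/to/a.yaml": 17, "path/to/b.yaml": 42}.
    Resolving the ids to artifacts is left to the user, as suggested
    above; the path scheme here is purely illustrative.
    """
    artifacts = {
        dep: f"artifacts/job_{job_id}/output.pt"  # assumed layout
        for dep, job_id in lineage.items()
    }
    return artifacts
```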
References
Here are some references to other pipeline frameworks (mostly for inspiration):