
Workflow params (part 1)

Open bentsherman opened this issue 7 months ago • 26 comments

Related to #4669

This PR is the first step towards defining workflow params entirely in the pipeline script. It allows you to define a params block like so:

params {
  input
  save_intermeds = false
}

workflow {
  // ...
}

This replaces assigning params individually. The advantage is that Nextflow can validate params natively once the params block is defined, because it guarantees that all params are declared in one place.

There is still some work required to make the validation work, but the high-level flow is:

  1. User specifies params on the command line / params file
  2. Config files can override script params or define "config params" which are only used by the config
  3. When the params block is defined in the script, config params are ignored and only overrides from the command line / config are applied. If a script param was not specified and has no default value, an error is reported. If a CLI param was not already defined in the config or script, an error is reported (see the sketch after this list)
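
For illustration, here is a hedged sketch of how those rules would play out; the file contents, param names, and error messages below are hypothetical, not taken from this PR:

// main.nf
params {
  input                   // no default value -> required
  save_intermeds = false  // optional, defaults to false
}

// nextflow.config
params.save_intermeds = true   // a config file may override a script param

// Expected behaviour on the command line (illustrative):
//   nextflow run main.nf --input samples.csv   -> runs; save_intermeds resolves to true
//   nextflow run main.nf                       -> error: 'input' was not specified and has no default
//   nextflow run main.nf --foo bar             -> error: 'foo' is not declared in the script or config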

TODO:

  • [x] Separate config params from CLI params to identify invalid params (i.e. params that weren't declared in the script or config)

bentsherman avatar Mar 31 '25 16:03 bentsherman

Deploy Preview for nextflow-docs-staging ready!

Latest commit: d6496c9c186202cbc1d6e8354ac0800fa5eb9be0
Latest deploy log: https://app.netlify.com/projects/nextflow-docs-staging/deploys/68ac872c77040f0008a4d334
Deploy Preview: https://deploy-preview-5929--nextflow-docs-staging.netlify.app

netlify[bot] avatar Mar 31 '25 16:03 netlify[bot]

Regarding the schema generation, there are two approaches we could take:

  1. Declare only the param name / type / default value in the script, put everything else in an auxiliary JSON/YAML file
  2. Put everything in the script that is used for the schema -- description, icon, form validation rules, section headers, etc

I was initially leaning towards (2), because it would simplify the schema generation and we could validate the available schema properties. But this would also make the params definition really long and verbose in the script, whereas the pipeline code only cares about the name / type / default value.

So now I'm starting to lean towards (1). In that case we could have a really concise definition:

params {
  input: Path
  save_intermeds: boolean = false
}

workflow {
  println "input = ${params.input}"
  println "save_intermeds = ${params.save_intermeds}"
}

Or even directly in the entry workflow:

workflow {
  params:
  input: Path
  save_intermeds: boolean = false

  main:
  println "input = ${input}"
  println "save_intermeds = ${save_intermeds}"
}

This concise syntax will work only if we're certain we don't need anything else in the script. I thought maybe the help text would be useful for CLI help, but that could be provided through a Javadoc comment

In this case, the schema generation would look something like this:

  1. Run nextflow schema to initialize a bare-bones JSON schema from the params definition
  2. Populate the schema with extra information (help text, icons)
  3. Run nextflow schema periodically to update the JSON schema from the params definition, overwriting fields like name / type / schema (perhaps with an appropriate warning)

bentsherman avatar Mar 31 '25 16:03 bentsherman

This concise syntax will work only if we're certain we don't need anything else in the script.

I'm not entirely sure that this is the case; the schema is used for validation of more than just the type. I know that some of these things can be handled with Records (e.g. enum choices), but what about things like pattern, min/max, uniqueItems, etc.?

I thought maybe the help text would be useful for CLI help, but that could be provided through a Javadoc comment

I'd be curious to see how this might look - ideally for both description and helptext in one.

ewels avatar Apr 01 '25 15:04 ewels

I know that some of these things can be handled with Records (eg. enum choices), but what about things like pattern, min/max and uniqueItems etc?

Any kind of validation should be possible through custom types and constructor functions (functions that create the custom type and implement the validation logic). But not all of those cases can be automatically translated to the schema.

For example, I can automatically generate a pattern from a type definition, but not things like min and max. Unless we did something crazy like Min<0, Max<Integer, 10>> 😅
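
As a rough sketch of that idea (the names below are hypothetical, not an existing Nextflow API), a constructor function could carry the min/max check that a JSON schema would otherwise express:

// Hypothetical constructor function: the min/max constraint lives in pipeline code
def Coverage(Integer value) {
  if( value < 0 || value > 10 )
    error "coverage must be between 0 and 10, got: ${value}"
  return value
}

workflow {
  def coverage = Coverage(params.coverage as Integer)  // validated on construction
  // ...
}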

I could go either way at this point. I like the concise syntax of declaring params in the entry workflow, but CLI libraries like argparse are also pretty standard, so maybe the concise syntax is just too restrictive

I'd be curious to see how this might look - ideally for both description and helptext in one.

Copying your example from our slack convo:

workflow {
  params:
  /**
   * Path to comma-separated file containing information about the samples in the experiment.
   *
   * You will need to create a design file with information about the samples in your experiment
   * before running the pipeline. Use this parameter to specify its location.
   * It has to be a comma-separated file with 4 columns, and a header row.
   * See [usage docs](https://nf-co.re/rnaseq/usage#samplesheet-input).
   */
  input: Path

  /**
   * If generated by the pipeline save the STAR index in the results directory.
   *
   * If an alignment index is generated by the pipeline use this parameter
   * to save it to your results folder.
   * These can then be used for future pipeline runs, reducing processing times.
   */
  save_reference: boolean = false

  main:
  // ..
}

bentsherman avatar Apr 01 '25 16:04 bentsherman

Using the Javadoc comment is "better" in the sense that you only need to parse the script to produce the CLI help; you don't have to execute it, which I think would be excessive.

bentsherman avatar Apr 01 '25 16:04 bentsherman

See also Pydantic: https://docs.pydantic.dev/latest/concepts/fields/#validate-default-values

Simple param:

myparam: String = "default-value"

Full param:

myparam: String = Field(default: "default-value", pattern: "/some.*regex/")

bentsherman avatar Apr 03 '25 15:04 bentsherman

Declaring params in the entry workflow means that you don't need the params. prefix anymore:

workflow {
  params:
  input: Path
  save_intermeds: boolean = false

  main:
  println "input = ${input}"
  println "save_intermeds = ${save_intermeds}"
}

On the one hand I like that it makes params more like workflow takes. On the other hand, you still need the params. prefix to use params in the config, so I fear that the end result would just be more confusing?

That would suggest that the params block is needed just for consistency with the config. Maybe we could allow the short and long forms like Pydantic:

params {
  input: Path {
    description '...'
    pattern '*.csv'
  }
  save_intermeds: boolean = false
}

workflow {
  println "input = ${params.input}"
  println "save_intermeds = ${params.save_intermeds}"
}

Though I always hesitate to add shortcuts if they make the code less consistent.

bentsherman avatar Apr 03 '25 15:04 bentsherman

+1 for the separate params block. For consistency I feel it is easier to read and understand, and it also avoids confusion with take, which does quite a similar thing.

Not sure about the squiggly bracket syntax. I like the thinking, but it means that we now have three different types of syntax for them: nothing for config, : for types, and = for variables / others. I can see that being really annoying.

That said, the Field() syntax can be confusing in its own ways, see the Pydantic docs:

Using the f: <type> = Field(...) form can be confusing and might trick users into thinking f has a default value, while in reality it is still required.

ewels avatar Apr 03 '25 15:04 ewels

Yeah, confusing the Field() with a default value is a serious drawback

The block syntax appeals to me because it is consistent with workflow outputs:

// fetchngs...
outputs {
  samples: Channel<Sample> {
    path '...'
    // ...
  }
}

// rnaseq...
params {
  input: List<Sample> {
    // ...
  }
}

Since we want to be able to match outputs to inputs for pipeline chaining, it makes sense to me that the syntax for inputs and outputs mirror each other.

The config is another issue. Let me think through that and write a separate comment...

bentsherman avatar Apr 03 '25 15:04 bentsherman

We could add the same params block syntax to config files if consistency is an issue. But I fear this might feel too "weird" in a configuration context.

The nice thing about config params is that they basically have to be simple values (numbers, strings, booleans, etc). So being able to declare the type isn't so important because it can be inferred from the default value.

Meanwhile, if we take the hybrid approach of generating a skeleton schema that the user can annotate manually as needed, we don't need to add new syntax to the config file to support things like validation and help text, because those can just be defined in the JSON schema
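
For example, config params today are already plain assignments whose types are evident from the literal defaults (the values below are illustrative):

// nextflow.config
params {
  max_cpus        = 16         // integer, inferred from the literal
  outdir          = 'results'  // string
  igenomes_ignore = false      // boolean
}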

bentsherman avatar Apr 03 '25 16:04 bentsherman

I really like this syntax proposal a lot:

params {
  input: Path {
    description '...'
    pattern '*.csv'
  }
  save_intermeds: boolean = false
}

This is likely in part because it mirrors Python/Pydantic but I think that's a good thing for us to emulate. Imitating the syntax of the most popular typed python extension will make it easier for folks to learn and feel like they can read and understand.

kenibrewer avatar Apr 03 '25 16:04 kenibrewer

If we're leaning more towards that syntax, we could arguably have all JSON schema parameters covered. I think we need to make sure we're crystal clear on what we want to support.

For example, maybe no description as that's "decorative" and too verbose, so the example above becomes simply:

params {
  input: Path { pattern '*.csv' }
  save_intermeds: boolean = false
}

ewels avatar Apr 03 '25 18:04 ewels

@bentsherman - any ideas how the schema builder might be able to reach into Nextflow code to update these definitions based on changes made in the GUI / JSON?

ewels avatar Apr 08 '25 12:04 ewels

I don't think that will be possible. I think the flow will have to be:

  1. Write params definition in main script
  2. Generate skeleton JSON schema
  3. Extend the schema by hand or via CLI wizard or upload to schema builder
  4. Don't edit things that are sourced from the main script

Or if you use the schema builder from scratch, you have to update the params definition by hand (or not use it at all)

bentsherman avatar Apr 08 '25 12:04 bentsherman

I am just thinking of the following user story: I set the type of a parameter to boolean in the parameter definition. While writing the param definition in the schema builder I see that the tool actually accepts 0, 1, and 2 as values. In the current version of the builder I switch the parameter type in the GUI to integer and set a min and a max value. In the currently proposed setup, I would need to go back to the Nextflow code, export a new schema, and open the new schema in the builder. Not optimal, IMO.

mashehu avatar Apr 08 '25 13:04 mashehu

The kind of discovery work you describe (i.e. figuring out the appropriate type of a param) is exactly the kind of thing that needs to happen in the pipeline code, so that the Nextflow compiler can verify param names and types as they are used in the entry workflow. You can't get that kind of validation in the schema builder.

Instead, the schema builder should be used to annotate a fixed set of params with things like help text, icons, form validation rules, etc. It should be primarily concerned with how params are accessed by external systems.

bentsherman avatar Apr 08 '25 15:04 bentsherman

I agree that the logic / definition should be in the pipeline code: at least, that should be the source of truth. I was mostly wondering if we could have some way to update the Nextflow code from the JSON schema builder, to have the best of both worlds. The schema builder has some nice beginner-friendly functionality in it, for example a GUI with a built-in regex tester for writing patterns, and a bunch of built-in help text.

Maybe this is something that we could do with @reftrace? E.g. from the Python CLI that launches the schema builder GUI, we could then go back and access the Nextflow code to edit it in place. Not sure if that's possible.

Or if we launch the schema GUI editor from Nextflow itself (with a local server etc) could there be a callback which is able to edit the Nextflow code? 🤔 We would know the param name and attributes..

ewels avatar Apr 09 '25 08:04 ewels

The furthest I would go is to generate a code block in the schema builder that the user can copy into their Nextflow pipeline if they want. Automatically updating code from an external source is an anti-pattern in my view.

bentsherman avatar Apr 09 '25 13:04 bentsherman

Suggestion from the call: it would be nice to support single-line comments (//) in addition to Javadoc multi-line comments. This drops the number of lines per param from 4 to 2, which makes quite a bit of difference if there are a lot of parameters.
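
A hypothetical sketch of what that could look like, reusing the param descriptions from the earlier example:

params {
  // Path to a comma-separated file describing the samples in the experiment
  input: Path

  // Save the STAR index generated by the pipeline to the results directory
  save_reference: boolean = false
}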

ewels avatar Apr 24 '25 14:04 ewels

I'm a bit concerned about this:

Config files can override script params or define "config params" which are only used by the config

Maybe I'm misunderstanding, but I don't think config params should be managed differently from script params.

Config params are still added to the script as before:

https://github.com/nextflow-io/nextflow/blob/0f799994e2d76d307b51c4136b05fcfb0664200f/modules/nextflow/src/main/groovy/nextflow/Session.groovy#L447-L450

However, if you use the new params block, in the script you will only be allowed to use params that you declared in the params block:

https://github.com/nextflow-io/nextflow/blob/0f799994e2d76d307b51c4136b05fcfb0664200f/modules/nextflow/src/main/groovy/nextflow/script/ParamsDsl.groovy#L53-L69

So you can still override script params from the config, but you can't declare a new param in the config and use it in the script without also declaring it in the params block. Again, this only applies if you use the new params block.
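
A small sketch of the distinction (param names and values are hypothetical):

// main.nf
params {
  input   // declared in the script
}

workflow {
  println params.input      // fine: declared in the params block
  println params.tracedir   // not allowed: 'tracedir' is only declared in the config
}

// nextflow.config
params.input    = 'samples.csv'   // allowed: overrides a declared script param
params.tracedir = 'trace'         // config-only param: usable in the config, but not in the script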

bentsherman avatar May 02 '25 13:05 bentsherman

I believe it would be better to continue iterating on this before releasing it as a preview.

pditommaso avatar May 05 '25 13:05 pditommaso

From our recent discussions, we will introduce workflow params in 25.10 as a complete feature (this iteration + type annotations + JSON schema integration)

I have updated the PR accordingly:

  • remove the feature flag (will be introduced in 25.10 as a complete feature)
  • remove support for v1 parser (incentivize adoption of strict syntax)

I would like to go ahead and merge this part so that I can use it in other efforts around static types. The type annotations and JSON schema integrations will need to be separate PRs.

bentsherman avatar May 10 '25 02:05 bentsherman

Just coming across this PR. Can I ask what the intention is for type-coercion of params (particularly if "custom" types are supported)?

Is the idea that it will be built into Nextflow that a Path is automatically converted from a string? Would this functionality also handle the exists checks and the dir/path/glob checks that we currently support in the schema?

It feels like we are trending towards "pydantic for Nextflow", which I love the idea of, but I'm not quite clear how much of the schema functionality is expected to be replaced here?

awgymer avatar May 14 '25 15:05 awgymer

That is the long-term goal, to absorb much of the validation logic currently done by nf-schema, and to use type annotations in Nextflow code instead of writing JSON schema (the JSON schema will be generated from Nextflow code instead).

But this PR is just adding the basic params block, no types yet. Adding the types and validation will be an effort unto itself...

bentsherman avatar May 14 '25 16:05 bentsherman

  • remove support for v1 parser (incentivize adoption of strict syntax)

Does this mean that all pipelines must be updated to pass strict syntax before October? (Assuming running on latest Nextflow)

ewels avatar May 14 '25 20:05 ewels

That just means that the new params block will only be available with the strict syntax, but the v1 parser will still be available.

bentsherman avatar May 14 '25 20:05 bentsherman

Went ahead and added type annotations for params in this PR, since it seemed weird to add the params block without the ability to specify types.

One thing we need to figure out is what to do with the type casting of CLI params. We don't know at this point whether the user is using the new params definition, so I think we would need to either disable it by default or somehow "undo" it during the params validation.

EDIT: Since the params block is only supported by the v2 parser, I just disabled the CLI params type casting when the v2 parser is enabled.
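
To illustrate the concern (the params and values below are hypothetical): every CLI param arrives as raw text, and the legacy behaviour was to guess its type from that text, whereas with a typed params block the declared types would presumably drive the conversion instead:

params {
  save_intermeds: boolean = false
  max_cpus: Integer = 4
}

// Hypothetical behaviour with the v2 parser:
//   nextflow run main.nf --save_intermeds true --max_cpus 8
//   -> the raw strings 'true' and '8' are no longer cast by the legacy CLI logic;
//      the declared types above would determine how they are interpreted and validated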

bentsherman avatar Jul 01 '25 19:07 bentsherman

Docs are awesome.

christopher-hakkaart avatar Jul 04 '25 01:07 christopher-hakkaart

@pditommaso ~~this is ready to merge~~ never mind, another test is failing...

bentsherman avatar Jul 09 '25 15:07 bentsherman

A parameter that doesn't specify a default value is a required param.

Can I specify null as the default value to indicate an optional param? Something like:

params {
  index: Path = null
}

That would be like Optional[Path] in Python

muffato avatar Jul 09 '25 20:07 muffato