
Variants are not resolved in `package.run-dependencies`

ruben-arts opened this issue 5 months ago

Checks

  • [x] I have checked that this issue has not already been reported.

  • [x] I have confirmed this bug exists on the latest version of pixi, using pixi --version.

Reproducible example

[workspace]
channels = ["https://prefix.dev/conda-forge"]
name = "variant-host-test"
platforms = ["osx-arm64"]
preview = ["pixi-build"]
build-variants = { cuda-version = ["12.8.*", "12.9.*"] }

[dependencies]
self = { path = "."}

[feature.cu128.dependencies]
cuda-version = "12.8.*"

[feature.cu129.dependencies]
cuda-version = "12.9.*"

[environments]
cu128 = { features = ["cu128"] }
cu129 = { features = ["cu129"] }

[package.build]
backend = { name = "pixi-build-cmake", version = "*"}
additional-dependencies = { pixi-build-api-version = { path = "/local/path/pixi-build-backends/recipe/pixi-build-api-version" } } # <<< OVERRIDE THIS WITH A LOCAL BUILD

[package.build.configuration]
# To get the backend params
debug-dir = "build-debug"

[package]
version = "1.3.4"
name = "self"

[package.host-dependencies]
cuda-version = "*"

[package.run-dependencies]
cuda-version = "*"
btop = "*" # <<< ANY DEPENDENCY

If you run the following, you would expect it to create two different packages that both depend on cuda-version 12.x, based on the variants:

pixi build -v

Issue description

There are two, possibly linked, issues: there are no run-dependencies in the resulting package file, and there is no extra variant in the pixi.lock.

This is the only entry of self in the pixi.lock:

- conda: .
  name: self
  version: 1.3.4
  build: h8f66132_0
  subdir: osx-arm64
  depends:
  - cuda-version # <<< EXPECTED A VERSION HERE
  - btop
  - libcxx >=20
  input:
    hash: 9d922fc95ace8076026b466b6dc6d7832e78e1942a6d346978f0b829104fd881
    globs: []

And in the conda-meta/self-xxx.json the dependencies are missing completely:

{
  "arch": "arm64",
  "build": "he56d8ec_0",
  "build_number": 0,
  "depends": [],
  "name": "self",
  "platform": "osx",
  "sha256": "dbe9d18c4ec6b521566bc441c8e3d5f5476c8eae13bc41e531ea3778add4623e",
  "subdir": "osx-arm64",
  "timestamp": 1754902623690,
  "version": "1.3.4",
  "fn": "self-1.3.4-he56d8ec_0.conda",
  "url": "file:///Users/rubenarts/envs/variant-host-test/.pixi/build/pkgs-v0/XZXC2ywqurE/self-1.3.4-osx-arm64-aTVUoBjsy1k/self-1.3.4-he56d8ec_0.conda",
  "channel": null,
  "extracted_package_dir": "/Users/rubenarts/Library/Caches/rattler/cache/pkgs/self-1.3.4-he56d8ec_0",
  "files": [],
  "paths_data": {
    "paths_version": 1,
    "paths": []
  },
  "link": {
    "source": "/Users/rubenarts/Library/Caches/rattler/cache/pkgs/self-1.3.4-he56d8ec_0",
    "type": 1
  }
}

Hunch: this is possibly a result of the cuda-version package having no run_exports, combined with the variant logic depending on that.

Expected behavior

I expect the dependencies to be present in the conda-meta/x.json, and I expect two variants in the pixi.lock and in the respective environments.
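
Roughly, a sketch of the two lock entries I would expect for self (build strings are illustrative only):

- conda: .
  name: self
  version: 1.3.4
  build: cuda128_h8f66132_0
  subdir: osx-arm64
  depends:
  - cuda-version 12.8.*  # <<< variant resolved
  - btop
  - libcxx >=20
- conda: .
  name: self
  version: 1.3.4
  build: cuda129_h8f66132_0
  subdir: osx-arm64
  depends:
  - cuda-version 12.9.*  # <<< variant resolved
  - btop
  - libcxx >=20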

ruben-arts avatar Aug 11 '25 09:08 ruben-arts

The way the cuda-version package is used in conda-forge is mostly through variants + Jinja. The variants are not usually applied automatically to the run dependencies (only to host & build), so it's expected that the run dependency doesn't get a variant version attached.

There are two fixes that users can use in regular conda recipes:

  • The cuda-version package could have a run_exports that appends a version / version range from host -> run.
  • One could use ${{ cuda_version }} through Jinja (which is what people do in conda-forge recipes). However, this is not something we currently support natively with the build backends; a sketch of that pattern is shown below.
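
For reference, a minimal sketch of that Jinja pattern in a rattler-build style recipe (illustrative only, not a pixi manifest; `cuda_version` is assumed to come from the variant configuration):

requirements:
  host:
    - cuda-version ${{ cuda_version }}
  run:
    - cuda-version ${{ cuda_version }}  # the variant value gets rendered into the spec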

wolfv avatar Aug 11 '25 11:08 wolfv

cuda-version by design must not have run_exports. It is used as a version selector for constraining the resolution of build/host/run dependencies. If we run-export it, everything breaks. The constraint that one would expect from run_exports should come from either the CUDA compiler or libraries. See the doc at https://github.com/conda-forge/cuda-version-feedstock/blob/main/recipe/README.md.

@wolfv @ruben-arts I suspect #4139 is a manifestation of this issue. Could you take a look plz? 🙂

leofang avatar Nov 11 '25 23:11 leofang

After a bit more testing, I think this is an issue generic to all packages relying on mutex metapackages such as cuda-version and mpi (and libblas and _openmp_mutex, I guess). It does not matter whether I use cuda-version (no run_exports) or cuda-nvcc (with run_exports) as a build variant. Neither works, because both rely on cuda-version as a mutex, just like mpich/openmpi/... are made mutually exclusive to each other through their dependency on mpi. So the fix would require taking the mutex into account when resolving the build variants. I suppose one could take the mpi4py conda recipe and write a pixi.toml that generates multiple mpi4py variants to verify this theory.

leofang avatar Nov 12 '25 00:11 leofang

I suppose one could take the mpi4py conda recipe and write a pixi.toml that generates multiple mpi4py variants to verify this theory.

Still thinking out loud... sorry for message bombs 😛

I built a pixi.toml for mpi4py and verified that it hits an issue. Compared to CuPy, its environment is very simple to resolve, so the error happens at build time instead (mpi.h not found) due to the lack of variant dependencies; see the inline comment.

# run this with "pixi install -e py313-mpich -vv", "pixi install -e py314-openmpi -vv", ...

[workspace]
channels = ["conda-forge"]
platforms = ["linux-64"]#, "osx-64", "osx-arm64", "win-64"]
preview = ["pixi-build"]

[workspace.build-variants]
mpi = ["* *openmpi*", "* *mpich*"]
python = ["3.14.*", "3.13.*", "3.12.*", "3.11.*"]

[feature.mpich.dependencies]
mpich = "*"
mpi = { version = "*", build = "*mpich*" }

[feature.openmpi.dependencies]
openmpi = "*"
mpi = { version = "*", build = "*openmpi*" }

[feature.py314.dependencies]
python = "3.14.*"

[feature.py313.dependencies]
python = "3.13.*"

[feature.py312.dependencies]
python = "3.12.*"

[feature.py311.dependencies]
python = "3.11.*"

[environments]
py314-mpich = { features = ["mpich", "py314"], solve-group = "py314-mpich" }
py314-openmpi = { features = ["openmpi", "py314"], solve-group = "py314-openmpi" }
py313-mpich = { features = ["mpich", "py313"], solve-group = "py313-mpich" }
py313-openmpi = { features = ["openmpi", "py313"], solve-group = "py313-openmpi" }
py312-mpich = { features = ["mpich", "py312"], solve-group = "py312-mpich" }
py312-openmpi = { features = ["openmpi", "py312"], solve-group = "py312-openmpi" }
py311-mpich = { features = ["mpich", "py311"], solve-group = "py311-mpich" }
py311-openmpi = { features = ["openmpi", "py311"], solve-group = "py311-openmpi" }

[dependencies]
mpi4py = { path = "." }

[package]
name = "mpi4py"
version = "4.1.1"

[package.build]
backend = { name = "pixi-build-python", version = "*" }

[package.build.config]
noarch = false
compilers = ["c"]

[package.host-dependencies]
python = "*"
pip = "*"
setuptools = ">=77"
cython = ">=3.0"
#<mpilib> = "*"  # <---- what "mpilib" should we put here? it's supposed to be either "mpich" or "openmpi"
                 # auto-inserted by pixi from the feature dependencies. Without this we don't have a mpi.h
                 # for the compiler to consume.

[package.run-dependencies]
python = "*"

Oh, both openmpi and mpich have run_exports.

leofang avatar Nov 12 '25 00:11 leofang

(EDIT: updated the above pixi.toml to better reflect the same intention as in the CuPy case, a cartesian product of build variants: https://github.com/prefix-dev/pixi/issues/4139#issuecomment-3518989605.)

leofang avatar Nov 12 '25 01:11 leofang

Still thinking out loud... sorry for message bombs 😛

# ...
[package.host-dependencies]
# ...
#<mpilib> = "*"  # <---- what "mpilib" should we put here? it's supposed to be either "mpich" or "openmpi"
                 # auto-inserted by pixi from the feature dependencies. Without this we don't have a mpi.h
                 # for the compiler to consume.
# ...

Thinking out loud... Maybe this is the actual bug, and it has nothing to do with run_exports: pixi simply misses adding the variant dependencies to the host dependencies, and so (see the sketch after the list):

  • in the case of CuPy, we have weird version conflicts (example1, example2) because the version constraint from the variant is not taken into account
  • in the case of mpi4py, the MPI library is not installed into the host env, resulting in a build-time error due to the missing headers.
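
To make the hypothesis concrete, here is a hypothetical sketch of the host requirements I would expect pixi to effectively use when building the py313-mpich variant of the mpi4py example above (pixi does not derive anything like this today):

[package.host-dependencies]
python = "3.13.*"  # constrained by the py313 feature / python variant
pip = "*"
setuptools = ">=77"
cython = ">=3.0"
mpich = "*"        # <<< injected from the mpich feature / mpi variant, provides mpi.h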

If ambiguous semantics are a concern, perhaps we should deprecate [feature.<feature_name>.dependencies] in favor of [feature.<feature_name>.{build-,host-,run-}dependencies]? That way, once the issue is fixed, it would not matter whether run_exports exists, because a pixi user could add the dependency to the manifest explicitly.
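
A hypothetical sketch of what that syntax could look like for the mpi4py example (these tables do not exist in pixi today):

[feature.mpich.host-dependencies]
mpich = "*"                                   # provides mpi.h at build time

[feature.mpich.run-dependencies]
mpi = { version = "*", build = "*mpich*" }    # keep the mutex constraint at runtime

[feature.openmpi.host-dependencies]
openmpi = "*"

[feature.openmpi.run-dependencies]
mpi = { version = "*", build = "*openmpi*" }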

leofang avatar Nov 12 '25 01:11 leofang