
🐛 Fix collecting deps for all extras in multiple input packages

Open · dragly opened this pull request 1 year ago · 12 comments

Previously, an error was always raised when running pip-compile with --all-extras on multiple source files containing extras. The first iteration over the source files would fill the extras variable, so the second iteration would trigger the mutual-exclusion error, since both all_extras and extras were then set.

This change moves the check outside the loop and earlier in the script, so the error is raised before any unnecessary processing happens.

Moving the check outside the loop on its own would not be enough, though: we would then be left with only the extras from the last src_file in the iteration over src_files.

This change therefore also ensures that extras are in fact collected from all packages when --all-extras is passed as an argument.
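In code terms, the change is roughly the following shape (variable names are simplified and `parse_metadata` is a hypothetical stand-in; this is a sketch of the control flow, not the literal diff from cli.py):

```python
def collect_extras(src_files: list[str], extras: set[str], all_extras: bool) -> set[str]:
    """Sketch of the fixed control flow (simplified, not the real code)."""
    # The mutual-exclusion check now runs once, up front, instead of
    # inside the loop, where the extras gathered from the first file
    # used to trip it on the second iteration.
    if all_extras and extras:
        raise SystemExit("--extra has no effect when used with --all-extras")

    collected: set[str] = set(extras)
    for src_file in src_files:
        metadata = parse_metadata(src_file)  # hypothetical helper
        if all_extras:
            # Accumulate across *all* source files instead of keeping
            # only the extras of the last file in the iteration.
            collected |= set(metadata.extras)
    return collected
```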

This fixes #1980

Contributor checklist
  • [ ] Included tests for the changes. (Happy to add a test, but not sure which type is preferred).
  • [x] PR title is short, clear, and ready to be included in the user-facing changelog.
Maintainer checklist
  • [x] Verified one of these labels is present: backwards incompatible, feature, enhancement, deprecation, bug, dependency, docs or skip-changelog as they determine changelog listing.
  • [ ] Assign the PR to an existing or new milestone for the target version (following Semantic Versioning).

dragly · Sep 04 '23

> Thanks for the fix! Could you please add a test that confirms the bug?

Yes 👍

How do you prefer the test to be implemented? I already have an example setup that fails when invoked from the command line. Should I add a test that runs it from the command line, or rather implement an internal test?

dragly · Sep 26 '23

@dragly I'd prefer a test in test_cli_compile.py. Use test_cli_compile_strip_extras as a reference.
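For reference, a minimal sketch of what such a test could look like. The test name, fixture usage, and package layout here are assumptions rather than the shape of the final test, and it assumes setuptools is available to build the fixture metadata (the extras are empty, so no network resolution is needed):

```python
from click.testing import CliRunner

from piptools.scripts.compile import cli


def test_all_extras_with_multiple_source_files(tmp_path, monkeypatch):
    """--all-extras should collect extras from every source file, not error out."""
    monkeypatch.chdir(tmp_path)

    # Two local packages, each declaring one extra.
    for name, extra in (("pkg_a", "dev"), ("pkg_b", "test")):
        pkg_dir = tmp_path / name
        pkg_dir.mkdir()
        (pkg_dir / "pyproject.toml").write_text(
            f'[project]\nname = "{name}"\nversion = "0.1"\n\n'
            f"[project.optional-dependencies]\n{extra} = []\n"
        )

    result = CliRunner().invoke(
        cli,
        [
            "--all-extras",
            "--output-file",
            "-",
            "pkg_a/pyproject.toml",
            "pkg_b/pyproject.toml",
        ],
    )

    # Before the fix, this failed with the --extra/--all-extras
    # mutual-exclusion error once the second file was processed.
    assert result.exit_code == 0, result.output
```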

atugushev · Sep 26 '23

I have now added a test in a fixup commit. The test succeeds, and it will fail if you revert the first commit.

By the way, I struggled a bit while developing this test because pytest consumed all output written to stdout and stderr. The usual pytest -s and pytest -rx flags did not work. How do you usually debug any errors when working on the tests?

dragly · Oct 04 '23

I also see that there is a merge conflict now with tests/test_cli_compile.py. I can resolve that. Do you prefer that I do it in a merge commit or that I rebase this branch on main right away?

dragly · Oct 04 '23

> By the way, I struggled a bit while developing this test because pytest consumed all output written to stdout and stderr. The usual pytest -s and pytest -rx flags did not work. How do you usually debug any errors when working on the tests?

pytest --verbose doesn't work?

> I also see that there is a merge conflict now with tests/test_cli_compile.py. I can resolve that. Do you prefer that I do it in a merge commit or that I rebase this branch on main right away?

I guess that's unimportant.

chrysle · Oct 10 '23

>> By the way, I struggled a bit while developing this test because pytest consumed all output written to stdout and stderr. The usual pytest -s and pytest -rx flags did not work. How do you usually debug any errors when working on the tests?

> pytest --verbose doesn't work?

Weirdly enough, I see the output now. Not sure why it did not work earlier 🤔
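(Aside: one likely reason -s can appear ineffective in these tests is that click's CliRunner captures the command's stdout/stderr itself, so the text never reaches pytest's capture layer; it has to be read off the result object instead. A minimal sketch, assuming the test drives the CLI through CliRunner:)

```python
from click.testing import CliRunner

from piptools.scripts.compile import cli

result = CliRunner().invoke(cli, ["--all-extras", "pyproject.toml"])

# CliRunner swallows the CLI's output, so surface it explicitly;
# this prints with `pytest -s` and shows up in failure reports.
print(result.output)

# invoke() also catches exceptions and stores them on the result,
# so re-raise to get a traceback while debugging.
if result.exception is not None:
    raise result.exception
```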

dragly · Nov 10 '23

@dragly I took a stab at updating the title — it should refer to one high-level effect, not the implementation details. The details and motivation belong in the description if needed. Otherwise, it just looks like a non-atomic collection of loosely related changes being enumerated.

Do you think this is accurate enough?

webknjaz · Jan 29 '24

> @dragly I took a stab at updating the title — it should refer to one high-level effect, not the implementation details. The details and motivation belong in the description if needed. Otherwise, it just looks like a non-atomic collection of loosely related changes being enumerated.

> Do you think this is accurate enough?

Yes, that looks perfect! Thanks for updating it :) I agree with the rationale too 👍

dragly · Jan 30 '24

@dragly thanks! I noticed there's a coverage drop in this PR. Could you check it out and compensate for the lost lines? In particular, pay attention to the indirect-changes tab in Codecov: https://app.codecov.io/gh/jazzband/pip-tools/pull/1981/indirect-changes.

webknjaz · Jan 30 '24

Can't explain the test failure. It's about a missing PKG-INFO file in the build directory, which is obviously present:

  running install
  running install_egg_info
  running egg_info
  writing fake_with_deps.egg-info/PKG-INFO
  writing dependency_links to fake_with_deps.egg-info/dependency_links.txt
  writing requirements to fake_with_deps.egg-info/requires.txt
  writing top-level names to fake_with_deps.egg-info/top_level.txt
  reading manifest file 'fake_with_deps.egg-info/SOURCES.txt'
  writing manifest file 'fake_with_deps.egg-info/SOURCES.txt'
  Copying fake_with_deps.egg-info to build/bdist.linux-x86_64/wheel/fake_with_deps-0.1-py3.8.egg-info
  running install_scripts
  error: [Errno 2] No such file or directory: 'build/bdist.linux-x86_64/wheel/fake_with_deps-0.1-py3.8.egg-info/PKG-INFO'
  error: subprocess-exited-with-error

chrysle · Jan 30 '24

> Can't explain the test failure. It's about a missing PKG-INFO file in the build directory, which is obviously present:

Could be an incomplete build-deps pin (as in, setuptools updated). I suppose using what #1681 addresses in our own tests could make things more stable. Dogfooding FTW!
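(For illustration, pinning the build backend in the test fixtures' pyproject.toml is the kind of thing that would rule out a setuptools regression; the exact versions below are made up:)

```toml
[build-system]
# Pin the backend so a new setuptools release can't silently change
# how the fixture packages build (versions are illustrative).
requires = ["setuptools==69.0.3", "wheel==0.42.0"]
build-backend = "setuptools.build_meta"
```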

webknjaz · Jan 31 '24

> @dragly thanks! I noticed there's a coverage drop in this PR. Could you check it out and compensate for the lost lines? In particular, pay attention to the indirect-changes tab in Codecov: https://app.codecov.io/gh/jazzband/pip-tools/pull/1981/indirect-changes.

I will try, but I have to admit I am not too familiar with reading Codecov reports.

I looked at the page you linked and the documentation for indirect changes. It seems a few conditional imports that depend on the Python version are not being hit.

Is it possible that the Python or pip versions changed between this PR and the base run that Codecov compares against?

Do you have other pointers on how you typically dig into a Codecov change like this?
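(For context, the pattern that typically shows up under "indirect changes" here is a version-gated import like the one below; which branch gets credited with coverage depends entirely on which Python versions' CI jobs uploaded their reports:)

```python
import sys

# Only one branch executes on any given interpreter, so coverage for
# the other branch comes solely from CI jobs on other Python versions.
if sys.version_info >= (3, 11):
    import tomllib
else:
    import tomli as tomllib
```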

dragly · Jan 31 '24

Thank you!

chrysle · Feb 21 '24

> Is it possible that the Python or pip versions changed between this PR and the base run that Codecov compares against?

> Do you have other pointers on how you typically dig into a Codecov change like this?

Sorry, this slipped my attention. If it's conditional imports that depend on the Python version, maybe some of the CI jobs failed to upload coverage (the Codecov service can be flaky like that).

webknjaz · Feb 22 '24