Click 8.2.1 CliRunner raises SystemExit(1) when using click.File's stdin support
In Click 8.2.1, when testing a command that uses click.File's stdin support as an argument, CliRunner.invoke() always raises SystemExit(1) with Aborted! on stderr.
This looks like a regression introduced by the change in #2934, as this Aborted! is generated when the main Click entry point catches an EOFError.
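For context, a minimal sketch (illustrative only, not Click's actual code) of how a standalone CLI entry point turns an escaping EOFError into the Aborted! / SystemExit(1) behavior described above:

```python
import sys


def run_standalone(callback):
    """Sketch of a standalone CLI entry point: an EOFError (or
    KeyboardInterrupt) escaping the callback is reported as an abort."""
    try:
        callback()
        return 0
    except (EOFError, KeyboardInterrupt):
        # Click prints "Aborted!" to stderr and exits with status 1 here.
        print("Aborted!", file=sys.stderr)
        return 1
```

So any change that makes a stream raise EOFError on an ordinary read, rather than only during a prompt, surfaces as exit code 1 in tests.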
Reproducer:
#!/usr/bin/env python3
import click
from click.testing import CliRunner


@click.command()
@click.argument("file_input", type=click.File("r"))
def cli(file_input):
    for line in file_input.readlines():
        print(line.rstrip())


def test_cli():
    result = CliRunner().invoke(cli, ["-"], input="test\n")
    assert result.exit_code == 0, str(result)
    assert result.output == "test\n"


if __name__ == "__main__":
    cli()
nicolasd@pythagore:~/tmp/repro$ uv venv -p python3.13 venv
Using CPython 3.13.3 interpreter at: /usr/bin/python3
Creating virtual environment at: venv
Activate with: source venv/bin/activate
nicolasd@pythagore:~/tmp/repro$ . ./venv/bin/activate
(venv) nicolasd@pythagore:~/tmp/repro$ uv pip install pytest Click==8.2.1
Resolved 5 packages in 5ms
Installed 5 packages in 6ms
+ click==8.2.1
+ iniconfig==2.1.0
+ packaging==25.0
+ pluggy==1.6.0
+ pytest==8.3.5
(venv) nicolasd@pythagore:~/tmp/repro$ echo foo | python3 test_repro.py -
foo
(venv) nicolasd@pythagore:~/tmp/repro$ pytest test_repro.py
===================================================================== test session starts =====================================================================
platform linux -- Python 3.13.3, pytest-8.3.5, pluggy-1.6.0
rootdir: /home/nicolasd/tmp/repro
collected 1 item
test_repro.py F [100%]
========================================================================== FAILURES ===========================================================================
__________________________________________________________________________ test_cli ___________________________________________________________________________
    def test_cli():
        result = CliRunner().invoke(cli, ["-"], input="test\n")
>       assert result.exit_code == 0, str(result)
E       AssertionError: <Result SystemExit(1)>
E       assert 1 == 0
E        +  where 1 = <Result SystemExit(1)>.exit_code

test_repro.py:16: AssertionError
=================================================================== short test summary info ===================================================================
FAILED test_repro.py::test_cli - AssertionError: <Result SystemExit(1)>
====================================================================== 1 failed in 0.09s ======================================================================
(venv) nicolasd@pythagore:~/tmp/repro$ uv pip install Click==8.2.0
Resolved 1 package in 3ms
Uninstalled 1 package in 1ms
Installed 1 package in 1ms
- click==8.2.1
+ click==8.2.0
(venv) nicolasd@pythagore:~/tmp/repro$ pytest test_repro.py
===================================================================== test session starts =====================================================================
platform linux -- Python 3.13.3, pytest-8.3.5, pluggy-1.6.0
rootdir: /home/nicolasd/tmp/repro
collected 1 item
test_repro.py . [100%]
====================================================================== 1 passed in 0.09s ======================================================================
Environment:
- Python version: 3.13
- Click version: 8.2.1
Happy to review a PR!
This is clearly a regression. It was previously covered by tests, but the assertion that now fails was removed in https://github.com/pallets/click/pull/2934.
That change was made because someone else reported that blocking forever instead of raising EOFError did not match how input works when calling the command in a real terminal. Now, when input is exhausted, an error is raised. That seems correct to me, which is why the change was made.
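The tension between the two reports can be seen with plain Python stream semantics; this sketch is independent of Click's internals (prompt_from is a hypothetical helper, not a Click API):

```python
import io

# A plain file-like object signals end-of-input with "" and never blocks,
# which is what click.File / click.open_file reads rely on:
buf = io.StringIO("only line\n")
assert buf.readline() == "only line\n"
assert buf.readline() == ""


def prompt_from(stream):
    """A prompt, by contrast, is expected to raise EOFError once input
    is exhausted, mirroring Ctrl-D on a real terminal."""
    line = stream.readline()
    if line == "":
        raise EOFError
    return line.rstrip("\n")
```

Satisfying both means the EOFError has to come from the prompting code path, not from the stream itself.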
What do you suggest? If I leave it this way, you call it a regression; if I leave it the other way, they call it a bug. Either someone needs to justify which behavior is more correct, or find a way to satisfy both requests.
I think I found a good trade-off solution in https://github.com/pallets/click/pull/2940 that keeps everyone happy.
After further reading of https://github.com/pallets/click/issues/2787 the introduced behavior indeed makes sense to avoid blocking prompts when running tests.
However, the chosen solution, overriding the iterator protocol of the _NamedTextIOWrapper class, was the root cause of the regression we observed after upgrading Click to 8.2.1 (see the example above).
I reworked https://github.com/pallets/click/commit/262bdf02288daf9a37b9e815b44acafd62b5c6fe to instead raise EOFError from the CliRunner prompt functions, as that fixes the regression while keeping prompts non-blocking during tests.
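A rough sketch of that approach (make_visible_input is a hypothetical name; the actual PR touches Click's testing internals): the EOFError is raised by the test runner's prompt replacement, so ordinary reads on the stream keep their normal ""-at-EOF semantics.

```python
import io


def make_visible_input(stream):
    """Build a prompt replacement for a test runner: reads a line from
    the fake stdin and raises EOFError only when input is exhausted."""
    def visible_input(prompt=""):
        line = stream.readline()
        if not line:
            # Only prompts abort on exhausted input; plain reads via
            # click.File / click.open_file are unaffected.
            raise EOFError
        return line.rstrip("\n")
    return visible_input
```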
@davidism friendly ping to review and merge PR #2940 fixing that issue.
@kieranyyu, who fixed the EOF issue with the CliRunner in https://github.com/pallets/click/commit/262bdf02288daf9a37b9e815b44acafd62b5c6fe, already did a first pass (see https://github.com/pallets/click/pull/2940#discussion_r2105687974).
This also breaks click.open_file('-'):
import click
from click.testing import CliRunner
from pytest import fixture


@click.command()
def main():
    src = click.open_file('-').readlines()
    click.echo(f"Source: {src}")


@fixture
def runner():
    return CliRunner()


def test_program(runner):
    result = runner.invoke(main, input="1\n2")
    print(result.output)
    assert result.exit_code == 0


if __name__ == "__main__":
    main()
For real usage it works fine:
$ echo -ne "1\n2" | python check.py
Source: ['1\n', '2']
$ echo $?
0
but as a test:
$ pytest -svra check.py
=== test session starts ===
platform linux -- Python 3.13.3, pytest-8.4.1, pluggy-1.6.0 -- .../venv/bin/python
cachedir: .pytest_cache
hypothesis profile 'default'
configfile: pyproject.toml
plugins: hypothesis-6.138.13
collected 1 item
check.py::test_program
Aborted!
FAILED
=== FAILURES ===
___ test_program ___
runner = <click.testing.CliRunner object at 0x7fc6cb6b8c20>
    def test_program(runner):
        result = runner.invoke(main, input="1\n2")
        print(result.output)
>       assert result.exit_code == 0
E       assert 1 == 0
E        +  where 1 = <Result SystemExit(1)>.exit_code
check.py:21: AssertionError
=== short test summary info ===
FAILED check.py::test_program - assert 1 == 0
=== 1 failed in 0.23s ===