Fix failing test: test_nonexistant_db
Description
Fixes #5021
The test_nonexistant_db test in test_ui would attempt to read from stdin under certain conditions (particularly when run on its own, e.g. pytest -k nonex test/test_ui.py).
This diff fixes the issue.
Note: I didn't really figure out what the root cause of the initial issue was (since the tests could sometimes pass when run with the entire test suite), but if this works on other folks' machines, then hey, good enough I suppose...
Thanks! It's likely this does address the proximate problem, but I think it might be nice to do a little extra homework here to identify the hidden test ordering dependency. In particular, we could do some experiments that would run tests in different orders/subsets to identify which one is causing this test to pass when it doesn't alone; that could identify some other test that is (worrisomely) not resetting its changes to global state.
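One way to run that kind of experiment, sketched below against plain unittest discovery (the script, directory, and seed here are illustrative, not something that exists in the repo):

```python
# shuffle_tests.py -- hypothetical helper: run the suite in a random but
# reproducible order so an order-dependent failure can be bisected.
import random
import unittest


def iter_cases(suite):
    """Flatten a (possibly nested) TestSuite into individual test cases."""
    for item in suite:
        if isinstance(item, unittest.TestSuite):
            yield from iter_cases(item)
        else:
            yield item


def shuffled_suite(seed, start_dir="test"):
    random.seed(seed)
    cases = list(iter_cases(unittest.TestLoader().discover(start_dir)))
    random.shuffle(cases)
    return unittest.TestSuite(cases)


if __name__ == "__main__":
    seed = 12345  # record the seed so a failing order can be replayed exactly
    print(f"running with seed {seed}")
    unittest.TextTestRunner(verbosity=1).run(shuffled_suite(seed))
```

Recording the seed (or the resulting order) makes it possible to rerun exactly the subset and order that exposed a failure.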
Sounds good to me; I think there might be some other hidden problems that reordering will bring out.
When I shuffled the test order around for a trial, the test_tags_not_restored test broke as well as the one I initially opened this issue for:
======================================================================
ERROR: test_nonexistant_db (test_ui.ConfigTest.test_nonexistant_db)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/pjulien/repos/beets/test/test_ui.py", line 940, in test_nonexistant_db
self.run_command("test", lib=None)
File "/Users/pjulien/repos/beets/test/helper.py", line 457, in run_command
beets.ui._raw_main(_convert_args(list(args)), lib)
File "/Users/pjulien/repos/beets/beets/ui/__init__.py", line 1848, in _raw_main
subcommands, plugins, lib = _setup(options, lib)
^^^^^^^^^^^^^^^^^^^^
File "/Users/pjulien/repos/beets/beets/ui/__init__.py", line 1697, in _setup
lib = _open_library(config)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/pjulien/repos/beets/beets/ui/__init__.py", line 1757, in _open_library
_ensure_db_directory_exists(dbpath)
File "/Users/pjulien/repos/beets/beets/ui/__init__.py", line 1751, in _ensure_db_directory_exists
os.makedirs(newpath)
File "<frozen os>", line 215, in makedirs
File "<frozen os>", line 215, in makedirs
File "<frozen os>", line 215, in makedirs
[Previous line repeated 1 more time]
File "<frozen os>", line 225, in makedirs
OSError: [Errno 30] Read-only file system: b'/xxx'
======================================================================
FAIL: test_tags_not_restored (test_importer.ScrubbedImportTest.test_tags_not_restored)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/pjulien/repos/beets/test/test_importer.py", line 307, in test_tags_not_restored
self.assertEqual(imported_file.artist, None)
AssertionError: 'Tag Artist' != None
Unfortunately, I didn't save the test order for this run, but I'll be playing around and learning more about the beets test suite ordering to see what's going on...
Also, I'm using python3 test/testall.py instead of tox to run all the tests; would that make any difference, or should everything be the same?
I do plan to do a test overhaul 'soon'. I'll keep this in mind.
What do you have in mind for the overhaul (how big is the scope), and why is it needed?
I only have ~5 hours a week to contribute, but I'm interested in helping; seems like the perfect opportunity for me to learn more about the codebase
(whoops, accidentally closed, reopening until I finish fixing the bug)
Hey guys, I'm wondering how this test was ever passing if it's supposed to fail on stdin?
Here's the callstack for the failing test:
In all cases I can see, it ends up calling Python's input function, triggering stdin.
There's a condition in beets/ui/__init__.py:1744 to skip the user prompt if the path exists, but the test is setting the file path to "/xxx/yyy/not/a/real", and I don't see anything else stopping us from hitting that user prompt.
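For what it's worth, one way to make the test deterministic here would be to refuse the prompt outright, so the run fails instead of blocking on stdin. A hedged sketch only (this is not the change in this PR, and it leaves the expected outcome of the command unasserted); it assumes it sits inside the existing ConfigTest class, so self.run_command comes from the test helper:

```python
from unittest import mock

# Hypothetical variant of the test body: if anything tries to prompt via
# Python's built-in input(), fail immediately rather than reading real stdin.
def test_nonexistant_db(self):
    with mock.patch(
        "builtins.input",
        side_effect=AssertionError("test unexpectedly prompted for input"),
    ):
        self.run_command("test", lib=None)
```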
What do you have in mind for the overhaul (how big is the scope), and why is it needed?
A lot of things. We have spotty test coverage with a lot of things not being tested at all. We also have some complicated tests that could be improved with pytest to make setup and teardown easier. I haven't done a full inventory and I'll be doing it piecemeal, file by file. I just need time/energy/motivation.
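As one concrete (and entirely hypothetical) illustration of the kind of simplification pytest allows, a fixture can replace the setUp/tearDown bookkeeping around temporary config directories:

```python
import pytest


@pytest.fixture
def temp_beets_home(tmp_path, monkeypatch):
    # Point beets at an isolated directory via the BEETSDIR environment
    # variable; monkeypatch undoes this automatically after the test.
    monkeypatch.setenv("BEETSDIR", str(tmp_path))
    return tmp_path


def test_runs_against_isolated_config(temp_beets_home):
    # Any beets invocation in here would read config from the temp directory.
    assert temp_beets_home.exists()
```

The fixture name and test above are invented; the point is only that teardown falls out of fixture scoping rather than being written by hand in every test class.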
Hey guys, I'm wondering how this test was ever passing if it's supposed to fail on stdin?
Indeed; I do think that's what's suspicious here… it seems like some kind of input-wrapping state is being held over from some other test and is therefore suppressing the failure here. So I could be wrong about this, but I think we actually have two problems:
- This test is broken and it should fail.
- Some other test is leaving beets in a state that suppresses this test's failure.
Does that seem right?
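If that's the shape of it, one cheap way to hunt for the second problem could be an autouse fixture (hypothetical, not in the suite today) that fails any test which swaps out stdin or the input builtin and doesn't restore it:

```python
import builtins
import sys

import pytest


@pytest.fixture(autouse=True)
def fail_on_leaked_input_state():
    # Snapshot the globals a test might wrap, then verify they were restored.
    stdin_before = sys.stdin
    input_before = builtins.input
    yield
    assert sys.stdin is stdin_before, "test left sys.stdin replaced"
    assert builtins.input is input_before, "test left builtins.input patched"
```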