GaNDLF
Added programmatic pip list
Fixes #863
Proposed Changes
- Added a programmatic way to invoke `pip list` (as opposed to a subprocess call, which is unsafe).
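For illustration, a minimal sketch of one programmatic approach, using `importlib.metadata` (this is an assumption about the technique for illustration only, not necessarily the exact code added in this PR):

```python
# Illustrative sketch: list installed packages programmatically instead of
# shelling out to `pip list` (not necessarily the exact code in this PR).
from importlib.metadata import distributions


def installed_packages() -> dict:
    """Return a mapping of installed package names to their versions."""
    return {dist.metadata["Name"]: dist.version for dist in distributions()}


if __name__ == "__main__":
    for name, version in sorted(installed_packages().items()):
        print(f"{name}=={version}")
```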
Checklist
- [x] CONTRIBUTING guide has been followed.
- [x] PR is based on the current GaNDLF master.
- [x] Non-breaking change (does not break existing functionality): provide as many details as possible for any breaking change.
- [x] Function/class source code documentation added/updated (ensure `typing` is used to provide type hints, including and not limited to using `Optional` if a variable has a pre-defined value).
- [x] Code has been blacked for style consistency and linting.
- [x] If applicable, version information has been updated in GANDLF/version.py.
- [x] If adding a git submodule, add to list of exceptions for black styling in pyproject.toml file.
- [x] Usage documentation has been updated, if appropriate.
- [x] Tests added or modified to cover the changes; if coverage is reduced, please give explanation.
- [x] If customized dependency installation is required (i.e., a separate `pip install` step is needed for the PR to be functional), please ensure it is reflected in all the files that control the CI, namely: python-test.yml, and all docker files [1,2,3].
MLCommons CLA bot: All contributors have signed the MLCommons CLA ✍️ ✅
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 94.62%. Comparing base (90836cf) to head (d66a61a).
Additional details and impacted files
@@ Coverage Diff @@
## master #864 +/- ##
==========================================
+ Coverage 94.60% 94.62% +0.01%
==========================================
Files 160 160
Lines 9450 9465 +15
==========================================
+ Hits 8940 8956 +16
+ Misses 510 509 -1
| Flag | Coverage Δ | |
|---|---|---|
| unittests | 94.62% <100.00%> (+0.01%) | :arrow_up: |
Flags with carried forward coverage won't be shown.
Mmmm. One more thought. The previous `debug_info` output was quite concise, and that's why it was useful. Now it would be harder for the user to copy it from the terminal output. Is that OK?
> Mmmm. One more thought. The previous `debug_info` output was quite concise, and that's why it was useful. Now it would be harder for the user to copy it from the terminal output. Is that OK?
This is a good point. Should we flush the output to a file, `gandlf_debug.info` or something?
How do we use it? If we really need the list of packages, I'd prefer to use stdout & bash pipes instead of hardcoded flushing to a file, so the user does this:
`gandlf_debugInfo >> gandlf_debug_info.log`
That way the user can both (1) output the debug info to the terminal to see what's happening, and (2) store it in a file for easier usage if needed.
That would be totally fine for a user who is conversant with a terminal interface, but for a "regular" user, the file creation might work better, no? Something like this:
gandlf debug-info
Debug information has been written to "XYZ/gandlf_debug.info"; please either copy and paste this file or its contents to further guide GaNDLF maintainers towards solving your issue.
As you know, I have a personal preference for using `tempfile.gettempdir()` for the `XYZ` path. 😄
Thoughts?
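For illustration, a minimal sketch of this file-based variant (the `write_debug_file` helper and the debug-text argument are hypothetical, used here only to show the idea):

```python
# Hypothetical sketch of the file-based variant suggested above; the helper
# name and the way the debug text is collected are assumptions.
import os
import tempfile


def write_debug_file(debug_text: str) -> str:
    """Write the debug information to a file in the system temp directory."""
    path = os.path.join(tempfile.gettempdir(), "gandlf_debug.info")
    with open(path, "w") as f:
        f.write(debug_text)
    return path


if __name__ == "__main__":
    written_to = write_debug_file("example debug information")
    print(
        f'Debug information has been written to "{written_to}"; please either '
        "copy and paste this file or its contents to further guide GaNDLF "
        "maintainers towards solving your issue."
    )
```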
> Something like this:
Yeah, that's a good solution if we need pip versions.
But my question is in the other direction: the existing solution of printing a short but very informative debug_info looks really great IMO. It's very easy to use: just copy those 10 lines from the terminal and that's all. Both trying to copy 100 lines while scrolling the terminal, and trying to copy a file from a remote machine, look like more complex solutions for the user. So I want to make sure: do we really need the whole list of package versions? If it is needed not always but only sometimes, maybe add an additional flag to the command, like `-v`?
Yup, adding a `-v` flag makes perfect sense!
Just to highlight: all solutions look quite similar to me, I don't have a strong preference for any of them :) So I'm just asking and offering alternatives. Any usable solution is OK for me.
> Yup, adding a `-v` flag makes perfect sense!
Done!
Hey @VukW, could you please help me with this error? I am guessing it is coming from the entrypoint test, but I am unable to figure out what to change...
....
FAILED testing/entrypoints/test_debug_info.py::test_case[case0] - AssertionError: Function was not called with the expected arguments: expected_args={'verbose': True} vs actual_args={'verbose': False}, diff ['verbose']
FAILED testing/entrypoints/test_entrypoints_existence.py::test_command_execution[gandlf_debugInfo --help] - subprocess.CalledProcessError: Command '['gandlf_debugInfo', '--help']' returned non-zero exit status 1.
@scap3yvt Hi man! You need to make three fixes:
- You need to add argparse and a new flag to `old_way` also, smth like this: `parser.add_argument('--verbose', '-v', default=False, action='store_true')`, so whenever you run `gandlf_debugInfo` you receive the short output, and when you run `gandlf_debugInfo --verbose` you see the full output. Right now `old_way` is calling the `debug_info()` function without any arguments, while the `verbose` bool param is required. That's why the second test fails. (A fuller sketch of the old-way entrypoint is given after this list.)
- In `testing/entrypoints/test_debug_info.py` you need to update the tests. Say we have the following `CliCase` test (example):
CliCase(
    should_succeed=True,
    new_way_lines=["--foo", "-f"],
    old_way_lines=["--foo_old", "-f"],
    expected_args={"foo": True},
)
That test means that we can run the following commands:
gandlf debug-info --foo
gandlf debug-info -f
gandlf_debugInfo --foo_old
gandlf_debugInfo -f
and all of them should lead to executing the python `debug_info()` function with the expected parameter `foo=True`.
In the original test, new_way_lines and old_way_lines are empty, so we are talking about running the commands without any params:
gandlf debug-info
gandlf_debugInfo
And in this case we should expect the `verbose` variable to be False, right? So this specific test should look like:
CliCase(
    should_succeed=True,
    new_way_lines=[""],
    old_way_lines=[""],
    expected_args={"verbose": False},
)
- Similarly, as the previous test covers running the commands omitting the `-v` flag, you need to add a new test that covers the flag:
CliCase(
    should_succeed=True,
    new_way_lines=["--verbose", "-v"],
    old_way_lines=["--verbose", "-v"],  # you should implement this first, as said in fix (1)
    expected_args={"verbose": True},
)
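For reference, a fuller sketch of what the old-way entrypoint could look like with this flag (the import path and function name are assumptions based on this discussion, not the actual GaNDLF source):

```python
# Hypothetical sketch of the argparse-based "old way" entrypoint with a
# --verbose flag; the import path below is an assumption.
import argparse

from GANDLF.entrypoints.debug_info import _debug_info  # assumed location


def old_way():
    parser = argparse.ArgumentParser(
        prog="GANDLF_DebugInfo",
        description="Generate debugging information for maintainers.",
    )
    # Plain boolean flag: present -> True, absent -> False (no value needed).
    parser.add_argument("--verbose", "-v", default=False, action="store_true")
    args = parser.parse_args()
    _debug_info(verbose=args.verbose)


if __name__ == "__main__":
    old_way()
```

With something like this, `gandlf_debugInfo` prints the short output and `gandlf_debugInfo --verbose` prints the full one, matching what the tests expect.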
Thanks a ton for the guidance, @VukW! I updated the code appropriately.
> @scap3yvt Hi man! You need to make three fixes: […]
This (or an extended example set) would be an awesome addition for the migration guide (#843)! 😄
Hey @VukW, can you help with the errors I am seeing with the entrypoints [ref]?
FAILED testing/entrypoints/test_debug_info.py::test_case[case0] - TypeError: __init__() got an unexpected keyword argument 'metavar'
FAILED testing/entrypoints/test_debug_info.py::test_case[case1] - TypeError: __init__() got an unexpected keyword argument 'metavar'
FAILED testing/entrypoints/test_entrypoints_existence.py::test_command_execution[gandlf_debugInfo --help] - subprocess.CalledProcessError: Command '['gandlf_debugInfo', '--help']' returned non-zero exit status 1.
Hey @VukW, do you think you could shed some light on the error I am seeing? I am pretty sure it is related to this portion of the test:
CliCase(
should_succeed=True,
new_way_lines=["-v"],
old_way_lines=["-v True"], # <<<<< here
expected_args={"verbose": True},
),
Which results in the following error message:
2024-05-04T13:46:01.2015561Z self = ArgumentParser(prog='GANDLF_DebugInfo', usage=None, description='Generate debugging information for maintainers.\n\nCo.../s44172-023-00066-3', formatter_class=<class 'argparse.RawTextHelpFormatter'>, conflict_handler='error', add_help=True)
2024-05-04T13:46:01.2018136Z status = 2, message = 'GANDLF_DebugInfo: error: unrecognized arguments: True\n'
But I am unable to figure out what I am doing wrong.
Hi @scap3yvt, say you are testing the line
old_lines = [
    ...,
    "--verbose True",
    ...
]
It is the same as if you were manually running the following command:
`gandlf_debugInfo --verbose True`
So you can run it to see / debug what happens. Anyway, if you are changing anything, it is worth running the corresponding command manually to check whether it succeeds, or to debug any issues if there are any; it's much quicker than committing and waiting until the CI step finishes :)
In this specific case we expect the old command's arg to be a normal flag, so you don't need to pass any value to it (`True` in particular); the command should look just like `--verbose`. BTW, two commits ago (after removing metavar) the tests were already passing.
Thanks for your input, @VukW! The tests are passing, but the added lines are not getting covered by the tests. Any idea why this might be happening?
@scap3yvt This happens when the section of the code is never reached. Enable verbose in tests to reach this part of the code.
> @scap3yvt This happens when the section of the code is never reached. Enable verbose in tests to reach this part of the code.
I was assuming that these lines would do the trick, but they apparently don't work as expected. Any insight, @VukW?
Hey folks, let me look in more detail a bit later because I didn't find the answer at first glance.
Actually, `_debug_info()` should not be covered at all (at least I don't remember any test covering it in the base branch). But for some reason the base branch shows it as covered, and after the PR it is not covered, even though the PR didn't change anything here...
@VukW, is it possible that the `CliCase` that should cover the `--verbose` option is not getting called?
What `CliCase` does is check that the `_debug_info()` function was called with the `verbose` True/False arg. However, for that the test mocks out the real `_debug_info()` function: what is checked is the mock, i.e., when and how it was called. So in reality this test is not running the original `_debug_info` function at all, by design. That's why it's OK that your PR does not cover the function.
What is strange to me is why the base branch thinks the function is covered, and why that changed during the PR; that's why coverage shows you the "coverage was reduced" error. So I'd have to look into this. If the function is really covered somehow in the base branch, then we'd need to fix that.
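To make that concrete, here is a minimal sketch of how such a mock-based entrypoint check works (the module paths and the click-based runner are assumptions, not the actual GaNDLF test harness):

```python
# Hypothetical sketch of a mock-based entrypoint test; module paths and the
# click-based command are assumptions for illustration only.
from unittest import mock

from click.testing import CliRunner

from GANDLF.entrypoints.debug_info import new_way  # assumed command location


def test_debug_info_verbose_flag():
    runner = CliRunner()
    # Patch the real function so the test only verifies how it is called,
    # without executing (or covering) its body.
    with mock.patch("GANDLF.entrypoints.debug_info._debug_info") as mocked:
        result = runner.invoke(new_way, ["--verbose"])
        assert result.exit_code == 0
        mocked.assert_called_once_with(verbose=True)
```

Because the real function body never runs in such a test, its lines are not expected to show up as covered by it.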
Thank you for that detailed explanation!
Hey @VukW - any update on this?
Hi @sarthakpati, thanks for the reminder, I didn't look at it yet. Let me check what's happening a bit later this week.
Checking in!
Checking in, @VukW!
Checking in @VukW!
😅 sorry, yes, I'll reach this PR once...