
Run with Test Plan Id and no run id doesn't capture results

Open · conversica-aaronpa opened this issue Mar 08 '19 · 10 comments

Describe the bug

When I try to get results into an existing Test Plan, the parameters are accepted, but no results are found in the UI.

To Reproduce

Create a TestRail configuration file with the following valid values:

[API]
url = https://conversica.testrail.io
email = [email protected]
[TESTRUN]
assignedto_id = 2
project_id = 7

Execute a pytest run with command-line arguments like the following (with valid values), where 65 is an empty Test Plan whose descriptive name identifies the environment:

--testrail --tr-config=testrail.cfg --tr-password=MyRealPassword --tr-plan-id=65

Output

python3 -m pytest --junitxml logs/out_report.xml --html /logs/out_report.html --variables /config/config.json --testrail --tr-config=testrail.cfg --tr-password=password --tr-plan-id=65
============================= test session starts ==============================
platform linux -- Python 3.6.8, pytest-4.3.0, py-1.8.0, pluggy-0.9.0
pytest-testrail: existing testplan #65 selected
rootdir: /, inifile:
plugins: variables-1.7.1, testrail-2.3.3, metadata-1.8.0, html-1.20.0, cov-2.6.1
collected 60 items                                                             

tests/menu_walk/test_leadmanager_listview.py ....                        [  6%]
tests/menu_walk/test_leadmanager_listview_morefilters.py ....FF..        [ 20%]
tests/menu_walk/test_leadmanager_responseview.py ....                    [ 26%]
tests/menu_walk/test_leadmanager_resview_morefilters.py .......          [ 38%]
tests/menu_walk/test_overview.py ..                                      [ 41%]
tests/menu_walk/test_reporting_assistantactivity.py .....                [ 50%]
tests/menu_walk/test_reporting_conversation.py ....                      [ 56%]
tests/menu_walk/test_reporting_customreports.py .......                  [ 68%]
tests/menu_walk/test_reporting_leadprocess.py .....                      [ 76%]
tests/menu_walk/test_reporting_leadsources.py .....                      [ 85%]
tests/menu_walk/test_reporting_newleads.py .....                         [ 93%]
tests/menu_walk/test_reporting_repperformance.py ....                    [100%]
[testrail] Start publishing
[testrail] Testcases to publish: 646, 647, 648, 649, 650, 651, 652, 653, 654, 655, 656, 657, 658, 659, 660, 661, 678, 679, 680, 681, 682, 683, 684, 685, 686, 687, 688, 689, 690, 691, 692, 693, 694, 695, 696, 697, 698, 699, 700, 701, 702, 703, 704, 705, 706, 707, 708, 709, 710, 711, 712, 713, 714, 715, 716, 717, 718, 719, 720, 721
[testrail] Testruns to update: 
[testrail] End publishing

Expected behavior

A Test Run with an auto-generated name should be added to the Test Plan, much like a Test Run is created and populated when no Test Run or Test Plan ID is supplied.

Comment

I'm just trying to find a simple flow that lets me group newly auto-generated Test Runs under a grouping object (a Test Plan seems logical) for runs against a given environment; I want to group Test Runs for the QA and Stage environments separately. The closest I've found is to pass a static Test Run id for a run with a descriptive name, but that captures results showing only the most recent outcome at the aggregated summary level. I think I want a separate run for each actual build-server run, but grouped, rather than relying on a naming strategy, for example.

conversica-aaronpa avatar Mar 08 '19 22:03 conversica-aaronpa

Accidentally clicked close; re-opened.

conversica-aaronpa avatar Mar 08 '19 22:03 conversica-aaronpa

@conversica-aaronpa I'll take a look at this when I get a chance.

You left your password in the sample output; I have removed it. Please change your password on TestRail ASAP.

allankp avatar Mar 11 '19 15:03 allankp

Whoops, forgot about that echo. Done, thanks.

conversica-aaronpa avatar Mar 11 '19 16:03 conversica-aaronpa

Looking at pytest-testrail and the TestRail API some more, I'm surprised that add_run doesn't accept an optional plan id; to create a run inside a plan, it looks like you have to call add_plan_entry in place of add_run (a sketch of the two calls is below). That's more complicated than I expected, so perhaps this is actually a feature request. If there is an existing feature (suites? we are currently single-suite by default in cloud hosting) that lets me organize similar test runs, created on the fly by automated runs, into Plans or some other grouping per environment/run-reason, that would work.
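
For reference, a minimal sketch of the two TestRail API v2 calls being compared, using requests; the URL, credentials, and ids below are placeholders, not values from this issue:

import requests

BASE = "https://example.testrail.io/index.php?/api/v2"
AUTH = ("user@example.com", "api-key-or-password")

def add_run(project_id, suite_id, name, case_ids):
    # Standalone run in a project -- what the plugin creates today.
    return requests.post(
        f"{BASE}/add_run/{project_id}",
        json={"suite_id": suite_id, "name": name,
              "include_all": False, "case_ids": case_ids},
        auth=AUTH,
    ).json()

def add_run_in_plan(plan_id, suite_id, name, case_ids):
    # Run *inside* an existing plan -- note the different endpoint, and
    # that the new run's id comes back nested in the "runs" list.
    entry = requests.post(
        f"{BASE}/add_plan_entry/{plan_id}",
        json={"suite_id": suite_id, "name": name,
              "include_all": False, "case_ids": case_ids},
        auth=AUTH,
    ).json()
    return entry["runs"][0]["id"]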

conversica-aaronpa avatar Mar 11 '19 17:03 conversica-aaronpa

Hello @conversica-aaronpa, the options --tr-run-id and --tr-plan-id don't automatically create a test run; your test plan must already contain one or more test runs. These options only work on an existing test plan/run. If you want to create a new test run, you must not use these options. If you want to create a new test run inside an existing test plan, you're right, that's a new behavior/feature.
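
To illustrate the current behavior (ids and file names are placeholders):

# creates a brand-new test run in the project configured in testrail.cfg:
python3 -m pytest --testrail --tr-config=testrail.cfg

# publishes into a run, or the runs under a plan, that must already exist:
python3 -m pytest --testrail --tr-config=testrail.cfg --tr-run-id=123
python3 -m pytest --testrail --tr-config=testrail.cfg --tr-plan-id=65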

apallier avatar Mar 11 '19 17:03 apallier

Yes, the latter is what I'm after: a new run inside a passed-in plan. I'm not sure what else could be intended when accepting such parameters; what happens now, with no results being logged at all after going through all the motions, doesn't seem right. It shouldn't be difficult to add another path in the logic that calls add_plan_entry in place of add_run for this scenario (roughly as sketched below). As it stands, passing in a test plan id only seems to lose the results. If I can figure it out I'll make a pull request, but it might take me a while.
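
A rough sketch of that extra path; this is not actual pytest-testrail code, and client.send_post plus all the surrounding names are stand-ins for whatever the plugin uses internally:

def select_or_create_run(client, project_id, suite_id, name, case_ids,
                         run_id=None, plan_id=None):
    payload = {"suite_id": suite_id, "name": name,
               "include_all": False, "case_ids": case_ids}
    if run_id:
        # current behavior: publish into the given, pre-existing run
        return run_id
    if plan_id:
        # proposed behavior: create a fresh run under the given plan
        entry = client.send_post("add_plan_entry/%s" % plan_id, payload)
        return entry["runs"][0]["id"]
    # default behavior: create a standalone run in the project
    return client.send_post("add_run/%s" % project_id, payload)["id"]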

conversica-aaronpa avatar Mar 12 '19 05:03 conversica-aaronpa

@conversica-aaronpa OK, feel free to open a pull request.

apallier avatar Mar 12 '19 12:03 apallier

PR opened as https://github.com/allankp/pytest-testrail/pull/92; I'll add example output there.

conversica-aaronpa avatar Mar 12 '19 21:03 conversica-aaronpa

I've encountered this issue due to sheer confusion. You have two parameters that create test runs, and two that update test runs. Unfortunately, plan_id has use cases for both:

create test run:

  • project_id
  • milestone_id

overrides creation and only updates existing test run(s):

  • run_id
  • plan_id

From what I understand, there was an original use case where plan_id would update all test runs that exist under a test plan (that doesn't make any sense to me, but that's how it works).

I think there are two options:

  1. Fix plan_id to follow suit with milestone_id and project_id. This makes logical sense, since updating a test run would only happen when you explicitly supply run_id.
  2. Make a new parameter for test run creation under a plan: --tr-testrun-plan-id on the command line and, say, plan_newrun_id as the config value (a config sketch follows below). This doesn't make logical sense as a naming scheme, but it does allow for backwards compatibility.
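
A sketch of what that hypothetical config might look like; plan_newrun_id is the key proposed above, not one the plugin currently supports:

[TESTRUN]
project_id = 7
# hypothetical key: create a new test run under existing plan 65
plan_newrun_id = 65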

clifter1 avatar Sep 09 '21 15:09 clifter1

There is a separate API call for adding a test run to a test plan (add_plan_entry):

https://www.gurock.com/testrail/docs/api/reference/plans#addruntoplanentry

Maybe --tr-testrun-planentry-id on the command line and planentry_id in the config as the values for creating the test run under a test plan?

clifter1 avatar Sep 09 '21 21:09 clifter1