
Replace @pytest.mark.order with fixtures and finalizers for E2E tests

Copilot opened this issue 4 months ago · 1 comment

Summary

This change refactors the E2E tests to use pytest fixtures with finalizers instead of @pytest.mark.order decorators, addressing the concern raised in #366 about manually managing test order numbers.

Problem

The E2E test suites (online_tests.py and language_tests.py) were using @pytest.mark.order decorators with numeric values (1-21) to control test execution order. This approach had several limitations:

  • Required manual management of order numbers across 59 test methods
  • Made it difficult to add new tests without renumbering existing ones
  • No automatic cleanup if tests failed midway
  • Cleanup tests (orders 19-21) had to be manually scheduled at the end
  • Required the pytest-order dependency

Solution

Replaced explicit ordering with pytest fixtures that handle setup and teardown automatically:

Fixtures Created (with finalizers for cleanup)

online_tests.py:

  • login_session() - Auto-runs before all tests
  • site_users(login_session) - Creates/deletes test users
  • test_group(site_users) - Creates/deletes test group
  • test_projects(login_session) - Creates/deletes test projects

language_tests.py:

  • login_all_languages() - Tests login in all languages
  • site_users(login_all_languages) - Creates/deletes test users
  • test_group(site_users) - Creates/deletes test group
  • test_projects(login_all_languages) - Creates/deletes test projects
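The fixture chain above can also register cleanup with request.addfinalizer instead of a yield, which guarantees each cleanup runs in reverse order even if a later setup step raises. A minimal sketch of that pattern; create_user/delete_user and the user names are illustrative placeholders, not the actual tabcmd test helpers:

```python
import pytest

created = []  # records which test users currently exist (illustrative only)

def create_user(name):
    """Placeholder for the real tabcmd user-creation helper."""
    created.append(name)

def delete_user(name):
    """Placeholder for the real tabcmd user-deletion helper."""
    created.remove(name)

@pytest.fixture(scope="module")
def site_users(request):
    """Create test users; register cleanup as soon as each user exists."""
    for name in ("alice", "bob"):
        create_user(name)
        # Finalizers run in reverse registration order, even if a later
        # setup step or a test fails part-way through.
        request.addfinalizer(lambda n=name: delete_user(n))
    return created
```

Registering the finalizer immediately after each creation means a failure creating the second user still cleans up the first, which a single teardown block after a yield would also do only if written carefully.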

Example Conversion

Before:

@pytest.mark.order(8)
def test_create_projects(self):
    self._create_project(project_name)
    # ...

@pytest.mark.order(10)
def test_delete_projects(self):
    self._delete_project(project_name)
    # ...

@pytest.mark.order(11)
def test_list_projects(self):
    self._list("projects")

After:

@pytest.fixture(scope="module")
def test_projects(login_session):
    """Create test projects and clean up after."""
    create_project(project_name)
    # ...
    yield
    # Cleanup happens automatically
    delete_project(project_name)
    # ...

def test_list_projects(self, test_projects):
    # Projects automatically created before test runs
    # and deleted after all tests complete
    self._list("projects")

Changes

  • ✅ Removed 59 instances of @pytest.mark.order decorators
  • ✅ Created 8 fixtures (4 per file) with finalizers for automatic cleanup
  • ✅ Updated 51 test methods to use fixture parameters
  • ✅ Removed pytest-order dependency from pyproject.toml

Benefits

  1. Automatic cleanup - Finalizers ensure resources are deleted even if tests fail
  2. Explicit dependencies - Each test declares what it needs via fixture parameters
  3. No manual ordering - No need to assign/manage numeric order values
  4. Scalable - Can add unlimited tests without worrying about renumbering
  5. Standard pattern - Uses pytest's recommended approach for setup/teardown
  6. Better maintainability - Clear separation of setup, test, and teardown phases

Testing

  • All modified test files pass Python syntax validation
  • Fixtures follow pytest best practices with proper scope and dependency chains
  • Cleanup logic preserved from original teardown tests

Fixes #366

Original prompt

This section details the original issue you should resolve

<issue_title>Does pytest have a teardown concept? Consider using it so these mark orders don't have to get increased if we ever had more than 30 actual tests. Maybe @pytest.fixture with a finalizer?</issue_title> <issue_description>Does pytest have a teardown concept? Consider using it so these mark orders don't have to get increased if we ever had more than 30 actual tests. Maybe @pytest.fixture with a finalizer?

Originally posted by @anyoung-tableau in https://github.com/tableau/tabcmd/pull/366#discussion_r2414530783

Same for setup</issue_description>

Comments on the Issue (you are @copilot in this section)

Fixes tableau/tabcmd#369


Copilot · Oct 09 '25

suggestion from Andy: Consider also adding a "before all" setup step that deletes all workbooks and datasources that exist on the site that match the "publishable" name pattern. Currently if you stopped the tests mid-way through execution, they'd never get deleted.

jacalata · Oct 09 '25