
feat: Set up comprehensive Python testing infrastructure with Poetry


Set Up Python Testing Infrastructure

Summary

This PR establishes a comprehensive testing infrastructure for the SDA (Speech-Driven Animation) project, using Poetry as the package manager and pytest as the testing framework. The setup gives developers a solid foundation for writing and running tests efficiently.

Changes Made

Package Management Migration

  • Migrated from setup.py to Poetry: Created pyproject.toml, carrying over all existing dependencies from setup.py (a sketch of the resulting layout follows this list)
  • Poetry Configuration: Set up Poetry as the package manager, so that dependencies, metadata, and packaging are declared in one place
  • Lock File Policy: Ensured .gitignore does not exclude poetry.lock, so the lock file is committed for reproducible builds
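
For reference, a minimal sketch of the resulting pyproject.toml layout, assuming placeholder metadata and illustrative runtime dependencies (the real entries mirror whatever setup.py declared):

    [tool.poetry]
    name = "sda"
    version = "1.0.0"
    description = "Speech-Driven Animation"

    [tool.poetry.dependencies]
    python = "^3.8"   # illustrative constraint
    numpy = "*"       # illustrative; actual pins come from setup.py
    torch = "*"       # illustrative

    [build-system]
    requires = ["poetry-core"]
    build-backend = "poetry.core.masonry.api"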

Testing Framework Setup

  • Testing Dependencies: Added as development dependencies (the corresponding Poetry group is sketched after this list):
    • pytest (^7.4.0) - Core testing framework
    • pytest-cov (^4.1.0) - Coverage reporting
    • pytest-mock (^3.12.0) - Mocking utilities
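
In Poetry these typically live in a dedicated dev group so they are not installed alongside the runtime dependencies. A sketch (whether the project uses the group syntax, which requires Poetry 1.2+, or the older dev-dependencies table is an assumption):

    [tool.poetry.group.dev.dependencies]
    pytest = "^7.4.0"
    pytest-cov = "^4.1.0"
    pytest-mock = "^3.12.0"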

Testing Configuration

  • Pytest Configuration in pyproject.toml (together with the coverage settings below, sketched after this list):

    • Strict markers and configuration enforcement
    • Coverage reporting with HTML and XML output formats
    • 80% coverage threshold requirement
    • Custom test markers: unit, integration, slow
    • Test discovery patterns for test_*.py and *_test.py
  • Coverage Configuration:

    • Source tracking for the sda package
    • Exclusions for test files, migrations, and boilerplate code
    • Multiple report formats (terminal, HTML, XML)
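
A sketch of how these settings are commonly expressed in pyproject.toml; the exact values below are illustrative rather than copied from the PR:

    [tool.pytest.ini_options]
    addopts = "--strict-markers --strict-config --cov=sda --cov-report=term-missing --cov-report=html --cov-report=xml --cov-fail-under=80"
    testpaths = ["tests"]
    python_files = ["test_*.py", "*_test.py"]
    markers = [
        "unit: fast, isolated unit tests",
        "integration: tests exercising multiple components",
        "slow: long-running tests",
    ]

    [tool.coverage.run]
    source = ["sda"]
    omit = ["tests/*", "*/migrations/*"]

    [tool.coverage.report]
    exclude_lines = ["pragma: no cover", "if __name__ == .__main__.:"]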

Directory Structure

    tests/
    ├── __init__.py
    ├── conftest.py               # Shared pytest fixtures
    ├── test_setup_validation.py  # Validation tests
    ├── unit/
    │   └── __init__.py
    └── integration/
        └── __init__.py

Test Fixtures (conftest.py)

Created comprehensive fixtures for common testing needs (a sketch of a few of them follows the list):

  • temp_dir: Temporary directory management
  • sample_audio_data: NumPy array audio data
  • sample_image_data: NumPy array image data
  • sample_tensor: PyTorch tensor samples
  • mock_config: Configuration dictionaries
  • sample_model_weights: Model weight files
  • sample_audio_file: WAV file creation
  • sample_image_file: PNG file creation
  • reset_random_seeds: Automatic seed resetting
  • gpu_available: GPU availability check
  • mock_env_vars: Environment variable mocking
  • capture_logs: Log message capturing
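
A minimal sketch of how a few of these fixtures might look; the array size, sample rate, and seed values are assumptions, not taken from the actual conftest.py:

    import random
    import tempfile
    from pathlib import Path

    import numpy as np
    import pytest

    @pytest.fixture
    def temp_dir():
        # Provide a temporary directory that is removed after the test.
        with tempfile.TemporaryDirectory() as d:
            yield Path(d)

    @pytest.fixture
    def sample_audio_data():
        # One second of silent audio at an assumed 16 kHz sample rate.
        return np.zeros(16000, dtype=np.float32)

    @pytest.fixture(autouse=True)
    def reset_random_seeds():
        # Re-seed RNGs before every test for deterministic results.
        random.seed(0)
        np.random.seed(0)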

Development Experience

  • Poetry Scripts: Run tests with either of the following (how these entry points can be wired up is sketched after this list):
    • poetry run test
    • poetry run tests
  • Standard pytest options: All pytest CLI options remain available
  • Coverage Reports: Generated in htmlcov/ directory and coverage.xml
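
Entry points like `poetry run test` are typically registered under [tool.poetry.scripts], pointing at a small wrapper that forwards to pytest. A sketch, where the module path scripts.test and the function name main are hypothetical:

    [tool.poetry.scripts]
    test = "scripts.test:main"
    tests = "scripts.test:main"

with the wrapper itself looking roughly like:

    # scripts/test.py (hypothetical wrapper)
    import sys
    import pytest

    def main() -> None:
        # Pass any extra command-line arguments straight through to pytest.
        sys.exit(pytest.main(sys.argv[1:]))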

Other Changes

  • Updated .gitignore: Added entries (representative lines are sketched after this list) for:
    • Testing artifacts (.pytest_cache/, .coverage, htmlcov/, etc.)
    • Virtual environments
    • IDE files
    • Claude-specific directories (.claude/*)
    • Temporary files and OS-specific files
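
Representative lines (illustrative; the actual file may list more):

    # Testing artifacts
    .pytest_cache/
    .coverage
    htmlcov/
    coverage.xml

    # Virtual environments
    .venv/
    venv/

    # IDE and OS files
    .idea/
    .vscode/
    .DS_Store

    # Claude-specific directories
    .claude/*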

Running Tests

  1. Install dependencies:

    poetry install
    
  2. Run all tests:

    poetry run test
    # or
    poetry run tests
    
  3. Run specific test markers:

    poetry run pytest -m unit        # Run only unit tests
    poetry run pytest -m integration # Run only integration tests
    poetry run pytest -m "not slow"  # Skip slow tests
    
  4. View coverage report:

    # After running tests, open the HTML report
    open htmlcov/index.html  # macOS
    xdg-open htmlcov/index.html  # Linux
    

Notes

  • The validation tests confirm that all testing infrastructure components work correctly
  • Coverage is currently at 0%, since no tests against the source code have been written yet
  • The 80% coverage threshold is already configured; developers will need to write tests to meet it
  • All dependencies from the original setup.py have been preserved in the Poetry configuration
  • The poetry.lock file should be committed to ensure reproducible builds across environments

Next Steps

Developers can now:

  1. Write unit tests in the tests/unit/ directory
  2. Write integration tests in the tests/integration/ directory
  3. Use the provided fixtures in conftest.py for common testing scenarios (an example test appears after this list)
  4. Run tests locally before committing changes
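
For example, a first unit test combining the unit marker with a fixture might look like this (the fixture contract assumed here matches the illustrative conftest.py sketch above, not necessarily the real one):

    # tests/unit/test_audio_fixtures.py (hypothetical)
    import numpy as np
    import pytest

    @pytest.mark.unit
    def test_sample_audio_data_is_silent(sample_audio_data):
        # Expect a float32 NumPy array containing only zeros.
        assert sample_audio_data.dtype == np.float32
        assert not np.any(sample_audio_data)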
