
feat: Set up comprehensive Python testing infrastructure


Python Testing Infrastructure Setup

Summary

This PR sets up a comprehensive testing infrastructure for the DynaCam evaluation and visualization project. The setup provides a robust foundation for writing and running tests with proper coverage reporting, fixtures, and development workflows.

Changes Made

Package Management

  • Added Poetry configuration in pyproject.toml with project metadata
  • Configured Python ^3.9 requirement to ensure compatibility
  • Added core testing dependencies: pytest, pytest-cov, pytest-mock

Testing Configuration

  • pytest configuration with strict markers, verbose output, and comprehensive coverage settings
  • Coverage configuration with 80% threshold, HTML/XML/terminal reporting
  • Custom test markers (unit, integration, slow) for categorizing tests; registration is sketched after this list
  • Test discovery patterns for proper test file detection
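
Because --strict-markers rejects any marker that has not been declared, each custom marker must be registered where pytest can find it; this PR keeps that declaration in its pytest configuration. The conftest.py-based registration below is only an illustrative, equivalent sketch (the marker descriptions are assumptions):

# Illustrative alternative: registering the same markers from tests/conftest.py.
# This PR declares them in its pytest configuration instead.
def pytest_configure(config):
    config.addinivalue_line("markers", "unit: fast, isolated unit tests")
    config.addinivalue_line("markers", "integration: tests spanning multiple components")
    config.addinivalue_line("markers", "slow: long-running tests")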

Directory Structure

tests/
├── __init__.py
├── conftest.py              # Shared fixtures and configuration
├── test_setup_validation.py # Infrastructure validation tests
├── unit/
│   └── __init__.py
└── integration/
    └── __init__.py

Shared Fixtures (tests/conftest.py)

  • temp_dir: Temporary directory management
  • sample_data_dir: Mock data directory structure
  • sample_annotations: Sample annotation data for testing
  • sample_trajectory: Mock trajectory data
  • mock_config: Configuration fixtures
  • numpy_random_seed: Reproducible random number generation
  • Environment setup and session configuration (two of these fixtures are sketched below)
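
For orientation, here is a minimal sketch of what two of these fixtures can look like; only the fixture names come from this PR, the bodies below are assumptions:

# tests/conftest.py (illustrative sketch, not the PR's exact implementation)
import tempfile
from pathlib import Path

import numpy as np
import pytest

@pytest.fixture
def temp_dir():
    # Yield a throwaway directory that is removed after the test finishes.
    with tempfile.TemporaryDirectory() as d:
        yield Path(d)

@pytest.fixture
def numpy_random_seed():
    # Seed NumPy so tests that draw random numbers are reproducible.
    np.random.seed(42)
    return 42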

Development Commands

  • poetry run test - Run all tests with coverage
  • poetry run tests - Alternative test command
  • Standard pytest options are available (-v, -k, -m, etc.); see the entry-point sketch after this list
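
poetry run test works by mapping a command name to a Python callable through a [tool.poetry.scripts] entry in pyproject.toml. A minimal sketch of such an entry point follows; the module and function names are assumptions, and any extra command-line options are simply forwarded to pytest:

# run_tests.py (hypothetical module referenced by an entry such as test = "run_tests:main")
import sys

import pytest

def main() -> None:
    # Forward any extra CLI arguments (-v, -m unit, file paths, ...) to pytest
    # and exit with pytest's return code.
    sys.exit(pytest.main(sys.argv[1:]))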

Git Configuration

  • Updated .gitignore with testing artifacts:
    • .pytest_cache/, .coverage, htmlcov/, coverage.xml
    • Claude Code settings (.claude/*)
    • Additional testing artifacts (*traj_vis/, *.npz, *.npy)

Validation

The setup includes comprehensive validation tests that verify:

  • ✅ Pytest is working correctly
  • ✅ Custom fixtures are available and functional
  • ✅ Test markers are properly configured
  • ✅ Project structure is correct
  • ✅ Coverage reporting is configured
  • ✅ All fixtures provide expected data structures

All 14 validation tests pass successfully (a representative example is sketched below).
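
A representative example of such a validation test (illustrative only; the actual tests in tests/test_setup_validation.py may differ):

# Checks that a shared fixture is available and behaves as expected.
import pytest

@pytest.mark.unit
def test_temp_dir_fixture_is_writable(temp_dir):
    probe = temp_dir / "probe.txt"  # assumes temp_dir yields a pathlib.Path
    probe.write_text("ok")
    assert probe.read_text() == "ok"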

Usage Instructions

Running Tests

# Run all tests
poetry run test

# Run with verbose output
poetry run test -v

# Run specific test markers
poetry run test -m unit
poetry run test -m integration

# Run with coverage report
poetry run test --cov-report=html

# Run specific test file
poetry run test tests/test_setup_validation.py

Coverage Reports

  • Terminal: Shows coverage summary after test run
  • HTML: Generated in htmlcov/ directory
  • XML: Generated as coverage.xml for CI/CD integration

Writing New Tests

  1. Create test files in tests/unit/ or tests/integration/
  2. Use available fixtures from conftest.py
  3. Apply appropriate markers: @pytest.mark.unit, @pytest.mark.integration, @pytest.mark.slow
  4. Follow naming conventions: test_*.py files, Test* classes, test_* functions (see the example after this list)
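
A hypothetical new test following these conventions (the fixture usage and data shape are assumptions for illustration):

# tests/unit/test_trajectory_example.py
import pytest

@pytest.mark.unit
def test_sample_trajectory_is_not_empty(sample_trajectory):
    # sample_trajectory comes from tests/conftest.py; its exact structure is
    # assumed here only for the sake of the example.
    assert len(sample_trajectory) > 0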

Notes

  • The setup currently includes minimal production dependencies (only numpy) to avoid environment compatibility issues
  • Additional project dependencies can be added to pyproject.toml as needed
  • The poetry.lock file should be committed to ensure reproducible builds
  • Coverage threshold is set to 80% but can be adjusted in pyproject.toml

This testing infrastructure provides a solid foundation for maintaining code quality and enabling test-driven development practices in the project.

llbbl · Sep 04 '25 14:09