feat: Set up comprehensive Python testing infrastructure
# Python Testing Infrastructure Setup

## Summary
This PR sets up a comprehensive testing infrastructure for the DynaCam evaluation and visualization project. The setup provides a robust foundation for writing and running tests with proper coverage reporting, fixtures, and development workflows.
## Changes Made

### Package Management

- Added Poetry configuration in `pyproject.toml` with project metadata
- Configured a Python `^3.9` requirement to ensure compatibility
- Added core testing dependencies: `pytest`, `pytest-cov`, `pytest-mock`
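As a rough sketch, the relevant `pyproject.toml` sections might look like the following (the package metadata and version specifiers here are illustrative, not the PR's exact values):

```toml
[tool.poetry]
name = "dynacam"
version = "0.1.0"
description = "Evaluation and visualization tooling for DynaCam"

[tool.poetry.dependencies]
python = "^3.9"
numpy = "^1.24"

[tool.poetry.group.dev.dependencies]
pytest = "^7.4"
pytest-cov = "^4.1"
pytest-mock = "^3.11"
```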
### Testing Configuration

- `pytest` configuration with strict markers, verbose output, and comprehensive coverage settings
- Coverage configuration with an 80% threshold and HTML/XML/terminal reporting
- Custom test markers `unit`, `integration`, and `slow` for test categorization
- Test discovery patterns for proper test file detection
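These settings live under pytest's and coverage.py's standard TOML tables; a minimal sketch (the exact option values are assumptions, not copied from the PR):

```toml
[tool.pytest.ini_options]
addopts = "-v --strict-markers --cov --cov-report=term-missing --cov-report=html --cov-report=xml"
markers = [
    "unit: fast, isolated unit tests",
    "integration: tests that exercise multiple components together",
    "slow: long-running tests",
]
testpaths = ["tests"]

[tool.coverage.report]
fail_under = 80
```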
### Directory Structure

```
tests/
├── __init__.py
├── conftest.py                 # Shared fixtures and configuration
├── test_setup_validation.py    # Infrastructure validation tests
├── unit/
│   └── __init__.py
└── integration/
    └── __init__.py
```
### Shared Fixtures (`tests/conftest.py`)

- `temp_dir`: Temporary directory management
- `sample_data_dir`: Mock data directory structure
- `sample_annotations`: Sample annotation data for testing
- `sample_trajectory`: Mock trajectory data
- `mock_config`: Configuration fixture
- `numpy_random_seed`: Reproducible random number generation
- Environment setup and session configuration
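For illustration, a few of these fixtures might be defined along the following lines (the fixture bodies and data shapes below are hypothetical sketches, not the PR's exact code):

```python
# tests/conftest.py -- illustrative sketch of a subset of the fixtures
import numpy as np
import pytest


@pytest.fixture
def temp_dir(tmp_path):
    """Temporary directory, created and cleaned up per test by pytest."""
    return tmp_path


@pytest.fixture
def sample_trajectory():
    """Mock trajectory data; the (N, 3) shape is a placeholder."""
    return np.zeros((10, 3))


@pytest.fixture
def numpy_random_seed():
    """Seed NumPy's global RNG so randomized tests are reproducible."""
    np.random.seed(42)
```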
### Development Commands

- `poetry run test`: Run all tests with coverage
- `poetry run tests`: Alternative test command
- Standard pytest options available (`-v`, `-k`, `-m`, etc.)
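For `poetry run test` and `poetry run tests` to exist as commands, `pyproject.toml` presumably registers script entry points that forward to pytest. One plausible wiring, where the module path and function name are assumptions:

```toml
[tool.poetry.scripts]
test = "scripts.run_tests:main"
tests = "scripts.run_tests:main"
```

```python
# scripts/run_tests.py (hypothetical location)
import sys

import pytest


def main() -> None:
    """Forward command-line arguments to pytest and propagate its exit code."""
    sys.exit(pytest.main(sys.argv[1:]))
```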
### Git Configuration

- Updated `.gitignore` with testing artifacts: `.pytest_cache/`, `.coverage`, `htmlcov/`, `coverage.xml`
- Claude Code settings (`.claude/*`)
- Additional testing artifacts (`*traj_vis/`, `*.npz`, `*.npy`)
## Validation

The setup includes validation tests that verify:

- ✅ Pytest is working correctly
- ✅ Custom fixtures are available and functional
- ✅ Test markers are properly configured
- ✅ Project structure is correct
- ✅ Coverage reporting is configured
- ✅ All fixtures provide expected data structures

All 14 validation tests pass.
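A representative validation test might look like this (the test names are hypothetical; the PR's actual checks may differ):

```python
# tests/test_setup_validation.py -- illustrative excerpt
def test_pytest_is_working():
    """Smoke test: the runner discovers and executes tests."""
    assert True


def test_temp_dir_fixture(temp_dir):
    """The shared temp_dir fixture provides a writable directory."""
    probe = temp_dir / "probe.txt"
    probe.write_text("ok")
    assert probe.read_text() == "ok"
```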
## Usage Instructions

### Running Tests

```bash
# Run all tests
poetry run test

# Run with verbose output
poetry run test -v

# Run tests with a specific marker
poetry run test -m unit
poetry run test -m integration

# Run with an HTML coverage report
poetry run test --cov-report=html

# Run a specific test file
poetry run test tests/test_setup_validation.py
```
### Coverage Reports

- Terminal: Shows a coverage summary after each test run
- HTML: Generated in the `htmlcov/` directory
- XML: Generated as `coverage.xml` for CI/CD integration
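For example, to browse the HTML report locally after a run (assuming coverage.py's default `htmlcov/` layout):

```bash
poetry run test
open htmlcov/index.html   # macOS; use xdg-open on Linux
```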
### Writing New Tests

- Create test files in `tests/unit/` or `tests/integration/`
- Use the available fixtures from `conftest.py`
- Apply appropriate markers: `@pytest.mark.unit`, `@pytest.mark.integration`, `@pytest.mark.slow`
- Follow the naming conventions: `test_*.py` files, `Test*` classes, `test_*` functions
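As a sketch, a new unit test combining a marker with a shared fixture might look like this (the file and test names are hypothetical):

```python
# tests/unit/test_trajectory.py (hypothetical file)
import pytest


@pytest.mark.unit
def test_sample_trajectory_has_points(sample_trajectory):
    """Uses the sample_trajectory fixture shared via conftest.py."""
    assert len(sample_trajectory) > 0
```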
## Notes

- The setup currently includes minimal production dependencies (only `numpy`) to avoid environment compatibility issues
- Additional project dependencies can be added to `pyproject.toml` as needed
- The `poetry.lock` file should be committed to ensure reproducible builds
- The coverage threshold is set to 80% but can be adjusted in `pyproject.toml`
This testing infrastructure provides a solid foundation for maintaining code quality and enabling test-driven development practices in the project.