GazeCapture
# feat: Add comprehensive Python testing infrastructure with Poetry
## Summary
This PR establishes a modern testing infrastructure for the iTracker PyTorch implementation using Poetry as the package manager and pytest as the testing framework. The setup provides a solid foundation for writing unit and integration tests with comprehensive coverage reporting.
## Changes Made

### Package Management
- **Poetry Setup**: Created `pyproject.toml` with a complete Poetry configuration
- **Dependency Migration**: Migrated all dependencies from `requirements.txt` to Poetry
- **Development Dependencies**: Added pytest, pytest-cov, and pytest-mock for testing
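The resulting `pyproject.toml` could look roughly like the sketch below. The project metadata, the dependency list beyond PyTorch, and the version pins are illustrative assumptions, not the actual file contents:

```toml
# Hypothetical sketch of the Poetry configuration; only torch = "1.1.0"
# is taken from the PR description, everything else is an assumption.
[tool.poetry]
name = "itracker-pytorch"
version = "0.1.0"
description = "iTracker PyTorch implementation"

[tool.poetry.dependencies]
python = "^3.7"
torch = "1.1.0"

[tool.poetry.group.dev.dependencies]
pytest = "*"
pytest-cov = "*"
pytest-mock = "*"
```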
### Testing Configuration

- **pytest Configuration**:
  - Configured test discovery patterns
  - Set up coverage reporting with an 80% threshold
  - Added HTML and XML coverage report generation
  - Configured strict markers and verbose output
- **Coverage Settings**:
  - Excluded test files and common patterns from coverage
  - Added branch coverage tracking
  - Configured multiple report formats
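In `pyproject.toml`, settings along these lines would produce the behavior described above. This is a sketch; the actual option values in the PR may differ:

```toml
# Illustrative pytest and coverage settings; actual values may differ.
[tool.pytest.ini_options]
testpaths = ["tests"]
addopts = "--strict-markers -v --cov --cov-report=term --cov-report=html --cov-report=xml --cov-fail-under=80"
markers = [
    "unit: unit tests",
    "integration: integration tests",
    "slow: slow-running tests",
]

[tool.coverage.run]
branch = true
omit = ["tests/*"]
```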
### Test Structure

- **Directory Layout**:

  ```
  pytorch/
  ├── tests/
  │   ├── __init__.py
  │   ├── conftest.py
  │   ├── test_setup_validation.py
  │   ├── test_infrastructure_validation.py
  │   ├── unit/
  │   │   └── __init__.py
  │   └── integration/
  │       └── __init__.py
  ```
### Shared Fixtures (conftest.py)
Created comprehensive fixtures for testing iTracker components:
- `temp_dir`: Temporary directory management
- `mock_config`: Configuration dictionary for testing
- `sample_image`, `sample_face_image`, `sample_eye_images`: Test image generation
- `sample_face_grid`, `sample_gaze_target`: Mock tensor data
- `mock_dataset_metadata`: Dataset metadata for testing
- `mock_model_checkpoint`: Model checkpoint creation
- `device`: PyTorch device selection (CPU/CUDA)
- `random_seed`: Reproducible test execution
- `mock_dataloader_batch`: Mock DataLoader batch
### Test Markers

- `@pytest.mark.unit`: For unit tests
- `@pytest.mark.integration`: For integration tests
- `@pytest.mark.slow`: For slow-running tests
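A marker is applied as a decorator on a test function. The test name and body below are hypothetical:

```python
import pytest


@pytest.mark.unit
def test_gaze_target_is_2d():
    # A gaze target is an (x, y) point on the screen plane
    target = (0.0, 0.0)
    assert len(target) == 2
```

Marked tests can then be selected with `pytest -m unit` or excluded with `pytest -m "not slow"`.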
### Git Configuration

Updated `.gitignore` with:
- Poetry artifacts (poetry.lock, dist/, .venv/)
- Testing artifacts (.pytest_cache/, .coverage, htmlcov/, coverage.xml)
- Claude settings (.claude/*)
- Common Python/IDE patterns
## How to Use

### Running Tests

1. Install Poetry (if not already installed):

   ```bash
   pipx install poetry
   ```

2. Install dependencies:

   ```bash
   cd pytorch
   poetry install
   ```

3. Run tests:

   ```bash
   # Run all tests
   poetry run test
   # or
   poetry run tests

   # Run tests for a specific marker
   poetry run pytest -m unit
   poetry run pytest -m integration

   # Run with a coverage report
   poetry run pytest --cov
   ```
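The `poetry run test` / `poetry run tests` commands imply `[tool.poetry.scripts]` entries in `pyproject.toml`. A sketch, where the target module and function (`run_tests:main`) are assumptions:

```toml
# Hypothetical script entries backing `poetry run test` / `poetry run tests`;
# the module path and function name are assumptions.
[tool.poetry.scripts]
test = "run_tests:main"
tests = "run_tests:main"
```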
### Writing New Tests

- **Unit Tests**: Place in the `tests/unit/` directory
- **Integration Tests**: Place in the `tests/integration/` directory
- Use fixtures from `conftest.py` for common test data
- Add markers to categorize tests appropriately
### Coverage Reports

- **Terminal**: Shown after each test run
- **HTML**: Generated in the `htmlcov/` directory
- **XML**: Generated as `coverage.xml` for CI integration
## Notes
- The project uses older versions of PyTorch (1.1.0) and other dependencies, as specified in the original `requirements.txt`
- Poetry is configured to work alongside the existing codebase without disrupting the current structure
- All testing infrastructure is contained within the `pytorch/` directory
- The setup is designed to be immediately usable: developers can start writing tests right away
## Next Steps
With this infrastructure in place, the team can now:
- Write unit tests for individual components (ITrackerModel, ITrackerData)
- Create integration tests for the full training/inference pipeline
- Add performance benchmarks using the `@pytest.mark.slow` marker
- Integrate coverage reporting into CI/CD pipelines
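A benchmark under the `slow` marker might look like this sketch; the workload and the timing budget are placeholders, not measured targets from the project:

```python
import time

import numpy as np
import pytest


@pytest.mark.slow
def test_batch_normalization_speed():
    # Placeholder workload standing in for real batch preprocessing;
    # the 5-second budget is an arbitrary illustrative threshold.
    start = time.perf_counter()
    batch = np.random.rand(64, 3, 224, 224).astype(np.float32)
    normalized = (batch - batch.mean()) / (batch.std() + 1e-8)
    elapsed = time.perf_counter() - start
    assert normalized.shape == batch.shape
    assert elapsed < 5.0
```

Such tests run only when explicitly selected (`poetry run pytest -m slow`) and can be skipped in quick iterations with `-m "not slow"`.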