Add comprehensive testing infrastructure with Poetry and pytest
About UnitSeeker
Hi! This PR is part of the UnitSeeker project, a human-guided initiative to help Python repositories establish testing infrastructure.
Key points:
- Human-approved: Every PR is manually approved before work begins
- Semi-automated with oversight: Created and controlled via a homegrown wrapper around Claude Code with human quality control
- Infrastructure only: This PR intentionally contains only the testing setup without actual unit tests
- Your repository, your rules: Feel free to modify, reject, or request changes - all constructive feedback is welcome
- Follow-up support: All responses and discussions are personally written, not automated
Learn more about the project and see the stats on our progress at https://unitseeker.llbbl.com/
Summary
This PR adds a complete testing infrastructure to the audio-webui project, providing a solid foundation for writing and running tests. The setup uses Poetry as the package manager and pytest as the testing framework.
Changes Made
Package Management
- ✅ Created `pyproject.toml` with Poetry configuration
- ✅ Configured project metadata and package structure
- ✅ Set Python version requirement (`^3.10`)
- ✅ Added a development dependencies group
Testing Dependencies
- ✅ `pytest` (^8.0.0) - Main testing framework
- ✅ `pytest-cov` (^4.1.0) - Coverage reporting
- ✅ `pytest-mock` (^3.12.0) - Mocking utilities
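As a quick taste, `pytest-mock` exposes a `mocker` fixture that wraps `unittest.mock`; a minimal sketch (the patch target here is just an example, not code from this repository):

```python
import pathlib

# Illustrative use of the mocker fixture provided by pytest-mock.
# The patch target (pathlib.Path.exists) is an arbitrary example.
def test_skips_missing_file(mocker):
    mocker.patch.object(pathlib.Path, "exists", return_value=False)
    assert not pathlib.Path("model.bin").exists()
```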
Testing Configuration
- ✅ Comprehensive pytest configuration in `pyproject.toml`:
  - Test discovery patterns
  - Coverage settings with an 80% threshold
  - HTML and XML coverage reports
  - Strict markers and config validation
  - Custom markers: `unit`, `integration`, `slow` (see the example below)
- ✅ Coverage configuration:
  - Tracks all main packages (webui, hubert, autodebug, setup_tools, simplestyle)
  - Excludes test files, data, and build artifacts
  - Precision reporting with missing-line indicators
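For reference, tests opt into these markers with decorators; a minimal sketch (the tests themselves are hypothetical, not part of this PR):

```python
import pytest

# Because strict markers are enabled, any marker not registered in
# pyproject.toml would fail at collection time.
@pytest.mark.unit
def test_addition_is_commutative():
    assert 1 + 2 == 2 + 1

@pytest.mark.integration
@pytest.mark.slow
def test_components_work_together():
    # Markers can be combined; `poetry run test -m "not slow"`
    # would deselect this test.
    assert True
```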
Directory Structure
```
tests/
├── __init__.py
├── conftest.py               # Shared fixtures
├── test_infrastructure.py    # Validation tests
├── unit/
│   └── __init__.py
└── integration/
    └── __init__.py
```
Shared Fixtures (`conftest.py`)
Created comprehensive test fixtures, including:
- `temp_dir` / `temp_file` - Temporary filesystem resources
- `mock_config` - Mock configuration dictionaries
- `mock_env_vars` - Environment variable mocking
- `mock_torch` / `mock_gradio` / `mock_huggingface_model` - ML framework mocks
- `sample_audio_path` / `sample_model_config` - Test data helpers
- `mock_file_system` - Mock directory structure
- `project_root` - Project path helper
- `reset_environment` - Auto-reset of environment state
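As an illustration of how these compose in a test, assuming `temp_dir` yields a `pathlib.Path` to a per-test directory and `mock_config` a plain dict of JSON-serializable values (the test itself is hypothetical):

```python
import json

# Hypothetical test combining two of the shared fixtures above.
def test_writes_config_to_disk(temp_dir, mock_config):
    config_path = temp_dir / "config.json"
    config_path.write_text(json.dumps(mock_config))
    assert json.loads(config_path.read_text()) == mock_config
```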
Validation Tests
- ✅ 23 validation tests to verify infrastructure setup
- ✅ Tests for pytest installation and configuration
- ✅ Tests for all custom markers
- ✅ Fixture validation tests
- ✅ Package import tests
- ✅ Coverage collection verification
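The checks are deliberately simple; one is roughly of this shape (illustrative, not the literal contents of `test_infrastructure.py`):

```python
import pytest

# Illustrative of the style of check in test_infrastructure.py.
def test_pytest_version_meets_requirement():
    # The dev dependencies pin pytest to ^8.0.0.
    major = int(pytest.__version__.split(".")[0])
    assert major >= 8
```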
Development Scripts
Added convenient Poetry scripts:
```bash
poetry run test     # Run tests
poetry run tests    # Alternative command
```
Both commands support all standard pytest options:
```bash
poetry run test -v                  # Verbose output
poetry run test -k "test_name"      # Run specific tests
poetry run test -m unit             # Run only unit tests
poetry run test --no-cov            # Skip coverage
poetry run test --cov-report=html   # Generate HTML report
```
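Poetry scripts resolve to plain Python functions, so the wrapper behind these commands can be as small as the sketch below (the module path and function name are assumptions, not necessarily what this PR uses):

```python
# Sketch of a Poetry script entry point, wired up in pyproject.toml
# with something like:
#   [tool.poetry.scripts]
#   test = "scripts.run_tests:main"
import sys
import pytest

def main() -> None:
    # Forward all CLI arguments straight to pytest so flags such as
    # -v, -k, -m, and --no-cov work unchanged.
    sys.exit(pytest.main(sys.argv[1:]))
```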
Updated `.gitignore`
Added entries for:
- Testing artifacts (`.pytest_cache/`, `.coverage`, `htmlcov/`, `coverage.xml`)
- Claude settings (`.claude/*`)
- Build artifacts (`build/`, `dist/`, `*.egg-info/`)
- Virtual environments
- IDE files
- OS-specific files
Note: `poetry.lock` is intentionally tracked in version control.
Running Tests
First Time Setup
```bash
# Install only the dev dependencies (testing tools)
poetry install --only dev

# Or install everything, including the dev group
poetry install
```
Running Tests
```bash
# Run all tests with coverage
poetry run test

# Run without coverage
poetry run test --no-cov

# Run a specific test file
poetry run test tests/test_infrastructure.py

# Run tests with a specific marker
poetry run test -m unit
poetry run test -m integration
poetry run test -m "not slow"

# Run with verbose output
poetry run test -v

# Run a specific test by name
poetry run test -k "test_pytest_working"
```
Coverage Reports
After running tests with coverage, reports are generated in:
- Terminal: Summary with missing lines
- HTML: `htmlcov/index.html` (open in a browser)
- XML: `coverage.xml` (for CI/CD integration)
```bash
# View the HTML coverage report
poetry run test
# Then open htmlcov/index.html in your browser
```
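If you later wire `coverage.xml` into CI, a small helper along these lines can extract the overall figure (a sketch, not part of this PR; it relies on the Cobertura-style XML that pytest-cov emits):

```python
# Hypothetical CI helper: reads the total line coverage from the
# Cobertura-format coverage.xml produced by pytest-cov.
import xml.etree.ElementTree as ET

def coverage_percent(path: str = "coverage.xml") -> float:
    root = ET.parse(path).getroot()
    # The root <coverage> element stores line-rate as a 0..1 fraction.
    return float(root.attrib["line-rate"]) * 100

if __name__ == "__main__":
    print(f"Total line coverage: {coverage_percent():.1f}%")
```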
Validation
All validation tests pass successfully:
- ✅ 23 tests passed
- ✅ Poetry installation working
- ✅ All fixtures functional
- ✅ Coverage reporting configured
- ✅ Custom markers working
- ✅ Package imports successful
Notes
Coverage Threshold
The configuration enforces an 80% coverage threshold. Since this PR only adds infrastructure without actual unit tests, the coverage gate will fail until tests are written. To run tests without failing on coverage:
```bash
poetry run test --no-cov
```
Or you can temporarily adjust the threshold in `pyproject.toml`:
```toml
[tool.pytest.ini_options]
addopts = [
    # ... other options ...
    "--cov-fail-under=0",  # Temporarily set to 0
]
```
Python Version
The project is configured for Python 3.10+. The original README mentions 3.10 specifically because of TTS library compatibility, but the Poetry config allows newer versions. Adjust if needed:
```toml
[tool.poetry.dependencies]
python = ">=3.10,<3.11"  # Restrict to 3.10 only if needed
```
Next Steps
This PR provides the foundation. Suggested next steps:
- Start writing unit tests for individual modules
- Add integration tests for component interactions
- Set up CI/CD pipeline (GitHub Actions) to run tests automatically
- Gradually increase test coverage toward the 80% goal
- Add pre-commit hooks to run tests before commits
Testing Philosophy
The infrastructure is set up with best practices:
- Isolated tests: Each test is independent with automatic cleanup
- Comprehensive fixtures: Reusable test helpers to avoid duplication
- Clear markers: Organize tests by type (unit/integration/slow)
- Coverage tracking: Measure and improve test coverage over time
- Developer-friendly: Simple commands and clear output
Questions or concerns? Feel free to comment, request changes, or close this PR if it doesn't fit your needs. I'm happy to make adjustments based on your feedback!