# feat: Set up comprehensive Python testing infrastructure
## Summary
This PR establishes a complete testing infrastructure for the Home Assistant Tion BTLE custom component project using Poetry as the package manager.
## Changes Made
- **Package Management**: Configured Poetry with `pyproject.toml`, including project metadata and dependencies
- **Testing Dependencies**: Added core testing packages:
  - `pytest` (^8.0.0) - Main testing framework
  - `pytest-cov` (^4.0.0) - Coverage reporting
  - `pytest-mock` (^3.12.0) - Mocking utilities
  - `pytest-asyncio` (^0.23.0) - Async testing support
- **Testing Configuration**: Comprehensive pytest configuration in `pyproject.toml`:
  - Test discovery patterns for files, classes, and functions
  - Coverage settings with 80% threshold requirement
  - HTML and XML coverage report generation
  - Custom test markers: `unit`, `integration`, `slow`
  - Async testing mode set to auto
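The dependency and pytest settings above might look roughly like this in `pyproject.toml` (a sketch; marker descriptions and any values beyond those stated in this PR are assumptions):

```toml
[tool.poetry.group.dev.dependencies]
pytest = "^8.0.0"
pytest-cov = "^4.0.0"
pytest-mock = "^3.12.0"
pytest-asyncio = "^0.23.0"

[tool.pytest.ini_options]
testpaths = ["tests"]
python_files = ["test_*.py"]
python_classes = ["Test*"]
python_functions = ["test_*"]
markers = [
    "unit: fast, isolated unit tests",
    "integration: tests spanning multiple components",
    "slow: long-running tests",
]
asyncio_mode = "auto"
addopts = "--cov --cov-report=html --cov-report=xml --cov-fail-under=80"
```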
- **Directory Structure**:

  ```
  tests/
  ├── __init__.py
  ├── conftest.py           # Shared fixtures
  ├── test_validation.py    # Infrastructure validation tests
  ├── unit/
  │   └── __init__.py
  └── integration/
      └── __init__.py
  ```
- **Shared Fixtures**: Created comprehensive fixtures in `conftest.py`:
  - `temp_dir` - Temporary directory for test files
  - `mock_config_entry` - Mock Home Assistant config entry
  - `mock_hass` - Mock Home Assistant instance
  - `mock_ble_device` - Mock Bluetooth device
  - `mock_tion_device` - Mock Tion device with realistic responses
  - `sample_device_data` - Realistic test data
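As an illustration, the `mock_tion_device` fixture could be sketched along these lines (the actual `conftest.py` contents may differ; the fields in the fake response are assumptions, not the real device schema):

```python
"""Sketch of a conftest.py-style fixture for a fake Tion BTLE device."""
from unittest.mock import MagicMock

import pytest


def make_mock_tion_device() -> MagicMock:
    """Build a MagicMock standing in for a Tion BTLE device."""
    device = MagicMock()
    # Canned "realistic" response a test can assert against
    # (illustrative field names only).
    device.get.return_value = {"state": "on", "fan_speed": 3, "heater": "off"}
    return device


@pytest.fixture
def mock_tion_device() -> MagicMock:
    return make_mock_tion_device()
```

A test then simply takes `mock_tion_device` as an argument and pytest injects the mock, keeping device setup consistent across the suite.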
- **Development Scripts**: Poetry commands for easy testing:
  - `poetry run test` - Run all tests
  - `poetry run tests` - Alternative command (both work identically)
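A `poetry run test` command implies a script entry point in `pyproject.toml`; a minimal runner behind it might look like this (the module path and function name are assumptions for illustration):

```python
"""Hypothetical runner exposed via [tool.poetry.scripts], e.g.
test = "scripts.run_tests:main" (the real entry point may differ)."""
import sys

import pytest


def main() -> None:
    # Delegate to pytest and exit with its status code so CI sees failures.
    sys.exit(pytest.main(sys.argv[1:]))
```

Registering both `test` and `tests` against the same function is what makes the two commands behave identically.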
- **Git Configuration**: Updated `.gitignore` with:
  - Testing artifacts (`.pytest_cache/`, `.coverage`, `htmlcov/`, `coverage.xml`)
  - Claude Code settings (`.claude/*`)
  - Python build artifacts and virtual environments
  - IDE and OS specific files
## Instructions for Running Tests
- **Install dependencies**:

  ```bash
  poetry install
  ```

- **Run all tests**:

  ```bash
  poetry run test   # or poetry run tests
  ```

- **Run specific test categories**:

  ```bash
  poetry run pytest -m unit          # Unit tests only
  poetry run pytest -m integration   # Integration tests only
  poetry run pytest -m "not slow"    # Exclude slow tests
  ```

- **Generate coverage reports**:

  ```bash
  poetry run pytest --cov-report=html   # HTML report in htmlcov/
  poetry run pytest --cov-report=xml    # XML report for CI
  ```

- **Run with different verbosity**:

  ```bash
  poetry run pytest -v    # Verbose
  poetry run pytest -vv   # Very verbose
  poetry run pytest -q    # Quiet
  ```
## Validation
The setup includes validation tests (`tests/test_validation.py`) that verify:
- ✅ Project structure exists correctly
- ✅ Basic imports work (where dependencies are available)
- ✅ Pytest configuration is working
- ✅ Custom test markers function properly
- ✅ Shared fixtures are accessible
- ✅ Async testing capabilities work
- ✅ Coverage reporting generates properly
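With `asyncio_mode` set to auto, an async test needs no explicit marker; a minimal example of the async capability above (using a hypothetical coroutine, not code from the component):

```python
"""Minimal async test sketch; with pytest-asyncio's auto mode the
test coroutine below is collected and awaited without a marker."""
import asyncio


async def fetch_state() -> dict:
    # Stand-in for an async device call; purely illustrative.
    await asyncio.sleep(0)
    return {"state": "on"}


async def test_fetch_state() -> None:
    data = await fetch_state()
    assert data["state"] == "on"
```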
**Test Results**: All 9 validation tests pass, confirming the testing infrastructure is ready for use.
## Configuration Choices
- **Poetry over pip**: Chosen for better dependency management and lock file support
- **Coverage threshold**: Set to 80% to ensure good test coverage without being overly restrictive
- **Test markers**: Added `unit`, `integration`, and `slow` for flexible test categorization
- **Async support**: Enabled automatic async test handling for Home Assistant's async nature
- **Multiple output formats**: Coverage reports in both HTML (development) and XML (CI) formats
## Next Steps
Developers can now:
- Write unit tests in `tests/unit/`
- Write integration tests in `tests/integration/`
- Use the provided fixtures for consistent test setup
- Run tests locally with immediate feedback
- Generate coverage reports to identify untested code
The testing infrastructure is production-ready and follows Python testing best practices.