
feat: Set up comprehensive Python testing infrastructure

llbbl opened this issue 6 months ago · 0 comments

Set Up Python Testing Infrastructure

Summary

This PR establishes a comprehensive testing infrastructure for the Python project, providing a ready-to-use environment for writing and running tests with proper coverage tracking and reporting.

Changes Made

Package Manager Setup

  • ✅ Configured Poetry as the package manager
  • ✅ Created pyproject.toml with complete project configuration
  • ✅ Set package-mode = false, since the project is used primarily for dependency management rather than packaging (see the sketch below)
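
A minimal sketch of the corresponding pyproject.toml header; only package-mode = false is confirmed above, so the name, version, and description values are illustrative assumptions:

[tool.poetry]
name = "nopecha-scripts"
version = "0.1.0"
description = "Dependency management and testing infrastructure"
package-mode = false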

Testing Dependencies

Added the following development dependencies:

  • pytest (^8.0.0) - Main testing framework
  • pytest-cov (^4.1.0) - Coverage reporting integration
  • pytest-mock (^3.12.0) - Enhanced mocking capabilities
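
In pyproject.toml these live in Poetry's development group, using the versions listed above:

[tool.poetry.group.dev.dependencies]
pytest = "^8.0.0"
pytest-cov = "^4.1.0"
pytest-mock = "^3.12.0"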

Testing Configuration

Configured the following testing settings in pyproject.toml (a sketch follows the list):

  • Test discovery: Configured patterns for test files, classes, and functions
  • Pytest options: Strict markers, verbose output, short tracebacks
  • Custom markers: unit, integration, and slow for test categorization
  • Coverage settings:
    • Source directories specified
    • Comprehensive exclusion patterns
    • Multiple report formats (HTML, XML, terminal)
    • 80% coverage threshold target
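
A sketch of how these settings might look in pyproject.toml. The option names are standard pytest and coverage.py keys, but the exact values are assumptions based on the bullets above:

[tool.pytest.ini_options]
testpaths = ["tests"]
addopts = "--strict-markers -v --tb=short"
markers = [
    "unit: fast, isolated unit tests",
    "integration: tests that exercise multiple components",
    "slow: long-running tests, excluded with -m 'not slow'",
]

[tool.coverage.run]
source = ["python-examples"]

[tool.coverage.report]
exclude_lines = ["pragma: no cover", "if __name__ == .__main__.:"]
fail_under = 80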

Directory Structure

Created the following test directory hierarchy:

tests/
├── __init__.py
├── conftest.py          # Shared fixtures
├── test_validation.py   # Infrastructure validation tests
├── unit/
│   └── __init__.py
└── integration/
    └── __init__.py

Test Fixtures

Added shared fixtures in conftest.py (a sketch of two of them follows the list):

  • temp_dir - Temporary directory management
  • mock_config - Configuration object mocking
  • mock_logger - Logger mocking
  • sample_data - Test data generation
  • mock_api_response - API response mocking
  • mock_file_system - File system structure mocking
  • reset_environment - Environment variable cleanup
  • mock_database - Database connection mocking
  • capture_logs - Log message capture
  • benchmark_timer - Simple performance timing
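
A minimal sketch of how two of these fixtures might be defined in conftest.py; the bodies are illustrative assumptions, not the exact implementation:

import pytest
from unittest.mock import MagicMock

@pytest.fixture
def temp_dir(tmp_path):
    # Assumption: wraps pytest's built-in tmp_path, which creates and
    # cleans up a per-test temporary directory automatically.
    return tmp_path

@pytest.fixture
def mock_logger():
    # Assumption: a MagicMock stands in for a real logger, so tests can
    # assert on calls such as mock_logger.error.assert_called_once().
    return MagicMock()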

Additional Configuration

  • Updated .gitignore with:
    • Testing artifacts (.pytest_cache/, coverage.xml, htmlcov/)
    • Claude settings (.claude/*)
    • IDE files and temporary files
    • Note: poetry.lock is intentionally NOT ignored
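
The added entries, as enumerated above (IDE and temporary-file patterns omitted here, since the exact entries are not listed):

# Testing artifacts
.pytest_cache/
coverage.xml
htmlcov/

# Claude settings
.claude/*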

Validation

  • Created test_validation.py with 11 tests to verify:
    • Testing infrastructure is properly configured
    • All fixtures are working correctly
    • Markers are properly registered
    • Coverage tools are integrated
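
A hypothetical example of what one of these validation tests might look like; the actual eleven tests live in test_validation.py:

import pytest

@pytest.mark.unit
def test_shared_fixtures_resolve(temp_dir, mock_config):
    # If conftest.py is wired up correctly, pytest injects both
    # fixtures by name without any explicit import.
    assert temp_dir is not None
    assert mock_config is not None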

How to Use

Running Tests

Basic test execution:

poetry run pytest

With coverage reporting:

poetry run pytest --cov=python-examples --cov-report=html --cov-report=xml --cov-report=term-missing

Run specific test categories:

poetry run pytest -m unit        # Unit tests only
poetry run pytest -m integration # Integration tests only
poetry run pytest -m "not slow"  # Exclude slow tests

Writing Tests

  1. Unit tests: Place in tests/unit/ directory
  2. Integration tests: Place in tests/integration/ directory
  3. Use fixtures: Fixtures defined in conftest.py are discovered by pytest automatically; no imports needed
  4. Mark tests: Use @pytest.mark.unit, @pytest.mark.integration, or @pytest.mark.slow

Example test:

import pytest

@pytest.mark.unit
def test_example_function(mock_config, temp_dir):
    # Fixtures from conftest.py are injected by name; replace the
    # assertion below with your test implementation.
    assert True

Notes

  • The project uses Poetry in non-package mode since it's primarily for dependency management
  • Coverage is configured but currently shows 0% as there are no source files to test yet
  • All testing dependencies are properly isolated in the development group
  • The infrastructure is ready for immediate test development

llbbl · Jun 15 '25 17:06