
feat: Set up comprehensive Python testing infrastructure

Open llbbl opened this issue 6 months ago • 0 comments

Set up Python Testing Infrastructure

Summary

This PR establishes a comprehensive testing infrastructure for the Co^2L (Contrastive Continual Learning) project using Poetry as the package manager and pytest as the testing framework.

Changes Made

Package Management

  • Poetry Setup: Created pyproject.toml with Poetry configuration
  • Dependencies: Added all required dependencies identified from the codebase:
    • Core: torch, torchvision, numpy, scipy, pillow
    • Logging: tensorboard-logger
    • Testing: pytest, pytest-cov, pytest-mock
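The dependency list above might translate into Poetry sections along these lines (the package name and version constraints here are placeholders, not the exact pins from the PR):

```toml
# Illustrative sketch of the Poetry sections in pyproject.toml.
[tool.poetry]
name = "co2l"
version = "0.1.0"
description = "Contrastive Continual Learning"

[tool.poetry.dependencies]
python = "^3.8"
torch = "*"
torchvision = "*"
numpy = "*"
scipy = "*"
pillow = "*"
tensorboard-logger = "*"

[tool.poetry.group.dev.dependencies]
pytest = "*"
pytest-cov = "*"
pytest-mock = "*"
```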

Testing Configuration

  • pytest Configuration: Comprehensive pytest setup in pyproject.toml with:
    • Test discovery patterns for test_*.py and *_test.py files
    • Coverage reporting with an 80% threshold
    • HTML and XML coverage report generation
    • Custom test markers: unit, integration, slow
  • Coverage Settings: Configured to:
    • Include source code and exclude test files and cache directories
    • Generate reports in multiple formats (terminal, HTML, XML)
    • Exclude common non-testable code patterns
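A configuration covering the points above might look roughly like this in pyproject.toml (the marker descriptions and exclude patterns are illustrative, not the PR's exact values):

```toml
# Illustrative sketch of the pytest and coverage configuration.
[tool.pytest.ini_options]
testpaths = ["tests"]
python_files = ["test_*.py", "*_test.py"]
addopts = "--cov --cov-report=term-missing --cov-report=html --cov-report=xml --cov-fail-under=80"
markers = [
    "unit: fast, isolated unit tests",
    "integration: tests spanning multiple components",
    "slow: long-running tests, excluded by default filters",
]

[tool.coverage.run]
omit = ["tests/*", "**/__pycache__/*"]

[tool.coverage.report]
exclude_lines = [
    "pragma: no cover",
    "if __name__ == .__main__.:",
]
```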

Directory Structure

  • tests/ - Main test directory
    • tests/unit/ - Unit tests
    • tests/integration/ - Integration tests
    • tests/conftest.py - Shared pytest fixtures
    • tests/test_infrastructure.py - Validation tests
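The layout above can be scaffolded with a couple of commands (paths taken directly from the list):

```shell
# Create the test directory layout described above
mkdir -p tests/unit tests/integration
touch tests/conftest.py tests/test_infrastructure.py
```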

Shared Fixtures (tests/conftest.py)

Comprehensive fixtures tailored for ML/continual learning testing:

  • temp_dir - Temporary directories for test files
  • mock_args - Mock command-line arguments
  • sample_tensor, sample_labels - Sample data for testing
  • mock_model, mock_optimizer - Mock ML components
  • sample_memory_buffer - Mock continual learning memory
  • set_random_seeds - Ensure reproducible tests
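A rough sketch of the helpers behind some of these fixtures is below. In the real conftest.py each would be wrapped in `@pytest.fixture`; plain functions are shown here so the sketch runs standalone, and the mock objects use the standard library's `unittest.mock` rather than real torch components:

```python
# Sketch of fixture helpers, assuming pytest wraps each in @pytest.fixture.
import random
import tempfile
from types import SimpleNamespace
from unittest.mock import MagicMock

def make_temp_dir():
    # temp_dir: an isolated directory for test artifacts
    return tempfile.mkdtemp()

def make_mock_args():
    # mock_args: stands in for parsed command-line arguments
    # (attribute names here are illustrative)
    return SimpleNamespace(batch_size=8, learning_rate=0.1, epochs=1)

def make_mock_model():
    # mock_model: records calls without touching real torch modules
    model = MagicMock(name="model")
    model.train.return_value = model
    return model

def set_random_seeds(seed=42):
    # set_random_seeds: reproducibility; the real version would also
    # seed numpy and torch
    random.seed(seed)
```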

Development Environment

  • .gitignore: Added entries for testing artifacts, Python cache files, ML model files, and development tools
  • Validation: Created infrastructure validation tests that verify all components work correctly

Running Tests

Once this PR is merged, tests can be run as follows:

```shell
# Install dependencies
poetry install

# Run all tests
poetry run pytest

# Run with coverage
poetry run pytest --cov

# Run specific test types
poetry run pytest -m unit          # Unit tests only
poetry run pytest -m integration   # Integration tests only
poetry run pytest -m "not slow"    # Exclude slow tests

# Run with verbose output
poetry run pytest -v
```

Coverage Reporting

Coverage reports are generated in multiple formats:

  • Terminal: Immediate feedback with missing line numbers
  • HTML: Detailed report in htmlcov/ directory
  • XML: Machine-readable format in coverage.xml

Validation

All infrastructure components have been validated:

  • ✅ Dependencies install correctly
  • ✅ Pytest discovers and runs tests
  • ✅ Coverage reporting generates properly
  • ✅ Custom markers work for test filtering
  • ✅ Shared fixtures are accessible to tests
  • ✅ Project modules can be imported in tests
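The checks above live in tests/test_infrastructure.py; a hedged sketch of what such validation tests might look like (function names hypothetical) is:

```python
# Hypothetical sketch of infrastructure validation tests.
# pytest collects plain test_* functions, so no pytest import is needed here.
import importlib.util
import tempfile
from pathlib import Path

def test_temp_files_are_writable():
    # sanity check: the test environment can create and read files
    with tempfile.TemporaryDirectory() as d:
        probe = Path(d) / "probe.txt"
        probe.write_text("ok")
        assert probe.read_text() == "ok"

def test_project_module_importable():
    # in the real suite this would import a project module such as util.py;
    # a stdlib module stands in here so the sketch runs anywhere
    assert importlib.util.find_spec("json") is not None
```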

Next Steps

This infrastructure is now ready for developers to write comprehensive tests for:

  • Individual utility functions (util.py)
  • Dataset loading and preprocessing (datasets.py)
  • Model architectures (networks/)
  • Loss functions (losses_negative_only.py)
  • Training and evaluation scripts (main*.py)
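As a starting point, a unit test for a small utility might look like the sketch below. The `accuracy` helper is a stand-in implemented inline so the example is self-contained; a real test would import the project's own helper from util.py instead:

```python
# Hypothetical example of a unit test for a small utility function.
def accuracy(predictions, labels):
    # fraction of predictions matching the labels
    assert len(predictions) == len(labels)
    correct = sum(p == t for p, t in zip(predictions, labels))
    return correct / len(labels)

def test_accuracy_all_correct():
    assert accuracy([1, 0, 1], [1, 0, 1]) == 1.0

def test_accuracy_half_correct():
    assert accuracy([1, 1], [1, 0]) == 0.5
```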

The setup provides everything needed for test-driven development and maintaining high code quality standards.

🤖 Generated with Claude Code

llbbl · Sep 01 '25 20:09