Refactor PowerShell Performance Testing Scripts into Modular Architecture
Problem Statement
The performance testing PowerShell scripts in `build/scripts/perf/` have grown organically and now exhibit several architectural issues that impact maintainability:
Current Issues Identified
- Code Duplication: The `Show-Invocation` function is duplicated between `DiffPerfToBaseline.ps1` and `PerfCore.ps1`
- No Modular Structure: All functionality is embedded within individual scripts (400 total lines across 4 scripts)
- Inconsistent Error Handling: Different scripts use varying approaches to error handling and logging
- Path Resolution Duplication: Multiple scripts duplicate the same `RepoRoot` resolution logic
- No Shared Constants: Hard-coded paths and configuration scattered across files
Current Script Structure
- `PerfCore.ps1` (151 lines) - Main entry point called from GitHub Actions
- `DiffPerfToBaseline.ps1` (154 lines) - Orchestrates baseline comparison workflow
- `RunPerfTests.ps1` (60 lines) - Executes performance benchmarks
- `ComparePerfResults.ps1` (35 lines) - Analyzes and compares benchmark results
Dependencies Discovered
```
PerfCore.ps1 (GitHub Actions Entry Point)
├── DiffPerfToBaseline.ps1 (when -diff flag used)
│   ├── RunPerfTests.ps1 (for baseline and current runs)
│   └── ComparePerfResults.ps1 (for result analysis)
└── RunPerfTests.ps1 (when -diff flag not used)
```
Proposed Solution: Modular PowerShell Architecture
Phase 1: Create Shared Utility Module
Create `build/scripts/perf/PerfUtils.psm1` with shared functions:
- `Show-Invocation` - Command invocation logging
- `Get-RepoRoot` - Standardized repository root resolution
- `Ensure-Folder` - Directory creation utility
- `Write-PerfLog` - Standardized logging with consistent formatting
- `Invoke-PerfCommand` - Centralized command execution with error handling
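A minimal sketch of what `PerfUtils.psm1` could look like. The function bodies and relative path depths are illustrative assumptions, not the existing implementations:

```powershell
# PerfUtils.psm1 - shared helpers for the perf scripts (illustrative sketch)

function Show-Invocation {
    # Log the command about to run, so every script produces the same debug output.
    param(
        [Parameter(Mandatory)][string]$Command,
        [string[]]$Arguments
    )
    Write-Host "[perf] $Command $($Arguments -join ' ')"
}

function Get-RepoRoot {
    # Resolve the repository root relative to this module's location
    # (assumes the module lives at build/scripts/perf/).
    [CmdletBinding()]
    param()
    (Resolve-Path (Join-Path $PSScriptRoot '..' '..' '..')).Path
}

function Ensure-Folder {
    # Create a directory if it does not already exist.
    param([Parameter(Mandatory)][string]$Path)
    if (-not (Test-Path $Path)) {
        New-Item -ItemType Directory -Path $Path | Out-Null
    }
}

Export-ModuleMember -Function Show-Invocation, Get-RepoRoot, Ensure-Folder
```

Consumers would then replace their local copies with `Import-Module (Join-Path $PSScriptRoot 'PerfUtils.psm1')`.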
Phase 2: Extract Configuration Management
Create `build/scripts/perf/PerfConfig.psm1` for:
- Default paths and constants
- Parameter validation and normalization
- Environment detection (CI vs local)
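A possible shape for the configuration module. The default values here are placeholders; the real ones would be lifted from the hard-coded paths in the existing scripts. The `GITHUB_ACTIONS` environment variable is set to `true` by the GitHub Actions runner, which makes CI detection straightforward:

```powershell
# PerfConfig.psm1 - centralized defaults and environment detection (illustrative sketch)

function Get-PerfConfig {
    [CmdletBinding()]
    param()
    [pscustomobject]@{
        # Hypothetical defaults; actual values come from the existing scripts.
        ArtifactsFolder = 'artifacts/performance'
        Configuration   = 'Release'
        # True when running under a GitHub Actions runner, false locally.
        IsCI            = $env:GITHUB_ACTIONS -eq 'true'
    }
}

Export-ModuleMember -Function Get-PerfConfig
```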
Phase 3: Create Core Business Logic Modules
`PerfTestRunner.psm1`
- `Invoke-PerformanceTests` - Core benchmark execution logic
- `Get-BenchmarkProjects` - Project discovery and filtering
`PerfResultsAnalyzer.psm1`
- `Compare-PerformanceResults` - Result comparison logic
- `Export-PerformanceReport` - Report generation
`PerfBaselineManager.psm1`
- `Get-BaselineResults` - Baseline retrieval and caching
- `New-BaselineWorkspace` - Git worktree management for baselines
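For the worktree piece, one hedged sketch of `New-BaselineWorkspace`: a detached `git worktree` gives the baseline run an isolated checkout without disturbing the current working copy. The parameter names are assumptions for illustration:

```powershell
# PerfBaselineManager.psm1 excerpt - isolate the baseline commit in its own
# worktree so baseline benchmarks never touch the working copy (illustrative sketch).

function New-BaselineWorkspace {
    param(
        [Parameter(Mandatory)][string]$BaselineRef,   # e.g. 'origin/main'
        [Parameter(Mandatory)][string]$WorktreePath   # where to check out the baseline
    )
    git worktree add --detach $WorktreePath $BaselineRef
    if ($LASTEXITCODE -ne 0) {
        throw "git worktree add failed for ref '$BaselineRef'"
    }
    $WorktreePath
}
```

The matching teardown would call `git worktree remove` once the baseline benchmarks finish.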
Phase 4: Refactor Entry Point Scripts
Transform existing scripts into thin orchestration layers that:
- Import required modules
- Parse and validate parameters
- Orchestrate module function calls
- Handle top-level error scenarios
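The four responsibilities above can be sketched as a thin wrapper; shown here for `RunPerfTests.ps1`, with parameter names assumed for illustration:

```powershell
# RunPerfTests.ps1 as a thin orchestration layer (illustrative sketch)
[CmdletBinding()]
param(
    [string]$Configuration = 'Release'
)

# Import required modules explicitly from the script's own folder.
Import-Module (Join-Path $PSScriptRoot 'PerfUtils.psm1') -Force
Import-Module (Join-Path $PSScriptRoot 'PerfTestRunner.psm1') -Force

try {
    # Delegate the real work to the business-logic module.
    Invoke-PerformanceTests -Configuration $Configuration
}
catch {
    # Top-level error handling: report and fail the CI step.
    Write-Error "Performance test run failed: $_"
    exit 1
}
```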
Implementation Plan
Step 1: Analysis and Planning ✅
- [x] Inventory existing scripts and functions
- [x] Map dependencies and call patterns
- [x] Identify shared functionality and duplication
- [x] Design modular architecture
Step 2: Create Shared Utilities Module
- [ ] Create `PerfUtils.psm1` with the `Show-Invocation` function
- [ ] Add `Get-RepoRoot`, `Ensure-Folder`, and logging utilities
- [ ] Add comprehensive unit tests for utility functions
- [ ] Update `DiffPerfToBaseline.ps1` to import and use the module
- [ ] Update `PerfCore.ps1` to import and use the module
Step 3: Extract Configuration Management
- [ ] Create `PerfConfig.psm1` with default paths and constants
- [ ] Add parameter validation and environment detection functions
- [ ] Update all scripts to use centralized configuration
Step 4: Create Business Logic Modules
- [ ] Create `PerfTestRunner.psm1` and extract benchmark execution logic
- [ ] Create `PerfResultsAnalyzer.psm1` and extract comparison logic
- [ ] Create `PerfBaselineManager.psm1` and extract baseline management
- [ ] Add comprehensive unit tests for all business logic modules
Step 5: Refactor Entry Point Scripts
- [ ] Simplify `PerfCore.ps1` to orchestration only
- [ ] Simplify `DiffPerfToBaseline.ps1` to workflow orchestration
- [ ] Simplify `RunPerfTests.ps1` and `ComparePerfResults.ps1` to thin wrappers
- [ ] Maintain backward compatibility for the existing GitHub Actions integration
Step 6: Testing and Validation
- [ ] Add unit tests for all modules using Pester testing framework
- [ ] Add integration tests for end-to-end workflows
- [ ] Validate GitHub Actions integration works unchanged
- [ ] Performance regression testing on the refactored architecture
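A Pester test for the utilities module might look like the following (Pester 5 syntax; the test case is an assumed example, using Pester's built-in `$TestDrive` temporary folder):

```powershell
# PerfUtils.Tests.ps1 - unit test for the shared utilities (illustrative sketch)
BeforeAll {
    Import-Module (Join-Path $PSScriptRoot 'PerfUtils.psm1') -Force
}

Describe 'Ensure-Folder' {
    It 'creates the folder when it does not exist' {
        $target = Join-Path $TestDrive 'new-folder'
        Ensure-Folder -Path $target
        Test-Path $target | Should -BeTrue
    }

    It 'does not fail when the folder already exists' {
        { Ensure-Folder -Path $TestDrive } | Should -Not -Throw
    }
}
```

Run locally or in CI with `Invoke-Pester -Path build/scripts/perf`.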
Step 7: Documentation and Cleanup
- [ ] Create/update `build/scripts/perf/README.md` with the new architecture
- [ ] Add inline documentation and examples for all public module functions
- [ ] Remove duplicated code and consolidate error handling
- [ ] Add module manifest files for proper PowerShell module structure
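Manifests can be generated with the built-in `New-ModuleManifest` cmdlet; the version and export list below are placeholders:

```powershell
# Generate a .psd1 manifest alongside each module (illustrative; values are placeholders)
New-ModuleManifest -Path ./PerfUtils.psd1 `
    -RootModule 'PerfUtils.psm1' `
    -ModuleVersion '1.0.0' `
    -FunctionsToExport @('Show-Invocation', 'Get-RepoRoot', 'Ensure-Folder')
```

Explicit `FunctionsToExport` lists keep module autoloading fast and make the public surface of each module obvious.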
Success Criteria
- [ ] Zero code duplication between performance testing scripts
- [ ] All shared functionality consolidated into reusable modules
- [ ] Consistent error handling and logging across all scripts
- [ ] Comprehensive unit test coverage (>80%) for all modules
- [ ] GitHub Actions integration continues to work without modification
- [ ] Reduced total lines of code through consolidation
- [ ] Improved debugging experience with consistent `Show-Invocation` logging
Risks and Mitigation
Risk: Breaking GitHub Actions integration
Mitigation: Maintain existing script interfaces and parameter contracts
Risk: Introducing new bugs during refactoring
Mitigation: Comprehensive testing strategy with both unit and integration tests
Risk: PowerShell module loading complexity
Mitigation: Use explicit `Import-Module` statements and test module loading scenarios
Resources
- PowerShell Module Best Practices
- Pester Testing Framework
- Current Performance Scripts: `build/scripts/perf/`
Backlinks:
- PR: https://github.com/rjmurillo/moq.analyzers/pull/571
- Comment: https://github.com/rjmurillo/moq.analyzers/pull/571#discussion_r2221235499
- Requested by: @rjmurillo