
Replace Bottleneck with p-queue for throttling

Copilot opened this issue 3 months ago · 8 comments

Bottleneck hasn't been updated since 2023 and uses outdated patterns. This change replaces it with p-queue while keeping the existing API, replicating Bottleneck's observed behavior based on a close reading of its source code.

Changes

  • Dependencies: Removed bottleneck, added p-queue
  • Throttling: Built ThrottleGroup wrapper around p-queue with shared queue per group (matching Bottleneck's per-group limit behavior)
  • Retries: Implemented ThrottleLimiter for retry logic with exponential backoff
  • Events: Switched to Node.js EventEmitter from Bottleneck's custom event system
  • Redis clustering: Replaced Bottleneck-specific types with generic Connection interface accepting any object with disconnect() and on() methods
  • Concurrency limits: Preserved original Bottleneck configuration (maxConcurrent=10 for global group)
  • minTime implementation: Uses p-queue's intervalCap and interval options to enforce minimum delay between request starts
  • AsyncLock: Implemented custom lock class to serialize queue submissions (mimics Bottleneck's internal Sync locks)
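The minTime and defaults bullets above can be sketched as a mapping from Bottleneck-style options onto p-queue options. The helper name below is hypothetical, not the plugin's actual code:

```javascript
// Hypothetical helper mapping Bottleneck-style options onto p-queue options.
function toPQueueOptions({ maxConcurrent = null, minTime = 0 } = {}) {
  const options = {
    // Bottleneck's default maxConcurrent is null ("unlimited");
    // p-queue expresses that as Infinity.
    concurrency: maxConcurrent ?? Infinity,
  };
  if (minTime > 0) {
    // Allow at most one job start per minTime window, enforcing the
    // minimum delay between request starts.
    options.intervalCap = 1;
    options.interval = minTime;
  }
  return options;
}

// e.g. the write group: serial execution, 1s between request starts
const writeOptions = toPQueueOptions({ maxConcurrent: 1, minTime: 1000 });
// yields concurrency: 1, intervalCap: 1, interval: 1000
```

Note that p-queue enforces the cap per fixed interval window rather than as a rolling delay, so this approximates rather than exactly reproduces Bottleneck's minTime spacing.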

Breaking Changes

The Bottleneck parameter is removed from throttle options:

// Before
import Bottleneck from "bottleneck";
const octokit = new Octokit({
  throttle: {
    Bottleneck,  // No longer needed
    onRateLimit: () => true,
    onSecondaryRateLimit: () => true
  }
});

// After
const octokit = new Octokit({
  throttle: {
    onRateLimit: () => true,
    onSecondaryRateLimit: () => true
  }
});

Redis clustering still works but connection objects must now implement the generic interface instead of using Bottleneck's connection classes.
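Under the generic interface, any object exposing those two methods is accepted. A sketch of the duck-typing contract (the validator name is hypothetical, not part of the plugin's public API):

```javascript
// Hypothetical validator for the generic Connection interface described
// above: any object exposing disconnect() and on() is acceptable.
function isCompatibleConnection(connection) {
  return (
    connection !== null &&
    typeof connection === "object" &&
    typeof connection.disconnect === "function" &&
    typeof connection.on === "function"
  );
}

// A stand-in with the right shape; a real ioredis client, or any
// EventEmitter with a disconnect() method, also satisfies it.
const connection = {
  on(event, listener) {},   // subscribe to connection events
  disconnect() {},          // close the underlying connection
};
console.log(isCompatibleConnection(connection)); // true
```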

Implementation Notes

How It Works (Matching Bottleneck)

Careful study of Bottleneck's source code revealed the key behavior:

  • Bottleneck uses internal Sync locks that serialize submission operations (_submitLock and _registerLock)
  • Even with maxConcurrent=10, job submissions are processed one at a time
  • This creates natural serialization with async operations while still allowing concurrent execution
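The serialization those locks provide can be sketched with a small promise-chain lock. This is a simplified stand-in for illustration, not the plugin's actual AsyncLock:

```javascript
// Simplified sketch of an AsyncLock: acquisitions run strictly one at a
// time, in FIFO order, mirroring how Bottleneck's _submitLock serializes
// submissions while the submitted jobs may still execute concurrently.
class AsyncLock {
  constructor() {
    this.tail = Promise.resolve();
  }
  // Run fn only after every previously acquired fn has settled.
  acquire(fn) {
    const run = this.tail.then(() => fn());
    // Swallow rejections on the chain so one failure doesn't jam the lock.
    this.tail = run.catch(() => {});
    return run;
  }
}

// Submissions may do async work, yet complete in FIFO order.
const lock = new AsyncLock();
const order = [];
lock.acquire(async () => order.push("first"));
lock.acquire(async () => order.push("second")).then(() => {
  console.log(order); // "first" always precedes "second"
});
```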

Our implementation replicates this with:

  • AsyncLock class: Serializes queue submission operations (mimics _submitLock)
  • Per-group limits: maxConcurrent applies to entire group, shared across all keys
  • minTime: Enforces minimum delay between request starts using p-queue's interval limiting
  • Defaults: maxConcurrent=unlimited, minTime=0 (matching Bottleneck's null and 0 defaults)
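The per-group limit can be illustrated with a dependency-free stand-in. The real ThrottleGroup delegates to p-queue; this sketch only shows the shared-budget behavior, and the class name is illustrative:

```javascript
// Illustrative stand-in for the per-group limit: one shared concurrency
// budget for the whole group, no matter which key a job arrives under.
class GroupLimiter {
  constructor({ maxConcurrent = Infinity } = {}) {
    this.maxConcurrent = maxConcurrent;
    this.active = 0;
    this.waiting = []; // deferred job starts, FIFO
  }
  add(job) {
    return new Promise((resolve, reject) => {
      const start = () => {
        this.active += 1;
        Promise.resolve()
          .then(job)
          .then(resolve, reject)
          .finally(() => {
            this.active -= 1;
            const next = this.waiting.shift();
            if (next) next();
          });
      };
      if (this.active < this.maxConcurrent) start();
      else this.waiting.push(start);
    });
  }
}

// Jobs for different keys share one limiter, so the group-wide cap holds.
const group = new GroupLimiter({ maxConcurrent: 1 });
let peak = 0;
const job = () => {
  peak = Math.max(peak, group.active);
  return new Promise((r) => setTimeout(r, 10));
};
Promise.all([group.add(job), group.add(job)]).then(() => {
  console.log(peak); // never exceeds maxConcurrent
});
```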

Current Configuration

  • Global group: maxConcurrent=10, no minTime → allows up to 10 concurrent requests
  • Auth group: maxConcurrent=1, no minTime → serial execution
  • Write group: maxConcurrent=1, minTime=1000ms → serial with 1s between starts
  • Search group: maxConcurrent=1, minTime=2000ms → serial with 2s between starts
  • Notifications group: maxConcurrent=1, minTime=3000ms → serial with 3s between starts
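As an options table, the configuration above looks roughly like this (the object shape and key names are illustrative, but the limits match the documented values):

```javascript
// Illustrative shape of the group configuration listed above.
const groupOptions = {
  global: { maxConcurrent: 10 },                      // up to 10 concurrent
  auth: { maxConcurrent: 1 },                         // serial
  write: { maxConcurrent: 1, minTime: 1000 },         // serial, 1s spacing
  search: { maxConcurrent: 1, minTime: 2000 },        // serial, 2s spacing
  notifications: { maxConcurrent: 1, minTime: 3000 }, // serial, 3s spacing
};
```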

Testing

Test results: 30 of 31 tests pass (1 skipped); no failures

  • All retry tests pass (secondary rate limit + primary rate limit)
  • All request ordering/serialization tests pass
  • Concurrency tests pass
  • Error event handling working correctly

Coverage

  • 98.09% lines/statements covered
  • 96.03% branches covered
  • Uncovered code: Defensive error handling in AsyncLock (not triggered in normal operation)
Original prompt

replace Bottleneck with a modern alternative, while keeping Redis clustering available as an option



Copilot · Nov 08 '25 15:11