test-infra
Centralized Prow configuration controller
**What would you like to be added:** A controller that serves all Prow and job YAML configurations.

**Current state:** The config agent currently reloads files from local storage at a fixed interval, which requires the configurations to be mounted into every Prow component.

**Proposed change:** Introduce a dedicated controller that serves configurations, eliminating the need for local storage. The controller would expose either a REST API or gRPC, with gRPC enabling event-based reloading instead of polling.
**Implementation:** An extra option can be added to the config agent to switch between local and remote configuration loading, preserving backward compatibility.
**Benefit:** Configurations no longer need to be mounted into each Prow component, simplifying the setup.

**Discussion points:** Potential refactoring challenges and the impact of this change on the existing system.
/cc @cjwagner @BenTheElder @stevekuznetsov @petr-muller @smg247 @jmguzik @hongkailiu
/sig testing
/area prow
What are the benefits to making configuration load a network hop versus a filesystem call? Naively, mounting in volumes sounds like a faster and less error-prone process - what do you envision Prow would do when the network flakes out and it can't reach the configuration server?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten