Proposal: Enable CI/CD Interoperability Through SDLC Workflow Segments
Summary
Interoperability is the most critical unsolved challenge in CI/CD today. Every enterprise, team, and open source ecosystem reinvents the delivery pipeline—gluing together tools with brittle integrations and siloed logic. The result is a fragmented landscape of disconnected workflows, incompatible formats, and hardcoded behavior.
We need to stop treating SDLC automation as if it’s unique to every company. It isn’t. The steps—build, test, deploy, release, rollback—are shared. What changes is how those steps are implemented. If we can describe what a workflow is trying to do, we can stop hardcoding how it does it.
CDEvents gives us a shared structure for what happened. But to achieve true interoperability, we need a shared structure for what was supposed to happen.
That’s where Workflow Segments come in.
Segments define the intent of a unit of SDLC work. They describe which group of CDEvents constitutes a build, a test, a security scan, or a deployment—regardless of the tool that performs it. That definition becomes portable, enforceable, observable, and orchestratable across ecosystems.
With Workflow Segments, we can finally:
- Build tool-agnostic orchestration engines that work
- Align policy and automation with semantic meaning
- Support heterogeneous tools without rewriting logic
- Enable true reuse of business workflows across platforms
This proposal is not just about segments. It is about interoperability as a capability—and the realization that intent is the missing layer in CI/CD.
Philosophical Grounding
This proposal is grounded in the Sanyika Principles of Interoperability—a framework shaped by testing and discarding every other model until something clicked.
“Sanyika” means gatherer—and that’s what this work does: gather the scattered fragments of automation into a cohesive, normalized perspective.
At its core is a simple belief: Truth is not subjective—it can only be normalized.
That normalization enables orchestration across time, tools, teams, and vendors. It means defining workflows not as procedural scripts, but as declarative contracts of intent. This proposal is one such contract.
Problem Statement
CI/CD today suffers from tool fragmentation, brittle integrations, and the lack of a shared understanding of workflow intent.
Even with growing adoption of CDEvents, there’s still no common way to:
- Propagate the role each tool plays in the SDLC workflow (not just what it does)
- Validate that a tool fulfilled its intended role
- Port workflows between systems without rewriting automation
Instead, pipelines are defined imperatively: one step at a time, tightly coupled to specific tools. This makes:
- Tool changes brittle
- Workflow reuse difficult
- Policy enforcement superficial
- Monitoring incomplete
Organizations waste effort rebuilding what others have already built. Vendor lock-in flourishes. Internal Developer Platforms (IDPs) become polished front-ends to brittle backends.
We don’t need more wrappers. We need a shared language of workflow intent.
Solution: Workflow Segments
Workflow Segments introduce a common, declarative contract that defines what a unit of SDLC work means.
Each segment:
- Defines the CDEvents expected for a given operation (e.g., build)
- Can be versioned, validated, and reused
- Is tool-agnostic by design
- Allows orchestrators to infer progress and intent from emitted events
This enables true interoperability:
- Orchestrate from events, not scripts
- Policy engines can validate by meaning, not metadata
- Observability tools can detect gaps in intent fulfillment, not just data capture
- Workflows become portable across Jenkins, Tekton, GitHub Actions, and more
This isn’t a spec tweak. It’s a semantic layer for DevOps.
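To make this concrete before the architecture sketch below, a deliberately simplified, illustrative segment contract might look like the following (the full, richer definitions appear later in this proposal):
segment: build
produces:
  # events any conforming tool is expected to emit, however it chooses to do so
  - event: dev.cdevents.build.started
  - event: dev.cdevents.build.finished
Any tool that emits these events, in whatever way it implements them, fulfills the intent of the segment.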
┌───────────────────────────────────── CONDUIT EVENT ORCHESTRATOR ─────────────────────────────────────┐
│ │
│ ┌──────────────────┐ ┌─────────────────────────────┐ │
│ │ Workflow YAML │───────► │ Segment Contract Loader │ │
│ │ (Segments + DSL) │ └─────────────────────────────┘ │
| └──────────────────┘ | |
│ │ │ │
│ ▼ ▼ │
│ ┌──────────────────┐ ┌────────────────────────┐ ┌────────────────────────┐ │
│ │ Event Listener │◄────┤ CDEvent Matcher ├────►| Event Router (Graph) │ │
│ └──────────────────┘ └────────────────────────┘ └────────────────────────┘ │
│ │ │ ▲ │
│ ▼ ▼ │ │
│ ┌────────────┐ ┌────────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ Tool Event │──────►│ Segment Mapper │◄──────►│ DSL Executor │ │ Policy Engine│ │
│ └────────────┘ └────────────────┘ └──────────────┘ └──────────────┘ │
│ │ │
│ ▼ │
│ ┌────────────────────────────┐ │
│ │ Observability / SLO Layer │◄────── backfill, inference, annotations ──────┘ |
│ └────────────────────────────┘ |
│ |
└──────────────────────────────────────────────────────────────────────────────────────────────────────┘
Event Flow: Reactive Graph Built from Runtime Events
┌────────────┐ ┌────────────┐ ┌────────────┐ ┌────────────┐ ┌────────────┐ ┌────────────┐
│ Jenkins CI │────►│ emits: │────►│ matches │────►│ triggers │────►│ DSL-valid. │────►│ Segment OK │
│ run │ │ build.* │ │ segment def│ │ route/next │ │ / Backfill │ │ or Escalate│
└────────────┘ └────────────┘ └────────────┘ └────────────┘ └────────────┘ └────────────┘
Let’s define it together.
Conduit Workflow Segment Definitions
- change-request
- review-change-request
- artifact-store
- build
- build-deploy
- build-merge
- build-store
- deploy
- service-deploy
- verify
- ticket-associate
Definition
Base Event
A Base Event is the foundational event type in the CDEvents hierarchical event inheritance system. It serves as the root event, from which all other CDEvents, both standard and custom, inherit. This structure promotes consistency while enabling flexibility for interoperable custom event definitions.
Base Event
|
|------ CDEvent (e.g., pipelineRun.started)
| |
| |------ Custom Event (e.g., myCustomToolPipelineRun.started)
| | |
| | |------ Another Custom Event (e.g., myCustomAbstractedPipelineRun.started)
- The base event acts as the foundational event type with many of the common CDEvent fields. See here
- Inheriting from the base event, we see a CDEvent, which is any event defined by the CDEvents specification, e.g., pipelineRun.started.
- Inheriting from the CDEvent is the "Custom Event" that some organization, company, or user has provided. In the point above, we gave the example of pipelineRun.started as the CDEvent; if our custom event inherits from that CDEvent, then our custom event acts as a "pipelineRun.started".
- Inheriting from the custom event is another custom abstraction.
The base event defines a minimal set of common fields (context, subject, and customData) that all derived events must contain. Any event in the hierarchy must inherit these base fields. This ensures that all events are consistent and easy to parse, validate, and extend, which promotes interoperability.
baseEvent:
type: event
fields:
- name: context
type: context
- name: subject
type: subject
- name: customData
type: object
context:
type: object
fields:
- name: id
desc: An identifier for the event.
type: string
- name: type
desc: Describes the type of event in the format <namespace>.<subject>.<predicate>.<version>
type: string
- name: source
desc: The context in which an event happened. This provides global
uniqueness when paired with `id`.
type: URI
- name: timestamp
desc: The time of the occurrence. If the time of the occurrence is not
captured, using the time when the event was produced is sufficient.
format: rfc3339
- name: version
desc: The semantic version of the CDEvents specification.
type: string
- name: chainId
desc: A unique identifier to track an associated workflow.
type: string
- name: links
desc: An array of link objects.
type: array
subject:
type: object
fields:
- name: id
desc: An identifier for the subject.
type: string
- name: content
desc: The subject's content.
type: any
- name: source
desc: Defines the context in which the subject originated. The format and
semantics of subject.source mirrors context.source.
type: string
- name: type
desc: Describes the definition of the content's data.
type: string
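To make the base fields concrete, an illustrative (non-normative) event instance might look like the following; the id, source, version, and customData values are placeholders chosen for this sketch, not values mandated by the proposal:
context:
  id: "271069a8-fc18-44f1-b38f-9d70a1695819"  # placeholder identifier
  type: "dev.cdevents.build.started.0.1.0"    # <namespace>.<subject>.<predicate>.<version>
  source: "https://ci.example.com/jenkins"    # hypothetical source URI
  timestamp: "2024-01-01T12:00:00Z"
  version: "0.4.0"                            # placeholder CDEvents spec version
  chainId: "workflow-run-42"                  # placeholder workflow correlation id
subject:
  id: "build-1234"
  type: "build"
  content: {}
customData:
  build_system: "jenkins"                     # example of tool-specific data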
Segment
| key | description | type | required | notes |
|---|---|---|---|---|
| segment | The name of the segment | string | ✅ | |
| produces | Represents the expected set of CDEvents or segments that this segment is responsible for emitting or triggering. It does not imply strict enforcement, only intent. | array<produce> | ✅ | Minimum array size is 1 |
Produce
| produce key | description | type | required | notes |
|---|---|---|---|---|
| event | The CDEvent to be expected | cdevent | ⚠️ | Mutually exclusive to segment. Only one of segment or event can be set for a given item. |
| event-schema-uri | The URI of the schema that defines the expected event | URI | 🔁 | Conditionally required when event is set |
| segment | The name of the expected segment | segment | ⚠️ | Mutually exclusive to event. Only one of segment or event can be set for a given item. |
Fields
Example Segment Definition
# name of the segment
segment: the-segment
produces:
# events that are expected to be produced
- event: dev.cdevents.some.cdevent
- event: dev.cdevents.another.cdevent
- segment: another-segment-definition
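Per the produce table above, an item may also carry an event-schema-uri alongside its event; a hedged sketch is shown below, where the URI is purely illustrative and not an official schema location:
segment: the-segment
produces:
  - event: dev.cdevents.some.cdevent
    event-schema-uri: https://example.com/schemas/some-cdevent.json  # illustrative URI
  - segment: another-segment-definition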
Tool Types
This section is a list of tool types where each section will contain segment definitions.
SCM
The SCM tool type is any tool that fits the source control management category: a tool that handles versioning of source code and related assets.
change-request
A change-request segment represents when a change has been created and
merged.
example:
A GitOps workflow where a pull request is created and then merged, triggering a deployment downstream.
definition:
segment: change-request
produces:
- event: dev.cdevents.change.create
- event: dev.cdevents.change.merge
review-change-request
Similar to the change-request, but with the caveat that the changes were
reviewed.
definition:
segment: review-change-request
produces:
- event: dev.cdevents.change.create
- event: dev.cdevents.change.review
- event: dev.cdevents.change.merge
CI
The CI tool type is any tool that fits the CI category, where CI is defined as handling continuous integration activities, like build and test.
artifact-store
The artifact-store segment represents when an artifact has been both packaged
and published.
example:
A Docker image being published to Docker Hub after a CI build has completed.
definition:
segment: artifact-store
produces:
- event: dev.cdevents.artifact.package
- event: dev.cdevents.artifact.publish
build
The build segment represents the common CI build process of building and
testing some code.
example:
Jenkins starts a build pipeline and compiles some code.
definition:
segment: build
required_fields:
- name: build_system
desc: which build tool was used
event: build.started
- name: change_hash
desc: hash associated with the change
event: change.create
- name: change_ref
desc: ref (e.g., branch) associated with the change
event: change.create
- name: user
desc: user or service account associated with the action
event: build.started
- name: parameters
desc: parameters that were used to trigger the build
event: build.started
- name: upload_target
desc: location where artifacts will be published
event: build.finished
- name: outcome
desc: build outcome which indicates whether the build had succeeded or otherwise
event: build.finished
produces:
- event: dev.cdevents.change.create
- event: dev.cdevents.build.queue
- event: dev.cdevents.build.started
- segment: verify
- event: dev.cdevents.build.finished
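The proposal does not prescribe where required fields are carried; one plausible convention is for the producing tool to place them in the customData of the corresponding event. The payload below is purely illustrative of that assumption for the build.started required fields:
context:
  type: "dev.cdevents.build.started.0.1.0"
  source: "https://ci.example.com/jenkins"  # hypothetical source
subject:
  id: "build-5678"
  type: "build"
customData:
  build_system: "jenkins"  # would satisfy the build_system required field
  user: "svc-ci-bot"       # would satisfy the user required field
  parameters:
    branch: "main"         # example trigger parameter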
build-deploy
The build-deploy segment is a composite of the build and deploy segments.
definition:
segment: build-deploy
required_fields:
- name: trigger
desc: describes how the pipeline or build was started
event: pipelineRun.started
- segment: build
- segment: deploy
produces:
- event: dev.cdevents.pipelineRun.started
- segment: build
- segment: deploy
- event: dev.cdevents.pipelineRun.finished
build-merge
The build-merge segment represents a CI build that runs in response to a change
and merges that change upon successful completion.
example:
A GitHub PR triggers a Jenkins job.
definition:
segment: build-merge
required_fields:
- segment: build
- name: trigger
desc: describes how the pipeline or build was started
event: pipelineRun.started
- name: base_ref
desc: the base branch to merge to
event: change.created
- name: merge_strategy
desc: which strategy was used for merging, e.g. rebase, merge-commit, etc
event: change.merged
- name: merge_outcome
desc: the outcome of the merge, indicating whether or not the merge was successful
event: change.merged
- name: change_id
desc: identifier used by SCM tooling to distinguish the change to be merged, e.g. a PR number
event: change.created
produces:
- event: dev.cdevents.change.created
- event: dev.cdevents.pipelineRun.started
- segment: build
- event: dev.cdevents.change.merge
- event: dev.cdevents.pipelineRun.finished
build-store
The build-store segment is a composite of the build and artifact-store segments.
example:
A Jenkins build job that produces an artifact and stores it in Artifactory.
definition:
segment: build-store
required_fields:
- segment: build
- name: trigger
desc: describes how the pipeline or build was started
event: pipelineRun.started
# artifact_format seemed necessary for publishing if there
# were a separation of tools from packaging to publishing
- name: artifact_format
desc: format of the artifact
event: artifact.packaged
- name: artifact_digest
desc: digest of the artifact which is a common API for artifact repos
event: artifact.packaged
produces:
- event: dev.cdevents.pipelineRun.started
- segment: build
- segment: artifact-store
- event: dev.cdevents.pipelineRun.finished
CD
The CD tool type is any tool that fits the CD category, where CD is defined as handling continuous deployment activities, like deployments and canary evaluation.
deploy
The deploy segment represents a deployment process in a CD tool.
example:
A user triggering a Spinnaker deployment pipeline.
definition:
segment: deploy
required_fields:
- name: artifact_uri
desc: the PURL identifier for the artifact
event: deployment.started
- name: environment
desc: environment where we want to deploy the artifact to
event: deployment.started
- name: strategy
desc: deployment strategy used, e.g. blue/green, canary
event: deployment.started
- name: user
desc: user or service account triggering the deployment
event: deployment.started
- name: parameters
desc: additional parameters for deployment
event: deployment.started
- name: outcome
desc: the outcome of the deployment
event: deployment.finished
- name: endpoint
desc: endpoint/URL where the service is accessible
event: service.published
produces:
- event: dev.cdevents.pipelineRun.queued
- event: dev.cdevents.pipelineRun.started
- event: dev.cdevents.deployment.started
- segment: service-deploy
- event: dev.cdevents.deployment.finished
- event: dev.cdevents.pipelineRun.finished
service-deploy
The service-deploy segment represents when a deployment of a service has
occurred and the service is healthy enough to receive traffic.
example:
My Kubernetes manifest was applied to the cluster and the readiness probe is healthy.
definition:
segment: service-deploy
produces:
- event: dev.cdevents.service.deploy
- event: dev.cdevents.service.publish
Operations
Operational tool types are tools that do not typically fit in SCM, CI, or CD. Good examples are testing, ticketing, and incident tooling.
verify
The verify segment represents some automated testing (unit, integration, etc.)
triggered as part of the CI pipeline.
example:
A CI job runs unit tests for a build.
definition:
segment: verify
produces:
- event: dev.cdevents.testsuiterun.started
- event: dev.cdevents.testcaserun.started
- event: dev.cdevents.testcaserun.finished
- event: dev.cdevents.testsuiterun.finished
ticket-associate
The ticket-associate segment represents when a ticket is associated with a pull request.
example:
My Jira ticket is tied to https://github.com/myawesomeorg/myawesomeservice/pull/123
definition:
segment: ticket-associate
produces:
- event: dev.cdevents.ticket.create
- event: dev.cdevents.change.create
Acknowledgments
- Practical implementations in the Conduit project
- Collaborative design by Dadisi Sanyika and Benjamin Powell
Thanks @dsanyika and @xibz for the thorough proposal!
We previously discussed documenting how to use events in real-life contexts because right now, the semantics of events are somewhat open to interpretation, which is not suitable for interoperability. This takes the idea a step further.
If I understood correctly, segments would be an integral part of the specification.
Tools that produce events may not always be able to comply with segments, as pipeline definitions are often defined by end users. Tools that consume events, however, would be able to reason in terms of segments.
The segments specification could be hosted in a new repo within the CDEvents org, or as a subspec within the existing spec repo.
Absolutely, our system is designed as a closed yet highly adaptable environment. Users have the flexibility to define their own workflows, which can be tailored for interoperability if desired, offering a near-infinite range of possibilities. Meanwhile, the community has the opportunity to establish foundational standards for each segment. Our approach isn't restrictive—it's not about mandating specific events in a precise sequence. Rather, it's about recognizing that certain sequences of events can optimize functionality and efficiency. We believe this collaborative framework will enhance both individual and collective outcomes.
First off, a very detailed proposal and set of thoughts; I can very much see the need for standardisation and improvement in the tool-chain.
What are the thoughts about moving the events being produced out of the segment and instead referring to a collection of events (a stage) based on a strongly defined id, as these are likely to be tightly controlled by the tool?
Hey, @thompson-tomo thank you for taking the time to review our proposal. Let's pause a moment to ensure we’re on the same page regarding 'moving the events.' Are you suggesting a name for the event segments?
The proposal outlines the mechanism for the semantic language of 'intent,' which guides how the SDLC workflow orchestrator manages these events. With the orchestrator as the central hub of knowledge for the SDLC workflow, you have the flexibility to group 'actual' events in a way that best suits your needs.
Ultimately, our proposal explains how a tool for centralized CDEvents management would work.
Let me try and showcase via an example.
What has been proposed is as below:
segment: build-deploy
required_fields:
- name: trigger
desc: describes how the pipeline or build was started
event: pipelineRun.started
- segment: build
- segment: deploy
produces:
- event: dev.cdevents.pipelineRun.started
- segment: build
- segment: deploy
- event: dev.cdevents.pipelineRun.finished
What I am thinking is splitting the definition in 2:
- Tools: a way for tools to share with the orchestrator what tasks they support
- Workflow: as per now but designed to foster simplification.
A tool is defined as per below with the focus on the events which are produced
name: github
tasks
- name: build-deploy
produces:
- event: dev.cdevents.pipelineRun.started
when: pre
- event: dev.cdevents.pipelineRun.finished
when: post
- event: dev.cdevents.pipelineRun.xyx
when: failure
required_fields:
- name: trigger
desc: describes how the pipeline or build was started
performs:
- task: build
- task: deploy
This way your segment in the Workflow can become:
segment: build application
task: build-deploy
tool: github
variables:
- name: trigger
source: pipelineRun.started
value: source
implements:
- segment: build
- segment: deploy
By having this approach it is easy for a user of the orchestrator to efficiently move to a different tool.
@thompson-tomo I love the way you are thinking about the problem. What you're describing is accounted for, but not in the Workflow Segments, which describe the system generics. It would be in the Workflow YAML that describes the actual workflow process (commit --> SCM tool --> PRB tool --> merge operation tool --> build tool --> artifact repository); that is where you would declare the brand of the tool if you wanted to take advantage of tool-specific features.
The goal of the Workflow Segment is to define the minimum set of events that occur in every build. This declaration (and community discussion) binds the interoperability to a common place.
We the people define and refine the interoperability definitions together.
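For illustration, a minimal sketch of what such a Workflow YAML might look like; the keys and tool names below are hypothetical placeholders, not part of the Workflow Segment definitions:
workflow: my-service-delivery
steps:
  - segment: change-request
    tool: github      # hypothetical tool binding
  - segment: build-store
    tool: jenkins
  - segment: deploy
    tool: spinnaker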
Ahh ok, I think I am getting a better understanding now.
Based on that recent feedback I am thinking of the following:
Workflow Segment
Should these be referred to as a role instead?
role: build-deploy
needs:
- name: reference
desc: a reference to use
extends:
- role: build
- role: deploy
skills:
- event: dev.cdevents.pipelineRun.started
- event: dev.cdevents.pipelineRun.finished
Tool
name: github
roles:
- role: build-deploy
needs:
- name: trigger
desc: describes how the pipeline or build was started
- role: build
- role: deploy
Workflow
Created via the orchestrator
name: mobile application
conditions:
-
stages:
- name: build application
requiredrole: build-deploy
allocatedtool: github
variables:
- name: trigger
source: pipelineRun.started
value: source
In the end, we end up with a mission statement along the lines of "Enable workflows to be completed by disparate tools which are equipped with the skills to perform their role in the system."
After thinking about this some more over the past few days, I am wondering if the CD Events project is the right location to be defining this interoperability.
Should we instead be starting a new project, perhaps OpenInteroperability, which has the mission I stated above:
Enable workflows to be completed by disparate tools which are equipped with the skills to perform their role in the system.
That way CD Events can focus on its mission of:
A common specification for Continuous Delivery events, enabling interoperability in the complete software production ecosystem
The reason for this is to enable the orchestrator to work not just for CD but for any business need. In that case, CD Events would be a contributor of registered roles to OpenInteroperability.
Hey @thompson-tomo, I really appreciate your continued thinking on this. You're leaning into some of the deeper questions around interoperability, and it's encouraging to see how quickly you're mapping those questions into a structured model. That's the kind of contribution that moves us forward.
To clarify a few things:
You're absolutely right that interoperability at the workflow level is a broader challenge than any one tool; the specific implementation you're describing is within the scope of the CDEvents project. The Workflow Segment concept is not a general-purpose abstraction for systems in general; it is a CDEvents use case, rooted in the core mission of enabling software delivery systems to interoperate through a shared vocabulary of events.
That’s where Conduit comes in: a CDF project that will reference and extend the CDEvents specification to allow orchestrators to reason about intent, not just event streams. Conduit doesn’t move beyond the scope of CI/CD; it dives deeper into it. The goal is to allow tools to participate meaningfully in delivery workflows regardless of vendor, language, or architecture, and that means aligning on both event and intent semantics within the SDLC domain. It is always about Continuous Software Delivery.
As for the interoperability principles you’re picking up on—they are not a separate project. They function like design patterns: reusable, transferable, but anchored in specific use cases. The same way Gang of Four patterns weren’t a library, but a shared language for thinking and building. These principles were defined to inform the work here—not to spin out a separate mission.
So rather than forking into something like “OpenInteroperability,” I’d invite you to dive deeper into this effort: to help refine the models, to challenge assumptions, and to ensure that the interoperability we define here truly enables tools to work together with both technical and semantic alignment.
You’re definitely thinking in the right direction. That curiosity is exactly what this project needs. The framing you’re exploring (roles, skills, missions) reflects an intuition for modular, interoperable systems. While the specific model you're proposing isn't aligned with how Workflow Segments are defined today, as the project unfolds, I think you’ll see how the architecture enables more flexibility and composability than may be apparent right now. The foundation we’re building is intentionally open but grounded in the concrete semantics of software delivery.
Let’s build this here, together, with precision.