
[Question] How to selectively publish APIs across different APIM environment instances

Open rigmiklos opened this issue 2 months ago • 13 comments

Release version

v6.0.1.10

Question Details

Background: I have 3 different instances of APIM, one APIM instance for each environment, e.g. dev, uat and production (PRD). Each environment also has its own branch in GitHub.

The APIM contents may differ between environments, because APIs are in different stages of the SDLC. More details below.

The dev APIM contains APIs that are proofs of concept, under development, or for demos. When an API is ready from development or a successful POC, I put it in the UAT APIM for user testing. Only once UAT is done and users have signed off does the API go into the PRD APIM.

Publishing across environments: My ideal APIOps flow (CI/CD + GitOps) is for main to be the single source of truth. From main I generate the dev branch, then the uat branch. Any new API starts as a feature branch from dev.

Flow: feature -> dev -> uat -> prd

Issue: There is currently no option to selectively publish specific APIs during promotion between environments.

The publisher supports only:

  • Full repository deployment, or
  • Deployment by a specific commit ID

As a result, APIs that are not yet ready for promotion (e.g. unfinished POCs or APIs without user sign-off) get deployed unintentionally.

Hence I would like to check whether anyone has a similar setup, and how you configure your CI/CD.

Expected behavior

When merging to a higher environment, e.g. from dev to uat, be able to select which APIs are not to be published, e.g. in a publisher config file. Only APIs that are not listed in it would be published (a hypothetical sketch follows below).
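
For illustration, the exclusion list might look something like this (purely hypothetical; the excludedApiNames key is an invented name and does not exist in APIOps today):

# Hypothetical addition to the publisher configuration file, e.g. configuration.uat.yaml.
# The excludedApiNames key is invented for illustration; APIOps does not support it today.
excludedApiNames:
  - inventory-api      # still under development, keep out of UAT
  - some-poc-api       # proof of concept, dev only
# Every API present in the artifacts folder but NOT listed above would be published as usual.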

Actual behavior

The publisher can only publish by commit ID or the whole repository.

Reproduction Steps

  • Set up 3 different Azure API Management (APIM) instances, one for each environment:
    • apim-dev (development)
    • apim-uat (user acceptance testing)
    • apim-prd (production)

  • Configure the APIOps extractor and publisher pipelines, each linked to a different GitHub branch:
    • dev branch → apim-dev
    • uat branch → apim-uat
    • prd branch → apim-prd

  • In apim-dev, create several APIs for testing or proof-of-concept (POC). Example:
    • orders-api (ready for UAT)
    • inventory-api (still under development)

  • Run the extractor pipeline for apim-dev, pushing configurations to the dev branch.

  • Merge the dev branch into the uat branch to promote APIs to UAT.

  • Run the APIOps publisher pipeline for the uat environment.

  • Observe that the publisher pipeline publishes all APIs from the dev branch (both orders-api and inventory-api) to apim-uat.

rigmiklos avatar Oct 09 '25 17:10 rigmiklos

  Thank you for opening this issue! Please be patient while we look into it and get back to you, as this is an open source project. In the meantime, make sure you take a look at the [closed issues](https://github.com/Azure/apiops/issues?q=is%3Aissue+is%3Aclosed) in case your question has already been answered. Don't forget to provide any additional information if needed (e.g. scrubbed logs, detailed feature requests, etc.).
  Whenever it's feasible, please don't hesitate to send a Pull Request (PR) our way. We'd greatly appreciate it, and we'll gladly assess and incorporate your changes.

github-actions[bot] avatar Oct 09 '25 17:10 github-actions[bot]

We achieved this by using a custom config file which has API names for each environment, and a PowerShell script which runs in the pipeline to filter APIs from the artifacts based on the environment the pipeline is running for, so only that specific environment's APIs are published.
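
A minimal sketch of what such a per-environment config could look like (the file name and layout here are invented for illustration, not necessarily the exact format used in this setup):

# api-allowlist.yaml (hypothetical file) - API names allowed in each environment
dev:
  - orders-api
  - inventory-api
uat:
  - orders-api
prd:
  - orders-api

The PowerShell step would then read the list for the environment the pipeline is running for and drop every API folder in the artifacts that is not on it before invoking the publisher.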

mangeshparanjape avatar Oct 09 '25 17:10 mangeshparanjape

We handle environments by using 3 different stages in run-publisher.yaml that publish to each environment. It will take a lot of custom code to do what you want. We have a different "Main" branch for each API to keep things clean and separate, so changes in one API aren't accidentally deployed while pushing another. Infrastructure as code works a little differently than a traditional application's CI/CD deployment strategy.

megamax34 avatar Oct 09 '25 17:10 megamax34

We achieved this by using a custom config file which has API names for each environment, and a PowerShell script which runs in the pipeline to filter APIs from the artifacts based on the environment the pipeline is running for, so only that specific environment's APIs are published.

Hi @mangeshparanjape

Thanks for sharing.

Just to check, do you store the APIs in a different folder from the artifacts? I would also like more details on what the PowerShell script does.

It would be great if you could share a sample.

rigmiklos avatar Oct 10 '25 01:10 rigmiklos

We handle environments by using 3 different stages in run-publisher.yaml that publish to each environment. It will take a lot of custom code to do what you want. We have a different "Main" branch for each API to keep things clean and separate, so changes in one API aren't accidentally deployed while pushing another. Infrastructure as code works a little differently than a traditional application's CI/CD deployment strategy.

Hi @megamax34

Thanks for sharing.

I see, so each API has its own "Main" branch; in a sense it would be like an orphan branch, right? So for CI/CD, does it work something like this for an API, e.g. getLogs:

  • getLogs Main -> dev (APIM branch)
  • getLogs Main -> uat (APIM branch)
  • getLogs Main -> prd (APIM branch)

It would be great if you could also share a sample of the run-publisher.yaml; I am interested in what the 3 stages do.

rigmiklos avatar Oct 10 '25 01:10 rigmiklos

Maybe we just need a feature in the APIOps override YAML configs to tell the APIOps pipeline to not promote a particular API. This way we can have APIs in dev only for proof of concepts and other use cases?

This would avoid complex custom work and scripts, or multiple branches that basically break the idea of the flow.

riosengineer avatar Oct 11 '25 16:10 riosengineer

We’ve implemented a per-environment artifacts layout and it works:

  • artifacts-dev/ (Dev snapshot)
  • artifacts-test/ (curated subset)
  • artifacts-prod/ (curated subset)

Each stage targets its own folder + publisher.config.{env}.yaml, and we gate stages by changed paths (a sketch of such a path filter is below). This prevents Dev-only APIs from leaking upward, but it effectively behaves like three branches inside one repo; we have to manually copy APIs/backends/named values during promotion.

Given there’s no built-in way to keep a single artifacts tree while selectively promoting, could the team treat this issue as a feature request?
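
As an illustration of the path gating mentioned above, a pipeline trigger can be limited to one artifacts folder roughly like this (a sketch assuming Azure DevOps YAML pipelines; GitHub Actions has an equivalent paths filter, and the actual stage gating in this setup may be implemented differently):

# Sketch: only run this pipeline when the test artifacts change
trigger:
  branches:
    include:
      - main
  paths:
    include:
      - artifacts-test/*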

palmiv avatar Oct 11 '25 17:10 palmiv

I'll give a more detailed write-up of our setup and why we went down the path we did when I have some time to compile it all together!

megamax34 avatar Oct 14 '25 19:10 megamax34

We’ve implemented a per-environment artifacts layout and it works:

  • artifacts-dev/ (Dev snapshot)
  • artifacts-test/ (curated subset)
  • artifacts-prod/ (curated subset)

Each stage targets its own folder + publisher.config.{env}.yaml, and we gate stages by changed paths. This prevents Dev-only APIs from leaking upward, but it effectively behaves like three branches inside one repo; we have to manually copy APIs/backends/named values during promotion.

Given there’s no built-in way to keep a single artifacts tree while selectively promoting, could the team treat this issue as a feature request?

Good Suggestion

rigmiklos avatar Nov 03 '25 17:11 rigmiklos

We’ve implemented a per-environment artifacts layout and it works:

  • artifacts-dev/ (Dev snapshot)
  • artifacts-test/ (curated subset)
  • artifacts-prod/ (curated subset)

Each stage targets its own folder + publisher.config.{env}.yaml, and we gate stages by changed paths. This prevents Dev-only APIs from leaking upward, but it effectively behaves like three branches inside one repo; we have to manually copy APIs/backends/named values during promotion.

Given there’s no built-in way to keep a single artifacts tree while selectively promoting, could the team treat this issue as a feature request?

Hi @palmiv, thanks for sharing. I would like more details on how your structure works. Do you mean that for the dev branch, the publisher and extractor target the artifacts-dev path, whereas the branch itself contains artifact folders for all 3 environments (dev, test and prod)?

So when you want to promote to test, you have to copy the API from the dev folder into the test folder, and so on for prod?

rigmiklos avatar Nov 06 '25 06:11 rigmiklos

@megamax34, sure, looking forward to it!

I'll give a more detailed write-up of our setup and why we went down the path we did when I have some time to compile it all together!

rigmiklos avatar Nov 06 '25 06:11 rigmiklos

Maybe we just need a feature in the APIOps override YAML configs to tell the APIOps pipeline to not promote a particular API. This way we can have APIs in dev only for proof of concepts and other use cases?

This would avoid complex custom work and scripts, or multiple branches that basically break the idea of the flow.

Agreed with @riosengineer and @palmiv. Could the team change this to a feature request, or I can do it on my end by editing the title?

rigmiklos avatar Nov 06 '25 06:11 rigmiklos

Before closing this issue:

From the approaches shared and mentioned, it seems the two below are viable approaches.

@megamax34 @palmiv @mangeshparanjape I would like to brainstorm and get feedback on the approaches below (pros and cons, ways to refine them) and any other possible approaches.

To anyone reading this, do also chip in.

Single Artifacts Tree with Custom Selective Publish:

  • Keep one unified /artifacts folder (across the 3 branches) as the catalog for all APIs
  • Promotion uses normal Git merges (dev → uat → prd) plus a custom script serving as a pre-publish allowlist filter, so only allowed and intended APIs are deployed
  • The script reads a per-environment allowlist and, during deployment, builds a temporary staging folder containing only the allowed and intended APIs, which is then deployed to the target APIM (a rough sketch of this step follows after the next approach)

Per-Environment Artifacts Folders in a Single Branch:

  • Maintain separate artifacts folders per environment (Dev, UAT, PRD) within the same branch
  • Promotion involves copying the APIs from one artifacts folder (e.g. artifact-dev) to another (e.g. artifact-uat)
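
A rough sketch of the pre-publish filter step from the first approach (Azure DevOps YAML with an inline PowerShell step; the allowlist file name, the ENVIRONMENT variable and the folder paths are all assumptions for illustration):

steps:
  - pwsh: |
      # Read the allowlist for the target environment (file name is hypothetical)
      $allowlist = Get-Content "allowlists/$(ENVIRONMENT).txt" | ForEach-Object { $_.Trim() }

      # Build a temporary staging copy of the artifacts tree containing only allowed APIs
      Copy-Item -Path 'artifacts' -Destination 'staging' -Recurse
      Get-ChildItem 'staging/apis' -Directory |
        Where-Object { $allowlist -notcontains $_.Name } |
        Remove-Item -Recurse -Force

      # The publisher is then pointed at 'staging' instead of 'artifacts'
    displayName: Filter artifacts to the environment allowlist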

rigmiklos avatar Nov 14 '25 07:11 rigmiklos

How about an approach where the publisher can work in a complete 'snapshot' mode, comparing the complete git artifact repository at a COMMIT (where the commit could be a TAG as well) with the Azure APIM instance, and creating, updating and deleting any config on the APIM side accordingly.

This way, it is possible to work with a versioned git repository, e.g.:

  • DEV environment = main branch or a dev branch
  • UAT environment = git tag version 1.2.3
  • PROD environment = git tag version 1.2.0

So when running the publisher and selecting the git tag version/commit_id, it will deploy a complete snapshot of that git artifact repository to the APIM instance.

The publisher should in this case be able to compare the full configuration on the APIM side with the git repository.

  1. For each config type on APIM (backends, apis, tags, policy-fragments, etc.), get a list of published items
  2. If a published item on APIM does not exist in the git repository (at that tag version or commit_id), delete it from APIM
  3. If a published item on APIM exists in the git repository, update it
  4. If there is an item in the git repository which does not exist on the APIM side, create it

This way, you are able to:

  • work with a fully versioned APIM configuration
  • create release notes and reference features, updates, etc.
  • instantly roll back to a version on a specific environment

Though the order of execution of the above could be a pain, to prevent dependency and reference issues.
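
The comparison itself boils down to a set difference per resource type. A minimal sketch of that logic as an inline PowerShell step (assuming $inApim and $inRepo already hold the resource names from the APIM instance and from the git repository; how they are retrieved is out of scope here):

steps:
  - pwsh: |
      # Sketch of the snapshot comparison for a single resource type (e.g. APIs).
      # $inApim / $inRepo are assumed to be arrays of resource names obtained elsewhere.
      $toDelete = $inApim | Where-Object { $inRepo -notcontains $_ }  # on APIM only -> delete from APIM
      $toCreate = $inRepo | Where-Object { $inApim -notcontains $_ }  # in git only  -> create on APIM
      $toUpdate = $inRepo | Where-Object { $inApim -contains    $_ }  # in both      -> update on APIM
    displayName: Compute snapshot differences (illustration only)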

Remcovanderwerf avatar Dec 10 '25 07:12 Remcovanderwerf

@rigmiklos

Azure APIM Pipeline Documentation

Overview

There are two main pipelines: Extractor and Publisher. I will highlight two approaches we have tried. We have APIM instances for Dev and Test/UAT that are on the Developer tier, and one Premium tier instance for PROD. We typically do our work in the ApimDev instance in the portal; sometimes we use the VS Code extension, and we have also dabbled with the Postman Azure integration. Two approaches will be discussed: Approach 1: One Main Branch, and Approach 2: Multiple "Main" Branches. I see other approaches in this thread that are interesting, which I would like to explore at some point. Thank you @palmiv and @Remcovanderwerf.

Extractor Pipeline

The Extractor pipeline pulls down resources from your API Management instance based on your specifications. You specify these resources in your configuration.extractor.yaml file. Screenshot example at the end.

  • Default behavior: If nothing is specified, it will extract all available resources:

    • apiNames
    • diagnosticNames
    • loggerNames
    • productNames
    • backendNames
    • namedValueNames
    • subscriptionNames
    • tagNames
    • policyFragmentNames
    • groupNames
  • Extracted resources are placed in your artifacts folder (this location can be changed by customizing the pipelines, but we will not discuss that here since it is dangerous)

  • You can use the [ignore] syntax to exclude specific resource types from extraction; if something doesn't get extracted, it will not be deployed (see the sketch after this list).

  • This document focuses on APIs and NamedValues since they are the most typical
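
For example, an extractor configuration that pulls only two APIs and skips subscriptions could look roughly like this (a sketch based on the resource names and the [ignore] syntax described above; the API names are placeholders):

# configuration.extractor.yaml (sketch)
apiNames:
  - orders-api
  - inventory-api
subscriptionNames: [ignore]   # skip extracting subscriptions entirely
# Resource types not mentioned at all are extracted in full (default behavior).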

Publisher Pipeline

The Publisher pipeline publishes/deploys anything located in your artifacts folder to your specified APIM instance. This will use your configuration.{env}.yaml file to replace values in your artifacts folder with environment specific values.

The syntax structure follows the structure of your artifacts folder. For example, here is a namedValue override:

namedValues:
  - name: backend-url
    properties:
      displayName: backend-url
      value: "https://somebackendurlserviceforapi1.com"

If you look at your folder structure under named values, you'll see:

  • A folder named backend-url (corresponds to - name: backend-url)
  • A JSON file with properties.value (corresponds to properties: value: "https://somebackendurlserviceforapi1.com")
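
On disk, that correspondence looks roughly like this (a sketch of the default artifacts layout; exact folder and file names can vary between APIOps versions):

artifacts/
  named values/
    backend-url/                      # folder name matches "- name: backend-url"
      namedValueInformation.json      # contains properties.value, overridden per environment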

The configuration file for the publisher is VERY important to understand.

  • Recommended: Select the "publish-all-artifacts-in-repo" option
  • Use with caution: The "publish-artifacts-in-last-commit" option will delete resources from your APIM instance if they were deleted in the last commit. Tread lightly with this option unless your DevOps strategy is absolutely perfect.

The publisher pipeline that you will most likely run is "run-publisher.yaml", which just reaches out to "run-publisher-with-env.yaml". "run-publisher-with-env.yaml" has all the meat and potatoes, so to speak: it goes out and grabs the executable and other resources to actually do the publishing. In "run-publisher.yaml" you define your different pipeline stages / APIM environments to deploy to (we have dev, test and prod stages).
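
A stripped-down sketch of that stage layout (stage names and template parameters are illustrative, not necessarily the exact ones in the shipped run-publisher.yaml):

# run-publisher.yaml (sketch) - one stage per APIM environment,
# each delegating to run-publisher-with-env.yaml
stages:
  - stage: Push_To_Dev_APIM
    jobs:
      - template: run-publisher-with-env.yaml
        parameters:
          ENVIRONMENT: dev                                  # parameter names are illustrative
          CONFIGURATION_YAML_PATH: configuration.dev.yaml
  - stage: Push_To_Test_APIM
    dependsOn: Push_To_Dev_APIM
    jobs:
      - template: run-publisher-with-env.yaml
        parameters:
          ENVIRONMENT: test
          CONFIGURATION_YAML_PATH: configuration.test.yaml
  # ...and a similar Push_To_Prod_APIM stage depending on the test stage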


Deployment Strategies

Approach 1: One Main Branch

This approach uses a single main branch with a typical git/source control strategy where dev branches are created and PRs are merged into the main branch to deploy.

Configuration Files

You would have separate configuration.extractor files for each API or API group (process groups/APIs that support an application/vendor):

  • Example: configuration.extractor.api1.yaml and configuration.extractor.api2.yaml
  • Each file lists only what is needed for that particular API
  • Named values and APIs exclusive to api1 would be defined in configuration.extractor.api1.yaml
  • This ensures you only extract and deploy that specific API without affecting other APIs that may not be ready or authorized for deployment
  • When you run the extractor you would specify configuration.extractor.api1.yaml OR configuration.extractor.api2.yaml to pull in only those resources.

Environment Configuration

You need environment-specific configuration files to replace dev values when publishing:

  • The configuration.{env}.yaml file is EXTREMELY important to the process
  • Think of it as your web.config file
  • It allows you to deploy to the next environment with the correct configs

Two options for environment files:

  1. Monolithic configuration.{env}.yaml files with all environment config values
  2. Separate files for each API and environment (must select the correct file when running the publisher)

Drawbacks

  • Artifacts folder inconsistency: Only includes what you last extracted, removing previously extracted content (e.g., extracting and publishing API2 removes API1 artifacts)
  • Branch history issues: Main branch history looks inconsistent, always jumping around to what was deployed last
  • Easy to make mistakes: Doesn't tell a good story in git history

Note: We started with this approach and didn't love it.

Image

Approach 2: Multiple "Main" Branches (Recommended)

This approach separates each API/API grouping into its own "main" branch.

Branch Structure

  • Branch naming convention: api/main-api1, api/main-api2, etc.
  • Reason for "api/" prefix: Allows you to apply Azure DevOps policies to the branch hierarchy

Configuration Files

  • One configuration.extractor.yaml file per API branch
  • Lists everything relevant to that specific API
  • One configuration.{env}.yaml file per environment
  • Contains only configurations belonging to that API

Workflow

Pull Requests created by the extractor can either:

  1. Pull into a dev branch (e.g., dev/api1-workdesc), then merge into api/main-api1
  2. Create the PR directly into the main branch (api/main-api1)

Benefits

  • Clean separation: Maintains a good history for each API
  • Reduced errors: More confidence that deployments only affect the intended API
  • Lower risk: Avoids copy-paste errors by not maintaining monolithic configuration files across all APIs

Drawbacks

  • Pipeline maintenance: When updating pipelines, you need to update each main branch, which can be annoying as APIs grow
  • Note: Pipeline upgrades are infrequent (bi-yearly or annually), so this may be acceptable
Image

Final Thoughts

APIM is interesting because you are deploying both code and infrastructure, which tend to have opposing philosophies. Maybe what is needed is a publisher config file used not just for env variables but also to decide what gets published. It would be similar to the extractor configuration file, but instead of saying what you want to pull down, you specify what you want to publish/deploy. To ensure we are backing up everything, we have a branch called archive. I have a Logic App trigger the extractor pipeline for the archive branch, which just extracts everything and completes the PR created by the extractor into the archive branch. This way we have a full picture of all our APIM resources for recovery scenarios.

Here are some general extractor and publisher configuration files.

Extractor configuration

Image

Extractor pipeline param example for Approach 2

Image

Publisher configuration for Test env.

Image

megamax34 avatar Dec 10 '25 20:12 megamax34