Using Environments without creating deployment automatically
Current situation: Every pipeline job that uses an environment automatically creates a new deployment. This seems to be intended behavior.
Problem
Access to an environment might also be needed for reasons other than deployments, such as running integration tests (the deployment is already done; we want to verify the correct behavior of the latest deployment).
Possible Solution
Can we add an option to avoid the automatic deployment whenever an environment is used? An idea might be to set an environment variable like AUTO_DEPLOYMENT=false.
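A minimal sketch of where such an opt-out might sit in a workflow; note that AUTO_DEPLOYMENT is the hypothetical variable proposed above, not something GitHub Actions supports today:

    jobs:
      integration-tests:
        runs-on: ubuntu-latest
        # the environment is only needed for its scoped secrets/variables
        environment: staging
        env:
          # hypothetical opt-out proposed above; NOT a real GitHub Actions feature
          AUTO_DEPLOYMENT: false
        steps:
          - run: ./run-integration-tests.sh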
Additional information
The exact same thing bugs me as well.
I would even go one step further and decouple the deployments from environments completely by default.
Imho every workflow should be able to be a deployment task - or not. Instead of using the environment to determine this, we could just have another string key in the yaml (similar to the "concurrency" key).
I have the same problem. I want to limit deployments to protected branches after the pull request has been merged, but I also want to use environment secrets at pull-request time. Specifically, the case is executing terraform plan in a pull request and terraform apply in an action that runs on a protected branch after the pull request is merged. Currently, it is not possible to restrict the deployment to a protected branch while sharing the environment secrets with actions on non-protected branches.
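A rough sketch of this split, assuming a single protected main branch and a production environment (all names and the Terraform steps are illustrative); as described above, both jobs would still show up as deployments today because both reference the environment:

    name: terraform
    on:
      pull_request:
      push:
        branches: [main]
    jobs:
      plan:
        if: github.event_name == 'pull_request'
        runs-on: ubuntu-latest
        environment: production   # only needed here for the environment secrets
        steps:
          - uses: actions/checkout@v4
          - run: terraform init && terraform plan
      apply:
        if: github.event_name == 'push'
        runs-on: ubuntu-latest
        environment: production   # this one is the actual deployment
        steps:
          - uses: actions/checkout@v4
          - run: terraform init && terraform apply -auto-approve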
Temporary workaround: use GitHub's API in Actions to delete all deployments that match our SHA. We can use the github-script action, which lets us use octokit.js in the action. Our job will need the deployments: write permission.
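For example, a minimal permissions block (it can sit at the workflow or job level):

    permissions:
      deployments: write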
    - name: Delete Previous deployments
      uses: actions/github-script@v6
      with:
        script: |
          const deployments = await github.rest.repos.listDeployments({
            owner: context.repo.owner,
            repo: context.repo.repo,
            sha: context.sha
          });
          await Promise.all(
            deployments.data.map(async (deployment) => {
              # we can only delete inactive deployments, so let's deactivate them first
              await github.rest.repos.createDeploymentStatus({
                owner: context.repo.owner,
                repo: context.repo.repo,
                deployment_id: deployment.id,
                state: 'inactive'
              });
              return github.rest.repos.deleteDeployment({
                owner: context.repo.owner,
                repo: context.repo.repo,
                deployment_id: deployment.id
              });
            })
          );
Same: would love to be able to disable it with a YAML property like autodeploy: false.
The workaround above works nicely, but the following line needs to be removed for it to work:
# we can only delete inactive deployments, so let's deactivate them first
This would be great, my pull requests currently look like this:
We use environments for builds etc as well (secrets), so it becomes a mess very quickly. Being able to specify the environment at the top level (before the jobs) might also help a bit, but ideally it would be possible to do something like this in the job definition:
    environment:
      name: dev
      url: https://github.com
      deployment: false
@jameslounds what about creating an Action for it in the Marketplace?
We already have a ton of actions for deployment creation and status updates. It is not necessary to create a new one. But my issue now is that when I use a custom deployment action for more control over the deployment status, I end up with a duplicated deployment in the history. Having one of these would help:
- A variable in the GitHub context with deployment_id, so actions can check that environment was used in the configuration and reuse the existing deployment_id instead of creating a new one (a rough approximation with today's API is sketched below).
- An option to disable the automatic deployment report when environment is defined in a job.
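Until something like the first option exists, it can be approximated with the existing REST API by looking up the deployment that the environment key already created for the current commit and reporting status on it instead of creating another. A minimal sketch; the environment name and URL are illustrative:

    - name: Reuse existing deployment instead of creating a new one
      uses: actions/github-script@v6
      with:
        script: |
          // find the deployment GitHub created automatically for this sha + environment
          const deployments = await github.rest.repos.listDeployments({
            owner: context.repo.owner,
            repo: context.repo.repo,
            sha: context.sha,
            environment: 'production'   // illustrative environment name
          });
          if (deployments.data.length > 0) {
            // report status on the existing deployment rather than creating another
            await github.rest.repos.createDeploymentStatus({
              owner: context.repo.owner,
              repo: context.repo.repo,
              deployment_id: deployments.data[0].id,
              state: 'success',
              environment_url: 'https://example.com'   // illustrative URL
            });
          }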
Just FYI, if you're using @jameslounds' workaround for workflows that run on pull requests, you need to modify the script, since GITHUB_SHA is the last merge commit, not the head commit that was just pushed to the PR branch (which is what the deployments are created with). You can add github.event.pull_request.head.sha as an environment variable to the action:
    delete_github_deployments:
      runs-on: ubuntu-latest
      needs: run_tests
      if: ${{ always() }}
      steps:
        - name: Delete Previous deployments
          uses: actions/github-script@v6
          env:
            GITHUB_SHA_HEAD: ${{ github.event.pull_request.head.sha }}
          with:
            script: |
              const { GITHUB_SHA_HEAD } = process.env
              const deployments = await github.rest.repos.listDeployments({
                owner: context.repo.owner,
                repo: context.repo.repo,
                sha: GITHUB_SHA_HEAD
              });
              await Promise.all(
                deployments.data.map(async (deployment) => {
                  await github.rest.repos.createDeploymentStatus({
                    owner: context.repo.owner,
                    repo: context.repo.repo,
                    deployment_id: deployment.id,
                    state: 'inactive'
                  });
                  return github.rest.repos.deleteDeployment({
                    owner: context.repo.owner,
                    repo: context.repo.repo,
                    deployment_id: deployment.id
                  });
                })
              );
We had to add this as a job that runs at the end of the workflow instead of as the last step of the other jobs, because the deployment didn't always seem to be deleted in that case. So you still see the message on the PRs until the whole pipeline runs, which can still be confusing for folks, but it seems like the best you can do for now.
@nagibyro ~~Many thanks! So if I have 4 .yaml files, I add this to their end?~~
Yes! But probably just putting it on the longest one would do it. Maybe there is a way to create a workflow that runs after all of them?
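There is: a workflow_run trigger fires after other named workflows complete (note it only triggers from the default branch). A minimal sketch that reuses the deletion script from earlier in the thread; the workflow names are illustrative:

    # cleanup.yml - runs after the listed workflows finish, then deletes stale deployments
    name: cleanup-deployments
    on:
      workflow_run:
        workflows: ["CI", "Deploy"]   # illustrative names of the other workflows
        types: [completed]
    permissions:
      deployments: write
    jobs:
      delete_github_deployments:
        runs-on: ubuntu-latest
        steps:
          - name: Delete previous deployments
            uses: actions/github-script@v6
            with:
              script: |
                const deployments = await github.rest.repos.listDeployments({
                  owner: context.repo.owner,
                  repo: context.repo.repo,
                  sha: context.payload.workflow_run.head_sha
                });
                await Promise.all(deployments.data.map(async (d) => {
                  await github.rest.repos.createDeploymentStatus({
                    owner: context.repo.owner, repo: context.repo.repo,
                    deployment_id: d.id, state: 'inactive'
                  });
                  return github.rest.repos.deleteDeployment({
                    owner: context.repo.owner, repo: context.repo.repo, deployment_id: d.id
                  });
                }));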
I ran into a similar version of this where our deploy workflows are manually triggered (either via UI or GH CLI) and we supply a specific release ref to deploy; the deployments created by the workflow show the current state of main as the deployed ref, when in fact they should show the ref of the specified release.
I made this Action to handle our case, but feel free to fork it for your own purposes; hope it helps others until this gets sorted
This action worked for me. It optionally deletes deployments and environments. You decide what to delete. strumwolf/delete-deployment-environment
Any comments from the GitHub side regarding this issue? It was reported 9 months ago.
The workarounds presented here work, but they are, well... workarounds, and I feel like this should be implemented by GitHub Actions itself. There are several scenarios where a workflow may target an environment but is not a real deployment, like running a Terraform plan or running integration tests, as mentioned before.
Same here, any answer for Enterprise customers?
Have the same problem. Also, on GHES. Would be great to have any updates on this. Thanks.
Same problem here. We are running automated selenium tests with stage-dependent credentials and these "fake deployments" really mess things up as they are not only displayed for PRs, but also in our Jira integration
I'm having the same problem with the Jira integration, each Job in the Workflow is treated as a different deployment.
+1 from here as well. Terraform plans are a good example of targeting an environment but not "deploying" to it. We really need a flag to specify if this is a full deployment or not
+1, our workflows to bring down our infrastructure act as deployments due to this.
Our account is bloated with ephemeral environments/deployments simply because we want to use some environment variables during setup.
Please add a feature to remove these deployments; if not, it would be great to auto-delete them after the PR is merged.
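In the meantime, auto-cleanup after merge can be approximated with a workflow triggered when the pull request is closed, running the same deletion step shown earlier in the thread; a minimal sketch:

    name: cleanup-pr-deployments
    on:
      pull_request:
        types: [closed]   # fires when the PR is merged or closed
    permissions:
      deployments: write
    jobs:
      delete_deployments:
        runs-on: ubuntu-latest
        steps:
          # same github-script deletion as earlier, pointed at the PR's head commit
          - uses: actions/github-script@v6
            env:
              GITHUB_SHA_HEAD: ${{ github.event.pull_request.head.sha }}
            with:
              script: |
                const { GITHUB_SHA_HEAD } = process.env
                const deployments = await github.rest.repos.listDeployments({
                  owner: context.repo.owner, repo: context.repo.repo, sha: GITHUB_SHA_HEAD
                });
                await Promise.all(deployments.data.map(async (d) => {
                  await github.rest.repos.createDeploymentStatus({
                    owner: context.repo.owner, repo: context.repo.repo,
                    deployment_id: d.id, state: 'inactive'
                  });
                  return github.rest.repos.deleteDeployment({
                    owner: context.repo.owner, repo: context.repo.repo, deployment_id: d.id
                  });
                }));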
+1 from here as well. Terraform plans are a good example of targeting an environment but not "deploying" to it. We really need a flag to specify if this is a full deployment or not
This is my exact issue as well. I run "Terraform Plan" for multiple environments on PRs, and since they all count as deployments, it pollutes the PR.
Same here! Things get even more annoying when you've configured deployment approvals, too. In those situations you also have to approve each and every job that uses the environment but isn't really deploying anything...
Therefore: +1 for a deployment: false property or something like this. Or even better, follow GitLab's approach: https://docs.gitlab.com/ee/ci/yaml/#environmentaction
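For comparison, GitLab lets a job declare how it relates to an environment via environment:action; per the docs linked above, values like prepare, verify, and access mean the job uses the environment without triggering a deployment. Roughly (GitLab CI YAML, names illustrative):

    integration-tests:
      stage: test
      script:
        - ./run-integration-tests.sh
      environment:
        name: staging
        action: prepare   # 'start' (the default) would create a deployment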
+1 from here as well. Terraform plans are a good example of targeting an environment but not "deploying" to it. We really need a flag to specify if this is a full deployment or not
Exactly!
See also: https://github.com/orgs/community/discussions/36919#discussioncomment-6852220
I have this same issue with a workflow that runs Terraform Plan.
We used the script suggested here (using the GH API) to manually delete the environments periodically. It's really bad that this isn't available as an option built into the GitHub repo...
We are getting great value out of the environment-specific variables for our frontend tests/builds, and are also continuously confused by the deployment statuses. Is there perhaps a way to indicate the context for the job, so we can access the right vars, without using the environment key in its definition?
It's currently basically impossible to have conversations on PRs because every conversation is completely flooded with this:
At the very least, runs of consecutive identical deployments should be automatically collapsed.
I feel like those 2 issues are related and would benefit from this one. https://github.com/orgs/community/discussions/67727
https://github.com/orgs/community/discussions/67728
Would be great to have any updates on this.