
Using environments without automatically creating a deployment

Open LaurenzReitsam opened this issue 2 years ago • 49 comments

Current Situation: Every pipeline job that uses an environment automatically creates a new deployment. This seems to be intended behavior.

Problem: Access to an environment may also be needed for reasons other than deployments, such as running integration tests (the deployment is already done; we want to verify correct behavior of the latest deployment).

Possible Solution: Can we add an option to avoid the automatic deployment whenever an environment is used? An idea might be to set an environment variable like AUTO_DEPLOYMENT=false.
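
A minimal sketch of what that could look like in a job definition; note that AUTO_DEPLOYMENT is purely hypothetical and has no effect today, and the job and script names are placeholders:

jobs:
  integration-tests:
    runs-on: ubuntu-latest
    environment: staging        # needed only for its secrets and variables
    env:
      AUTO_DEPLOYMENT: "false"  # proposed opt-out; not an existing feature
    steps:
      - run: ./run-integration-tests.sh   # assumed test entry point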

Additional information

LaurenzReitsam avatar Sep 14 '22 16:09 LaurenzReitsam

The exact same thing bugs me as well.

I would even go one step further and decouple the deployments from environments completely by default.

Imho every workflow should be able to act as a deployment task - or not. Instead of using the environment key to determine this, we could just have another string key in the YAML (similar to the "concurrency" key).

flobernd avatar Sep 14 '22 17:09 flobernd

I have the same problem. I want to limit deployments to protected branches after the pull request has been merged, but I also want to use environment secrets at the time of the pull request. Specifically, it is a case of executing terraform plan in a pull request and terraform apply in an action that runs on a protected branch after the pull request is merged. Currently, it is not possible to limit deployments to a protected branch while sharing the environment secrets with actions on non-protected branches.
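
To illustrate, a rough sketch of that setup (triggers, job names, and the environment name are assumptions); today both jobs record a deployment because both reference the environment, even though only apply actually deploys:

on:
  pull_request:        # terraform plan on pull requests
  push:
    branches: [main]   # terraform apply after merge to the protected branch

jobs:
  plan:
    if: github.event_name == 'pull_request'
    runs-on: ubuntu-latest
    environment: production   # needed only for its secrets, yet still shows up as a deployment
    steps:
      - uses: actions/checkout@v3
      - run: terraform init && terraform plan

  apply:
    if: github.event_name == 'push'
    runs-on: ubuntu-latest
    environment: production   # the only job that should be recorded as a deployment
    steps:
      - uses: actions/checkout@v3
      - run: terraform init && terraform apply -auto-approve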

civitaspo avatar Oct 13 '22 02:10 civitaspo

Temporary Workaround: use GitHub's API in Actions to delete all deployments that match our SHA. We can use the github-script action, which lets us use octokit.js in the action. Our job will need the deployments: write permission.

- name: Delete Previous deployments
  uses: actions/github-script@v6
  with:
    script: |
      const deployments = await github.rest.repos.listDeployments({
        owner: context.repo.owner,
        repo: context.repo.repo,
        sha: context.sha
      });
      await Promise.all(
        deployments.data.map(async (deployment) => {
          // we can only delete inactive deployments, so let's deactivate them first
          await github.rest.repos.createDeploymentStatus({
            owner: context.repo.owner,
            repo: context.repo.repo,
            deployment_id: deployment.id,
            state: 'inactive'
          });
          return github.rest.repos.deleteDeployment({
            owner: context.repo.owner,
            repo: context.repo.repo,
            deployment_id: deployment.id
          });
        })
      );
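
For reference, the deployments: write grant mentioned above sits at the job (or workflow) level; a minimal sketch, assuming a job named cleanup_deployments:

jobs:
  cleanup_deployments:
    runs-on: ubuntu-latest
    permissions:
      deployments: write   # required by createDeploymentStatus and deleteDeployment
    steps:
      - name: Delete Previous deployments
        uses: actions/github-script@v6
        with:
          script: |
            // ... the script from above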

jameslounds avatar Dec 03 '22 16:12 jameslounds

Same: would love to be able to disable this with a YAML property like autodeploy: false.

tianhuil avatar Dec 15 '22 22:12 tianhuil

Temporary Workaround: use GitHub's API in Actions to delete all deployments that match our SHA. We can use the github-script action, which lets us use octokit.js in the action. Our job will need the deployments: write permission.

Works nicely, but note that the comment inside the script block has to use JavaScript's // syntax; with a # comment on that line the github-script step fails.

berkeli avatar Dec 27 '22 19:12 berkeli

This would be great, my pull requests currently look like this: [screenshot: a PR timeline filled with deployment entries]

We use environments for builds etc. as well (secrets), so it becomes a mess very quickly. Being able to specify the environment at the top level (before the jobs) might also help a bit, but ideally it would be possible to do something like this in the job definition:

environment:
  name: dev
  url: https://github.com
  deployment: false

yusijs avatar Jan 23 '23 12:01 yusijs

@jameslounds what about creating an Action for it in the Marketplace?

ftzi avatar Jan 24 '23 21:01 ftzi

We already have a ton of actions for deployment creation and status updates, so it is not necessary to create a new one. But my issue now is that when I use a custom deployment action for more control over the deployment status, I end up with a duplicate deployment in the history. Having one of these would help:

  • A variable in the GitHub context with the deployment_id, so actions can detect that an environment was used in the configuration and reuse the existing deployment_id instead of creating a new one (see the sketch after this list).
  • An option to disable the automatic deployment report when an environment is defined in a job.
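
A rough sketch of the first idea, using only the existing listDeployments and createDeploymentStatus endpoints: look up the deployment that the environment key already created for this commit and update it, rather than creating another one. The step name, environment name, and URL below are placeholders.

- name: Reuse existing deployment
  uses: actions/github-script@v6
  with:
    script: |
      // find the deployment that the environment key already created for this commit
      const { data: deployments } = await github.rest.repos.listDeployments({
        owner: context.repo.owner,
        repo: context.repo.repo,
        sha: context.sha,
        environment: 'production'   // assumed environment name
      });
      if (deployments.length > 0) {
        // report status against the existing deployment rather than creating a new one
        await github.rest.repos.createDeploymentStatus({
          owner: context.repo.owner,
          repo: context.repo.repo,
          deployment_id: deployments[0].id,
          state: 'success',
          environment_url: 'https://example.com'   // placeholder URL
        });
      }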

constgen avatar Jan 25 '23 17:01 constgen

Just FYI, if you're using @jameslounds' workaround for workflows that run on pull requests, you need to modify the script, since GITHUB_SHA is the merge commit, not the head commit that was just pushed to the PR branch (which is what the deployments are created against). You can pass github.event.pull_request.head.sha to the action as an environment variable:

  delete_github_deployments:
    runs-on: ubuntu-latest
    needs: run_tests
    if: ${{ always() }}
    steps:
      - name: Delete Previous deployments
        uses: actions/github-script@v6
        env:
          GITHUB_SHA_HEAD: ${{ github.event.pull_request.head.sha }}
        with:
          script: |
            const { GITHUB_SHA_HEAD } = process.env
            const deployments = await github.rest.repos.listDeployments({
              owner: context.repo.owner,
              repo: context.repo.repo,
              sha: GITHUB_SHA_HEAD
            });
            await Promise.all(
              deployments.data.map(async (deployment) => {
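                // we can only delete inactive deployments, so deactivate them first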
                await github.rest.repos.createDeploymentStatus({ 
                  owner: context.repo.owner, 
                  repo: context.repo.repo, 
                  deployment_id: deployment.id, 
                  state: 'inactive' 
                });
                return github.rest.repos.deleteDeployment({
                  owner: context.repo.owner,
                  repo: context.repo.repo,
                  deployment_id: deployment.id
                });
              })
            );

We had to add this as a job that runs at the end of the workflow, rather than as the last step of the other jobs, because the deployment didn't always seem to be deleted in that case. So you still see the message on the PRs until the whole pipeline has run, which can still be confusing for folks, but it seems like the best you can do for now.

nagibyro avatar Jan 27 '23 17:01 nagibyro

@nagibyro ~~Many thanks! So if I have 4 .yaml files, I add this to their end?~~

Yes! But probably just putting it on the longest one would do it. Maybe there is a way to create a workflow that runs after all of them?

ftzi avatar Jan 27 '23 17:01 ftzi

I ran into a similar version of this where our deploy workflows are manually triggered (either via UI or GH CLI) and we supply a specific release ref to deploy; the deployments created by the workflow show the current state of main as the deployed ref, when in fact they should show the ref of the specified release.

I made this Action to handle our case, but feel free to fork it for your own purposes; hope it helps others until this gets sorted

kylebjordahl avatar Feb 08 '23 23:02 kylebjordahl

This action worked for me. It optionally deletes deployments and environments. You decide what to delete. strumwolf/delete-deployment-environment

amos-kibet avatar Apr 06 '23 10:04 amos-kibet

Any comments from the GitHub side regarding this issue? It was reported 9 months ago.

The workarounds presented here work, but they are, well... workarounds, and I feel this should be handled by GitHub Actions itself. There are several scenarios where a workflow may target an environment without being a real deployment, like running a Terraform plan or running integration tests, as mentioned before.

kamilzzz avatar May 06 '23 22:05 kamilzzz

Same here, any answer for Enterprise customers?

Hronom avatar May 29 '23 17:05 Hronom

Have the same problem. Also, on GHES. Would be great to have any updates on this. Thanks.

fabasoad avatar Jun 02 '23 04:06 fabasoad

Same problem here. We are running automated Selenium tests with stage-dependent credentials, and these "fake deployments" really mess things up, as they are displayed not only on PRs but also in our Jira integration.

MichaelMHoff avatar Jun 08 '23 08:06 MichaelMHoff

I'm having the same problem with the Jira integration, each Job in the Workflow is treated as a different deployment.

danielsantiago avatar Jun 08 '23 14:06 danielsantiago

+1 from here as well. Terraform plans are a good example of targeting an environment but not "deploying" to it. We really need a flag to specify if this is a full deployment or not

craig-king avatar Jun 12 '23 11:06 craig-king

+1, our workflows to bring down our infrastructure act as deployments due to this.

JonathanAtCenterEdge avatar Jul 05 '23 13:07 JonathanAtCenterEdge

Our account is bloated with ephemeral environments/deployments simply because we want to use some environment variables for setup.

Please add a feature to remove these deployments; if not, it would be great to auto-delete them after the PR is merged.

bombillazo avatar Jul 06 '23 04:07 bombillazo

+1 from here as well. Terraform plans are a good example of targeting an environment but not "deploying" to it. We really need a flag to specify if this is a full deployment or not

This is my exact issue as well. I run "Terraform Plan" for multiple environments on PRs, and since they all count as deployments, it pollutes the PR.

hknutsen avatar Jul 08 '23 10:07 hknutsen

+1 from here as well. Terraform plans are a good example of targeting an environment but not "deploying" to it. We really need a flag to specify if this is a full deployment or not

This is my exact issue as well. I run "Terraform Plan" for multiple environments on PRs, and since they all count as deployments, it pollutes the PR.

Same here! Things get even more annoying when you've configured deployment approvals, too. In these situations you also have to approve each and every job that uses the environment but isn't really deploying anything...

Therefore: +1 for the deployment: false property or something like it. Or even better, follow GitLab's approach: https://docs.gitlab.com/ee/ci/yaml/#environmentaction
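
For comparison, a minimal sketch of the GitLab approach linked above, where environment:action lets a job reference an environment without being recorded as a deployment (job name and script are assumptions):

integration-tests:
  stage: test
  script:
    - ./run-integration-tests.sh   # assumed test entry point
  environment:
    name: staging
    action: prepare   # uses the environment without creating a deployment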

tinogo avatar Jul 24 '23 08:07 tinogo

+1 from here as well. Terraform plans are a good example of targeting an environment but not "deploying" to it. We really need a flag to specify if this is a full deployment or not

Exactly!

AustinZhu avatar Aug 17 '23 18:08 AustinZhu

See also: https://github.com/orgs/community/discussions/36919#discussioncomment-6852220

mkarbo avatar Aug 30 '23 10:08 mkarbo

I have this same issue with a workflow that runs Terraform Plan.

TrevorSmith-msr avatar Oct 26 '23 17:10 TrevorSmith-msr

We used the script suggested here (using the GH API) to manually delete the deployments periodically. It's really bad that this isn't available as a built-in option on the GH repo...
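
For anyone wanting to automate that, a sketch of running the same cleanup on a schedule instead of manually (cron value and job name are assumptions; note that this version deletes every deployment in the repository, so filter as needed):

on:
  schedule:
    - cron: '0 3 * * 0'    # weekly; adjust as needed
  workflow_dispatch: {}    # keep a manual trigger as well

jobs:
  cleanup_deployments:
    runs-on: ubuntu-latest
    permissions:
      deployments: write
    steps:
      - name: Delete old deployments
        uses: actions/github-script@v6
        with:
          script: |
            const deployments = await github.rest.repos.listDeployments({
              owner: context.repo.owner,
              repo: context.repo.repo
            });
            for (const deployment of deployments.data) {
              // deployments must be marked inactive before they can be deleted
              await github.rest.repos.createDeploymentStatus({
                owner: context.repo.owner,
                repo: context.repo.repo,
                deployment_id: deployment.id,
                state: 'inactive'
              });
              await github.rest.repos.deleteDeployment({
                owner: context.repo.owner,
                repo: context.repo.repo,
                deployment_id: deployment.id
              });
            }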

bombillazo avatar Oct 26 '23 19:10 bombillazo

We are getting great value out of the environment-specific variables for our frontend tests/builds, and are also continually confused by the deployment statuses. Is there perhaps a way to indicate the context for a job, so we can access the right vars, without using the environment key in its definition?

godd9170 avatar Nov 27 '23 17:11 godd9170

It's currently basically impossible to have conversations on PRs because every conversation is completely flooded with this:

[screenshot: a PR conversation flooded with deployment notifications]

At the very least, runs of consecutive identical deployments should be automatically collapsed.

stevage avatar Nov 29 '23 10:11 stevage

I feel like those two discussions are related and would benefit from this one: https://github.com/orgs/community/discussions/67727

https://github.com/orgs/community/discussions/67728

mikocot avatar Jan 12 '24 15:01 mikocot

Would be great to have any updates on this.

johanmolen avatar Feb 07 '24 08:02 johanmolen