
Auto-deploy task logs can grow to a size that causes Octopus to time out when trying to load the task log

hnrkndrssn opened this issue 5 months ago

Severity

Blocking for customers that do a lot of auto-deployments. The workaround is not great, but it's all we can do for now.

Version

Reported in 2025.2, but it has likely been present since the auto-deploy feature was introduced

Latest Version

I could reproduce the problem in the latest build

What happened?

We use auto-triggered deployments for many of our services that are deployed to EC2 instances in AWS. As hosts scale out, the auto-deploy triggers start deployments of the latest live release to those hosts.

In a few cases, we have services that are deployed to every host in an environment, including some queuing retry services and common assets for UI deployments. As you would expect, these apps generate a large number of triggered deployments.

The issue we run into is that as these deployments happen, the task log grows quite large, because Octopus appends each auto-triggered deployment's log onto the end of the last 'manual' deployment's task log. Since some of these services are deployed to every host we have, these logs quickly become huge, eventually so large that the Octopus deployment either times out or takes so long that our automatic bootstrapping processes time out.
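
To give a sense of scale, the sketch below is one way to check how big one of these task logs has become via the API. It is not part of our actual setup: the `/api/tasks/{id}/raw` usage, server URL, API key, and task ID are all assumptions/placeholders.

```python
# Rough sketch: measure how large a server task's raw log has grown.
# Assumptions (not from the original report): /api/tasks/{id}/raw returns
# the raw task log, and OCTOPUS_URL / OCTOPUS_API_KEY / TASK_ID are
# placeholders for your own instance.
import requests

OCTOPUS_URL = "https://octopus.example.com"   # hypothetical server URL
OCTOPUS_API_KEY = "API-XXXXXXXXXXXXXXXX"      # hypothetical API key
TASK_ID = "ServerTasks-12345"                 # the auto-deploy task to inspect

resp = requests.get(
    f"{OCTOPUS_URL}/api/tasks/{TASK_ID}/raw",
    headers={"X-Octopus-ApiKey": OCTOPUS_API_KEY},
    timeout=300,  # large logs can take a long time to stream back
)
resp.raise_for_status()

size_mb = len(resp.content) / (1024 * 1024)
print(f"{TASK_ID}: raw task log is {size_mb:.1f} MiB")
```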

Reproduction

  1. Create a release and manually deploy it
  2. Configure an auto-deploy trigger (see the sketch after this list) and scale hosts out repeatedly so the release is auto-deployed to each new host, until the task log grows too large to load into memory and Octopus times out.
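
For reference, the trigger in step 2 could be created via the REST API along these lines. This is only a rough sketch: the `/api/projecttriggers` endpoint and the `Filter`/`Action` field shapes are assumptions that may differ between Octopus versions, and all IDs and names are placeholders.

```python
# Rough sketch of step 2: create a deployment target trigger that
# auto-deploys when new machines become available. The payload shape
# below is an assumption, not taken from the original report.
import requests

OCTOPUS_URL = "https://octopus.example.com"   # hypothetical server URL
OCTOPUS_API_KEY = "API-XXXXXXXXXXXXXXXX"      # hypothetical API key

trigger = {
    "ProjectId": "Projects-1",                # placeholder project
    "Name": "Auto-deploy on scale out",
    "IsDisabled": False,
    "Filter": {
        "FilterType": "MachineFilter",
        "EnvironmentIds": ["Environments-1"],
        "Roles": ["web-server"],
        "EventGroups": ["MachineAvailableForDeployment"],
    },
    "Action": {
        "ActionType": "AutoDeploy",
        "ShouldRedeployWhenMachineHasBeenDeployedTo": False,
    },
}

resp = requests.post(
    f"{OCTOPUS_URL}/api/projecttriggers",
    headers={"X-Octopus-ApiKey": OCTOPUS_API_KEY},
    json=trigger,
    timeout=30,
)
resp.raise_for_status()
print("Created trigger:", resp.json().get("Id"))
```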

Error and Stacktrace


More Information

No response

Workaround

Manually create and deploy a new release from time to time so that auto-deploys start appending to a fresh task log, preventing the log from growing so large that Octopus times out when loading it.
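
A rough sketch of automating that workaround against the REST API, assuming the standard release and deployment endpoints; the project ID, environment ID, and version number are placeholders for illustration only.

```python
# Rough sketch of the workaround: periodically cut and deploy a fresh
# release so auto-deploys start appending to a new, small task log.
# IDs, the version number, and payload shapes are assumptions; adjust
# for your own instance.
import requests

OCTOPUS_URL = "https://octopus.example.com"   # hypothetical server URL
OCTOPUS_API_KEY = "API-XXXXXXXXXXXXXXXX"      # hypothetical API key
HEADERS = {"X-Octopus-ApiKey": OCTOPUS_API_KEY}

# 1. Create a new release for the project (version is a placeholder).
release = requests.post(
    f"{OCTOPUS_URL}/api/releases",
    headers=HEADERS,
    json={"ProjectId": "Projects-1", "Version": "1.0.42"},
    timeout=30,
)
release.raise_for_status()
release_id = release.json()["Id"]

# 2. Deploy it manually once; subsequent auto-deploys then append to this
#    deployment's task log instead of the old, oversized one.
deployment = requests.post(
    f"{OCTOPUS_URL}/api/deployments",
    headers=HEADERS,
    json={"ReleaseId": release_id, "EnvironmentId": "Environments-1"},
    timeout=30,
)
deployment.raise_for_status()
print("Deployment task:", deployment.json()["TaskId"])
```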

hnrkndrssn — Aug 04 '25 01:08