Changes to deploy output?
Summary
Have there been recent changes to the deploy console output again?
Since the most recent build, the output is much less clear than it was in the previous version.
Before: clear output that says how many components and tests were run:
After: output that probably looks nice in an interactive terminal but is much harder to read in CI logs (where deploy is most commonly run):
Version: 2.63.7
Hello @gdman :wave: It looks like you didn't include the full Salesforce CLI version information in your issue.
Please provide the output of version --verbose --json for the CLI you're using (sf or sfdx).
A few more things to check:
- Make sure you've provided detailed steps to reproduce your issue.
- A repository that clearly demonstrates the bug is ideal.
- Make sure you've installed the latest version of Salesforce CLI. (docs)
- Better yet, try the rc or nightly versions. (docs)
- Try running the doctor command to diagnose common issues.
- Search GitHub for existing related issues.
Thank you!
Thank you for filing this issue. We appreciate your feedback and will review the issue as soon as possible. Remember, however, that GitHub isn't a mechanism for receiving support under any agreement or SLA. If you require immediate assistance, contact Salesforce Customer Support.
I'd also like to add that while this change is nice in a normal terminal, it makes my CI output completely unreadable and the page is a mile long. I was hoping SF_USE_PROGRESS_BAR would help, but that env var seems to do nothing now.
This issue has been linked to a new work item: W-17060727
@smithmd and @gdman - Thank you for your feedback. We're working on a fix now.
Once fixed, the CI output will look like this:
► Deploying Metadata...
Components: 21/30 (70%)
Components: 28/30 (93%)
Components: 29/30 (97%)
Components: 30/30 (100%)
► Running Tests...
Tests: 4/1377 (0%)
Tests: 8/1377 (1%)
Tests: 70/1377 (5%)
And will follow these rules (see the sketch after this list):
- Polling for status updates will occur every five seconds
- This interval can be modified via an ENV variable.
- Updates, like Components: 21/30 (70%), will only print when the information changes, preventing long pages of output.
- If five minutes pass with no updates, the last update will be printed again.
- This serves as a "heartbeat" for folks with "no output timeout" configurations in their CI system. See #1126 for additional context.
- This "heartbeat" interval can be modified via an ENV variable.
Feedback or questions?
Please let us know if you think the proposed fixes will work or if we've missed anything.
Thank you!
That sounds great to me. Thanks for the fast response!
@VivekMChawla
I had one thought overnight. How are component/test failures handled? Is there immediate feedback in the build output or does the user need to wait until the end?
There have been a few iterations of deploy logging over the years; some provided instant logging of a component or test failure, while others only said there was an error without saying what it was, or showed nothing until the end. Deploy processes can be long (test runs can take hours), so the sooner the developer knows about a failure, the better. For context, the developers don't necessarily have access to the org where a build is being validated, so the logs are the only thing they have to go on.
@VivekMChawla Is it possible to keep the old output as the default behaviour and introduce a switch for those who want this "better", more detailed output? With changes like this to the default behaviour of the sf CLI (not just output formatting), you're making the CLI less stable and predictable across new releases.
Regarding this particular change: after upgrading to the latest CLI version I got roughly 2 million lines of output in my CI/CD logs for metadata retrieve, validate, and deploy jobs. In some cases the browser tab simply crashes when I try to open jobs with logs that large.
Moreover, we've found that some of our heavier metadata retrieve jobs for large orgs are crashing with heap out of memory errors. All of this trouble came from a "better" output that I never asked for.
+1, we've also unexpectedly ended up with JavaScript heap out of memory errors. Changes like this should be introduced as optional beta features and tested thoroughly first.
@avesolovksyy - The fix described in my previous message will result in fewer output lines in CI/CD logs. I apologize for the manner in which this feature was delivered. The fix should be in the nightly build early next week.
We are also investigating the root cause of the out of memory issue and will address that as well.
@pgumiela-wbd - Regarding your JavaScript heap out of memory issue, can you share additional details? Specifically:
- Which command (deploy or retrieve) was running?
- Was this in a CI environment or an interactive terminal?
Getting out a fix for this is my top priority. Thank you both for your patience.
Hello @VivekMChawla, we also started to experience the JavaScript heap out of memory issue after this change in output format.
Command: sf project deploy start --dry-run -w 180
Environment: CI (Azure pipelines)
A single output block is quite long, and it is repeated every 5 seconds.
@janmaj-ibm @pgumiela-wbd @avesolovksyy @gdman @smithmd
Apologies for the inconvenience this caused you.
We decided to downgrade @salesforce/plugin-deploy-retrieve to 3.13.1 (the last version before these changes were introduced). Updating to 2.63.9 will get you back to the output you're used to.
Going forward, I have two PRs that address the issues/concerns you all brought up.
https://github.com/oclif/multi-stage-output/pull/52 will implement the changes that @VivekMChawla described in this comment.
https://github.com/oclif/table/pull/32 will address the out of memory issue that a couple of you are experiencing.
Those changes will go into our nightly build as soon as they're reviewed and tested, which will likely happen in the next couple of days. I will post back here once they're available in nightly so you can test it out if you'd like.
Yup, we're also experiencing the heap limit issue in CI now.
The latest nightly (2.65.8) now has all the changes we described above.
We hit an issue with some code that calls out to sf like so, as a kind of "fire and forget" (we do not read or flush the STDOUT buffers):
execSync(`sf project deploy start --source-dir ${tempDeployDir}`);
We don't do anything with the output from the process; we just check the exit code. Presumably because the code was run from a tty, the timer was updating very frequently on the assumption that someone would be watching the clock update.
Since we never use the output from the command, eventually the buffer simply overflows: a huge amount of data was being sent via STDOUT, which eventually crashed our code like so (I have truncated the log):
Error: Command failed: sf project deploy start --source-dir /var/folders/wn/g33972tn78g_53vkvvfmwn9c0000gq/T/deploySTzVh8
at genericNodeError (node:internal/errors:984:15)
at wrappedFn (node:internal/errors:538:14)
at checkExecSyncError (node:child_process:890:11)
at execSync (node:child_process:962:15)
at updateHistoryTracking (/Users/[email protected]/src/cpm/bin/updateHistoryTracking.js:142:5)
at Object.<anonymous> (/Users/[email protected]/src/cpm/bin/updateHistoryTracking.js:158:2)
at Module._compile (node:internal/modules/cjs/loader:1369:14)
at Module._extensions..js (node:internal/modules/cjs/loader:1427:10)
at Module.load (node:internal/modules/cjs/loader:1206:32)
at Module._load (node:internal/modules/cjs/loader:1022:12) {
status: 1,
signal: null,
output: [
null,
'\n' +
' ─────────────── Deploying Metadata ───────────────\n' +
'\n' +
'\n' +
' ◼ Preparing\n' +
' ◼ Waiting for the org to respond\n' +
' ◼ Deploying Metadata\n' +
' ◼ Running Tests\n' +
' ◼ Updating Source Tracking\n' +
' ◼ Done\n' +
'\n' +
' Status: ◜\n' +
' Elapsed Time: 0ms\n' +
'\n' +
'\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[G\n' +
' ─────────────── Deploying Metadata ───────────────\n' +
'\n' +
' Deploying v61.0 metadata to [email protected] using the v62.0 SOAP\n' +
' API.\n' +
'\n' +
' ⣾ Preparing 17ms\n' +
' ◼ Waiting for the org to respond\n' +
' ◼ Deploying Metadata\n' +
' ◼ Running Tests\n' +
' ◼ Updating Source Tracking\n' +
' ◼ Done\n' +
'\n' +
' Status: ◜\n' +
' Deploy ID: 0AfD600001ZhBLzKAN\n' +
' Target Org: [email protected]\n' +
' Elapsed Time: 36ms\n' +
'\n' +
'\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[G\n' +
' ─────────────── Deploying Metadata ───────────────\n' +
'\n' +
' Deploying v61.0 metadata to [email protected] using the v62.0 SOAP\n' +
' API.\n' +
'\n' +
' ⣾ Preparing 39ms\n' +
' ◼ Waiting for the org to respond\n' +
' ◼ Deploying Metadata\n' +
' ◼ Running Tests\n' +
' ◼ Updating Source Tracking\n' +
' ◼ Done\n' +
'\n' +
' Status: ◜\n' +
' Deploy ID: 0AfD600001ZhBLzKAN\n' +
' Target Org: [email protected]\n' +
' Elapsed Time: 65ms\n' +
'\n' +
'\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[1A\x1B[2K\x1B[G\n' +
<SNIP>
@geoffswift - Thank you for sharing the output of your logs. What you shared is extremely helpful.
Two quick questions:
- Is the script you're calling execSync() from running on someone's local machine or in a CI environment?
- Would it be acceptable to add the --json flag to your deploy start command? That should stop the human-readable output from clogging up the STDIO buffer.
FYI: @mdonnalley
The problem I reported was indeed on someone's local machine. I did use --json as the workaround to solve this issue.
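For anyone hitting the same thing, here is a minimal sketch of that workaround, assuming the same fire-and-forget execSync call from the earlier comment where only the exit code matters. The tempDeployDir value is a placeholder; the --json flag is the one suggested above, and the stdio option is standard Node.js child_process behaviour rather than anything specific to the CLI.

const { execSync } = require('node:child_process');

const tempDeployDir = '/tmp/deploy'; // placeholder; the earlier snippet builds this path dynamically

// Option 1: --json replaces the animated, ANSI-heavy progress output with a
// single JSON payload, so far less data accumulates in the captured buffer.
execSync(`sf project deploy start --source-dir ${tempDeployDir} --json`);

// Option 2 (alternative): if the output is never read at all, discard it so
// the buffer can't grow no matter how chatty the command is. execSync still
// throws on a non-zero exit code, so the success/failure check is unchanged.
// execSync(`sf project deploy start --source-dir ${tempDeployDir}`, { stdio: 'ignore' });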
Closing this; we fixed the memory issue a few days after it was reported. If you have any other ideas or suggestions for improving the deploy/retrieve output, feel free to post them here: https://github.com/forcedotcom/cli/discussions/categories/ideas
In case anybody's looking for this:
To test the command's non-interactive output locally, set the environment variable CI to true
Example:
CI=true sf project deploy start
Source: https://github.com/oclif/multi-stage-output/blob/c6b6c8ed94c4288a47b36d134a31e8c8fead558b/src/multi-stage-output.tsx#L34-L44
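As a related note, if you drive the CLI from a Node script the way earlier comments in this thread do, the same switch can be applied through the child environment. A minimal sketch, assuming the deploy command from the tip above:

const { execSync } = require('node:child_process');

// Force the non-interactive (CI) output path by setting CI=true in the child
// environment, the script equivalent of `CI=true sf project deploy start`.
execSync('sf project deploy start', {
  stdio: 'inherit',
  env: { ...process.env, CI: 'true' },
});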