azure-xplat-cli
Upload of WebJob fails with Azure CLI (process out of memory)
We are trying to use the Azure CLI on Linux (EDIT: the CLI fails on Windows as well) to upload a WebJob as part of our continuous deployment pipeline.
azure site job upload -v $WEB_JOB_NAME $WEB_JOB_TYPE run.zip $WEB_SITE_NAME
But the command fails after more than 20 minutes of waiting on the "Uploading WebJob" step.
FATAL ERROR: CALL_AND_RETRY_2 Allocation failed - process out of memory
Some more info:
- The CLI is properly authenticated. We can trigger already existing WebJobs just fine.
- The exact same run.zip uploads successfully from Microsoft Azure PowerShell on Windows.
- The zip file contains a runnable jar and a small .cmd script to start it (see the sketch after this list). File size: 30 MB.
- We tried setting the verbose flag, but it does not give any more information.
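For reference, a minimal sketch of how a zip like ours is put together (run.cmd and the jar path are from our build; adjust to yours):
# run.cmd simply launches the jar; both entries sit at the zip root
printf 'java -jar run.jar\r\n' > run.cmd
# -j junks directory paths so target/run.jar lands at the root of run.zip
zip -j run.zip run.cmd target/run.jar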
EDIT:
Noticed that the Node.js install on my server was heavily outdated. I tried upgrading, but the command still fails. However, we now get a more detailed error log:
- Uploading new WebJob
<--- Last few GCs --->
48346 ms: Scavenge 1400.1 (1449.2) -> 1400.1 (1449.2) MB, 3.6 / 0 ms (+ 3.6 ms in 1 steps since last GC) [allocation failure] [incremental marking delaying mark-sweep].
49061 ms: Mark-sweep 1400.1 (1449.2) -> 1399.5 (1448.2) MB, 714.8 / 0 ms (+ 37.2 ms in 2 steps since start of marking, biggest step 33.6 ms) [last resort gc].
49980 ms: Mark-sweep 1399.5 (1448.2) -> 1398.2 (1449.2) MB, 919.2 / 0 ms [last resort gc].
<--- JS stacktrace --->
==== JS stack trace =========================================
Security context: 0xbdfe837339 <JS Object>
1: keys [native v8natives.js:182] [pc=0x2d3aadca1cc4] (this=0xbdfe836b61 <JS Function Object (SharedFunctionInfo 0xbdfe836ad1)>,K=0xbdfe8a16b1 <an Uint8Array with map 0x25bf32151d09>)
2: stringifyObject(aka stringifyObject) [/usr/local/lib/node_modules/azure-cli/node_modules/eyes/lib/eyes.js:176] [pc=0x2d3aadf9074a] (this=0xbdfe804131 <undefined>,obj=0xbdfe8a16b1 <an Uint8Array with map 0...
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - process out of memory
programdatasender/src/main/bin/packandeploy.sh: line 25: 13358 Aborted (core dumped) azure site job upload $WEB_JOB_NAME $WEB_JOB_TYPE run.zip $WEB_SITE_NAME
I am getting the same error when downloading a large file through the node.js interface.
<--- Last few GCs --->
761431 ms: Mark-sweep 1394.8 (1445.7) -> 1394.7 (1445.7) MB, 442.9 / 0 ms [allocation failure] [GC in old space requested].
761869 ms: Mark-sweep 1394.7 (1445.7) -> 1394.7 (1445.7) MB, 439.0 / 0 ms [allocation failure] [GC in old space requested].
762324 ms: Mark-sweep 1394.7 (1445.7) -> 1391.4 (1445.7) MB, 455.2 / 0 ms [last resort gc].
762760 ms: Mark-sweep 1391.4 (1445.7) -> 1394.7 (1445.7) MB, 434.8 / 0 ms [last resort gc].
<--- JS stacktrace --->
==== JS stack trace =========================================
Security context: 0000017DB40E3AD1 <JS Object>
1: emitNode(aka emitNode) [C:\Users\#######\node_modules\azure-storage\node_modules\xml2js\node_modules\sax\lib\sax.js:~592] [pc=0000023443B9414E] (this=0000017DB4004189 <undefined>,parser=0000012B3030C751 <an SAXParser with map 000003076F542481>,nodeType=0000012B30314519 <String[10]: onclosetag>,data=0000024FCBF93989 <String[16]: Content-Encoding>)
2...
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - process out of memory
Did you ever find a solution?
@delgadom No, I ended up using the REST API. This is from my upload script:
# Workaround because the upload does not work from the CLI
siteScmUrl="https://mywebsite.scm.azurewebsites.net/"
publishingUserName='user'
publishingPassword='password'
jobPath="api/triggeredwebjobs/$WEB_JOB_NAME"
fullUrl="$siteScmUrl$jobPath"
# Upload the zip file using curl; --data-binary keeps the zip bytes intact
curl -v -X PUT --data-binary @run.zip -u "$publishingUserName:$publishingPassword" -H 'Content-Type: application/zip' -H 'Content-Disposition: attachment; filename="run.cmd"' "$fullUrl"
NB: I start the WebJob with run.cmd, so you may have to change the filename in the Content-Disposition header if your zip uses a different entry point.
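For completeness, the same Kudu API can also start the job once it is uploaded (POST to the job's /run endpoint, using the same publishing credentials):
# Trigger the freshly uploaded triggered WebJob via the Kudu REST API
curl -v -X POST -u "$publishingUserName:$publishingPassword" "${siteScmUrl}api/triggeredwebjobs/$WEB_JOB_NAME/run"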
This is likely an issue with the Azure CLI itself, not a service-side issue. Using the REST API is definitely the best course of action for now if you hit this issue.
Too bad no one from MSFT cares about this after a year! I'm running into the same issue attempting to use the pre-built Azure CLI Docker image.
docker run microsoft/azure-cli azure storage blob list
After peeling back a few layers of the onion, the workaround I'm currently attempting to use is:
docker run microsoft/azure-cli node --max_old_space_size=8000000 /usr/bin/azure <command>
http://stackoverflow.com/questions/26094420/fatal-error-call-and-retry-last-allocation-failed-process-out-of-memory
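If you run this more than once, a small wrapper function saves retyping the flag (a sketch: /usr/bin/azure is where the image's CLI lives per the command above, and the heap value is the same one used there):
# Sketch: wrap the container invocation so every azure command gets the larger V8 heap
azure_big_heap() {
  docker run --rm microsoft/azure-cli node --max_old_space_size=8000000 /usr/bin/azure "$@"
}
azure_big_heap storage blob list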
That fixes the OOM error, but then runs into an "invalid string length" error, so one step closer.
@ahmedelnably - Can you take a look at this issue?