Array buffer allocation failed during ember test
I recently updated ember-qunit to the latest version, which also required me to install qunit and @ember/test-helpers manually. I noticed that when running my ~7000 unit tests I would hit an error like this:
```
not ok 1889 Chrome 92.0 - [1221 ms] - Unit | Component | <myComponent>
    ---
        actual: >
            null
        stack: >
            RangeError: Array buffer allocation failed
                at new ArrayBuffer (<anonymous>)
                at new Int32Array (<anonymous>)
                at new HeapImpl (http://localhost:9001/assets/vendor.js:70039:19)
                at artifacts (http://localhost:9001/assets/vendor.js:70301:13)
                at new Renderer (http://localhost:9001/assets/vendor.js:35061:52)
                at new InteractiveRenderer (http://localhost:9001/assets/vendor.js:35316:3)
                at Function.create (http://localhost:9001/assets/vendor.js:35325:14)
                at FactoryManager.create (http://localhost:9001/assets/vendor.js:25093:25)
                at Proxy.create (http://localhost:9001/assets/vendor.js:24811:20)
                at instantiateFactory (http://localhost:9001/assets/vendor.js:24912:71)
```
To be honest, not sure where to go from here. Any advice would be appreciated.
ember-cli: 3.24.0
node: 14.17.3
npm: 6.14.13
Note: I opened an issue on the ember-cli repo as well -- not sure best place to put this.
What version of Ember are you using?
> What version of Ember are you using?
3.24.4
That error looks like glimmer-vm is attempting to create its heap (which it does via a typed array), and that is throwing an error. For context, here is the implementation of HeapImpl:
https://github.com/glimmerjs/glimmer-vm/blob/4f1bef0d9a8a3c3ebd934c5b6e09de4c5f6e4468/packages/%40glimmer/program/lib/program.ts#L77
When does this error happen? Is it the first test, or somewhere in the middle of your 7k test suite? What is the memory consumed at the time the error happens?
It occurs usually after around ~1500 tests have completed.
How should I calculate the memory in order to show you?
Something else worth noting, if I use ember-exam and use --split 4 --parallel, they pass just fine.
> How should I calculate the memory in order to show you?
I was thinking just using Chrome's Window -> Task Manager functionality to see what the tab is using.
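If you want a number from inside the page itself, here is a rough sketch that logs heap usage after each test. It relies on `performance.memory`, a non-standard, Chrome-only API, and `formatMB` is our own helper, not part of QUnit or Ember:

```javascript
// Convert a byte count to megabytes with one decimal place.
function formatMB(bytes) {
  return (bytes / (1024 * 1024)).toFixed(1);
}

// Guarded so the file is harmless outside a QUnit/Chrome environment.
if (typeof QUnit !== 'undefined') {
  QUnit.testDone(function (details) {
    if (typeof performance !== 'undefined' && performance.memory) {
      var mem = performance.memory;
      console.log(
        details.module + ' :: ' + details.name +
        ' | heap: ' + formatMB(mem.usedJSHeapSize) + 'MB / limit: ' +
        formatMB(mem.jsHeapSizeLimit) + 'MB'
      );
    }
  });
}
```

Dropping this into tests/test-helper.js gives a per-test trace of heap growth, which makes it easier to see whether memory climbs steadily or jumps at specific tests.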
Sure! It's also worth noting this error occurs both in the browser (via /tests) and on the command line (via ember test).
By the time the errors start to show up, I'm seeing ~900MB memory usage in the tab and it failed around test 1800.
Some more data:
We have 7385 tests: ~1000 integration tests, 10 application tests, and the rest unit tests.
Turns out that the entire test suite runs just fine on Firefox.
But I can confirm @justinzelinsky's numbers. In the latest Chrome it crashes right after 1800 tests have run, at a memory allocation of ~900MB.
Is the memory growing until it eventually crashes at ~900MB, or is it fairly steady?
The thing I'm trying to figure out here is if this error is happening due to a memory leak throughout the test suite...
The memory is definitely growing and eventually crashes at ~900MB.
Would a memory leak somewhere in the test suite also explain why Firefox has no issue running the entire suite?
Firefox memory consumption maxes out at 50MB somewhere during the first 1000 tests (mostly application and rendering tests) and then drops just below 10MB for the rest of the suite (unit tests).
> Would a memory leak somewhere in the test suite also explain why Firefox has no issue running the entire suite?
Not sure to be honest. It doesn't obviously rule it out though (it's totally possible that Firefox will allow more overall memory to be used by a tab before preventing it). What is the memory used by Firefox? Does it also continue to grow (even past the ~900MB level)?
Haha, you beat me to the question!
Is it as easy as saying if we had a memory leak it would leak in any browser?
I'm just chiming in to confirm that we're also getting this on a ~10K-test suite, after upgrading from 3.4 to 3.24. I'll do the same memory analysis and reply back with some numbers.
However, we get this when running with ember-exam. I'm going to try to run it straight in the browser without ember-exam to see if there's the same result.
In the browser, the error started appearing around 700MB of memory used by the tab, after it had been growing steadily. This was somewhere around 800-900 tests (I forgot to check, but I don't think it matters that much). I wanted to open the console to profile the tab, but the tab crashed right after.
It definitely looks like a memory leak. I'll try doing a memory profile to see if I can pinpoint the root cause.
(sorry for the spam!) A first look at the memory profiles reveals that the _applicationInstances Set in the App object is steadily growing.
First memory snapshot: _(screenshot not preserved)_

Second memory snapshot: _(screenshot not preserved)_

Digging in the retainers for a random ArrayBuffer object: _(screenshot not preserved)_
Is there something we might be doing wrong in our test suites that's causing the App object to retain all these instances? Or maybe something we should be doing in a hook to tear them down?
Thanks for the details, much appreciated. I would love to know the answers to your questions :)
(later edit to avoid spam) Here is a summary of all the scenarios I've run and the results:
| Environment | Browser | Splits? | Results |
|---|---|---|---|
| Browser | Chrome | No | Crashes when the memory reaches about 700MB, after ~900 tests. |
| Browser | Firefox | No | Runs fine, the memory hovered around 50MB for more than 1K tests and doesn't seem to keep increasing. |
| CLI | Chrome | 2 | Crashes |
| CLI | Chrome | 4 | Succeeds |
| CLI | Chrome | 8 | Succeeds |
| CLI | Firefox | 2 | Succeeds |
As you can see, Firefox has no issues running the suite, while Chrome is clearly memory-limited. Splitting the test suite into enough partitions lets each instance finish, but as we add more tests we may need to keep increasing the split count.
This looks very similar to something we hit on a project with a lot of acceptance tests (800+): in CI, Chrome was running out of memory during the tests. In our case we were using sinonjs, and in some of our tests we were not restoring stubs, which kept the app instance alive, made memory grow, and eventually crashed the run. If your project uses sinonjs, you may want to take a look at ember-sinon-qunit, or manually verify that each of your stubs is restored at the end of each test (see the sinonjs docs about this).
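If migrating to ember-sinon-qunit isn't an option right away, a minimal global fallback might look like this in tests/test-helper.js. This is a sketch assuming sinon v5+, where `sinon.restore()` restores everything created through the default sandbox:

```javascript
import sinon from 'sinon';

// Restore all stubs/spies/mocks created through sinon's default sandbox
// after every test, so they don't keep references to app objects alive.
QUnit.testDone(function () {
  sinon.restore();
});
```

Note that stubs created through manually-created sandboxes (`sinon.createSandbox()`) still need their own restore calls; this hook only covers the default sandbox.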
Just ran a test on our project by commenting out setupSinon() (which automatically restores all stubs after each test) in tests/test-helper.js:
- on Chrome, for 80 acceptance tests run, snapshot is 3x bigger
- on Firefox, for 80 acceptance tests run, snapshot is the same
If you are not using sinon, maybe you could try the setupTestIsolationValidation option to identify possibly leaking tests?
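For reference, here is a hedged sketch of enabling that option in tests/test-helper.js. The surrounding setup follows the standard ember-qunit blueprint; import paths may differ per app:

```javascript
// tests/test-helper.js
import Application from '../app';
import config from '../config/environment';
import { setApplication } from '@ember/test-helpers';
import { start } from 'ember-qunit';

setApplication(Application.create(config.APP));

// Reports tests that leave async state behind (pending timers,
// runloops, etc.) when they finish, which can help pinpoint leaks.
start({ setupTestIsolationValidation: true });
```

The flag only flags leaked async work, so it won't catch every kind of retained memory, but it's a cheap first pass.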
It could also help to know whether, before the update to the latest version of ember-qunit, you also saw memory growth on Chrome (maybe it was just less significant, so it never consumed enough memory to trigger this error?)
Edit: in fact we have the same issue on another app, using [email protected]; if we use the latest version of ember-qunit, memory keeps growing (whereas with version 4.6.0 everything works fine).
For example
- 307 unit tests
  - [email protected] -> snapshot = 185mb
  - [email protected] -> snapshot = 186mb
- 358 integration tests
  - [email protected] -> snapshot = 158mb
  - [email protected] -> snapshot = 1558mb ⚠️
@ndekeister-us thanks for your input, much appreciated. I'll most likely start to attack the memory issue by converting to ember-sinon-qunit, which is overdue for us anyway.
We were getting this same issue after upgrading Ember from 3.16 to 3.24. In the same PR we also upgraded ember-qunit from 4.6 to 5.1.1, in the end we managed to resolve by reverting the ember-qunit upgrade.
For us, the situation was not resolved by downgrading. We were originally using v5.1.4 and [email protected]. However, after a test-helpers upgrade to 2.4.0 the memory snapshot retainers changed a bit: they were no longer in _applicationInstances, but rather in another area, in the context of setup-rendering-context.js.
I tried downgrading to 4.6.0, but the error still pops up, with the memory used increasing steadily. Furthermore, trying to take a memory snapshot now crashes every time, at a specific part of the recording process. There's probably some kind of reference that's crashing the tab/console.
Having the same issue with the downgraded version as well makes me think it's most likely something on our end. We are not using Sinon, but testdouble, a similar library. While I'm sure we're properly tearing down the stubs, I'm not ruling out that there is something else in our code or the library's code that's retaining this memory.
TLDR: There seems to be an issue with WeakMaps in Chrome (starting with 92.0.4515.107-1), and adding some extra teardown in the tests, or using an older Chrome version to run the suite, no longer triggers this error in our case.
Long version:
After some further investigation with v5.1.4, in our case it seems this is somehow related to WeakMaps in Chrome. This might be caused by slow GC, not necessarily a memory leak, but I'm not yet sure.
The biggest retainer I've encountered is DESTROYABLES_META, which retains references to basically any destroyable, as seen below:
_(screenshot not preserved)_
I added the following code in test-setup.js to handle this (enableDestroyableTracking and assertDestroyablesDestroyed come from @ember/destroyable):

```javascript
import {
  enableDestroyableTracking,
  assertDestroyablesDestroyed,
} from '@ember/destroyable';

QUnit.testStart(function () {
  enableDestroyableTracking();
  assertDestroyablesDestroyed();
});
```
This effectively resets the DESTROYABLES_META WeakMap on each test start.
After this change the tab no longer goes over 500MB of memory usage, but at some point I still start getting the allocation failed error.
Some further analysis reveals that the -top-level-view:main entity registered by @ember/test-helpers is also retained in a WeakMap, as seen below:
_(screenshot not preserved)_
I added the following code in test-setup.js to try to tear this down as well (import paths may differ per app):

```javascript
import Application from '../app';
import config from '../config/environment';

const application = Application.create(config.APP);

QUnit.testDone(function () {
  application._applicationInstances.forEach((appInstance) => {
    appInstance.unregister('-top-level-view:main');
  });
});
```
But unfortunately this runs too late: at that point the instances array is already empty. Instead, I wrote a wrapper for setupRenderingTest that adds the following teardown:
```javascript
hooks.afterEach(function () {
  this.owner.unregister('-top-level-view:main');
});
```
And this finally does the trick, but it is still a workaround, and doesn't explain or handle the underlying issue.
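A minimal sketch of such a wrapper could look like this; the module and the function name are our own, and it simply delegates to ember-qunit's setupRenderingTest before adding the workaround teardown:

```javascript
// tests/helpers/setup-rendering-test-with-cleanup.js (hypothetical path)
import { setupRenderingTest } from 'ember-qunit';

export default function setupRenderingTestWithCleanup(hooks, options) {
  setupRenderingTest(hooks, options);

  hooks.afterEach(function () {
    // Workaround: drop the registration that was being retained.
    this.owner.unregister('-top-level-view:main');
  });
}
```

Rendering tests would then call setupRenderingTestWithCleanup(hooks) instead of setupRenderingTest(hooks).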
My colleague @cristi-badila suggested to try an older Chrome version to run the suite (we tried 81.0.4044.92-1, Ubuntu 64bit, from Feb 2020 at first), and sure enough the error does not pop up with this version, which further confirms that this is indeed a Chrome issue.
I then started bisecting the versions (with 94 being the latest version) to figure out where the issue started occurring, as follows:
- 87.0.4280.66-1 => passes
- 91.0.4472.77-1 => passes
- 93.0.4577.63-1 => fails
- 92.0.4515.107-1 => fails
- 91.0.4472.164-1 => passes
It is now pretty obvious that, for our suite at least, the Chrome version makes the difference, and 92.0.4515.107-1 is the first one where the behavior changed. I am not sure whether this is a bug in Chrome, or how to proceed on this issue.
@monovertex thanks for sharing your findings in such great detail. It's very impressive and much appreciated.
This seems to confirm our earlier inclination that this is a bug in Chrome since our test suite runs just fine in Firefox.
Has anyone made any further progress here? We are experiencing the same problem in Edge/Chrome. Unfortunately, enableDestroyableTracking()/assertDestroyablesDestroyed() and the unregister workaround didn't help much. We are already on Ember 4; simply downgrading ember-qunit breaks the tests completely, and it doesn't seem like a nice solution anyway.