Error: ENOENT: no such file or directory, uv_resident_set_memory
Do you want to request a feature or report a bug? bug
What is the current behavior?
An error is thrown when using any yarn CLI command, e.g. yarn init:
Error: ENOENT: no such file or directory, uv_resident_set_memory
at process.memoryUsage (internal/process/per_thread.js:142:5)
at ConsoleReporter.checkPeakMemory (/www/htdocs/w00a6d73/.usr/nodejs/lib/node_modules/yarn/lib/cli.js:33077:40)
at ConsoleReporter.initPeakMemoryCounter (/www/htdocs/w00a6d73/.usr/nodejs/lib/node_modules/yarn/lib/cli.js:33068:10)
at /www/htdocs/w00a6d73/.usr/nodejs/lib/node_modules/yarn/lib/cli.js:91524:14
at Generator.next (<anonymous>)
at step (/www/htdocs/w00a6d73/.usr/nodejs/lib/node_modules/yarn/lib/cli.js:304:30)
at /www/htdocs/w00a6d73/.usr/nodejs/lib/node_modules/yarn/lib/cli.js:322:14
at new Promise (<anonymous>)
at new F (/www/htdocs/w00a6d73/.usr/nodejs/lib/node_modules/yarn/lib/cli.js:5228:28)
at /www/htdocs/w00a6d73/.usr/nodejs/lib/node_modules/yarn/lib/cli.js:301:12
If the current behavior is a bug, please provide the steps to reproduce.
The origin of this bug seems different from the other ENOENT issues I saw in the issue list.
I already did some research on this behaviour. The error occurs on Linux systems where /proc/self/stat is not readable.
The bug is not really a problem of yarn but of the libuv library, which Node is using: https://github.com/nodejs/node-v0.x-archive/issues/10426
So any use of memoryUsage will produce the same error:
const process = require('process');
process.memoryUsage();
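To confirm that the failure is containable, here is a minimal sketch (my own illustration, not yarn code, using only Node's built-in process API) showing that the throwing call can be wrapped in a try/catch:

```javascript
// On Linux, process.memoryUsage() reads /proc/self/stat via libuv and throws
// ENOENT when that file is not readable; a try/catch contains the failure.
let heapTotal;
try {
  heapTotal = process.memoryUsage().heapTotal;
} catch (e) {
  // e.code is 'ENOENT' on restricted systems; fall back to a dummy value
  heapTotal = 0;
}
console.log(typeof heapTotal);
```

Either branch leaves heapTotal a number, so code downstream of the call keeps working whether or not procfs is readable.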
This happens in base-reporter.js:

checkPeakMemory() {
  const {heapTotal} = process.memoryUsage();
  if (heapTotal > this.peakMemory) {
    this.peakMemory = heapTotal;
  }
}
Is it possible to catch this exception and set a default value for peakMemory, or to get this information from another API?
What is the expected behavior?
Yarn should also work on restricted Linux systems where /proc/self/stat is not readable. npm works there, but of course I don't want to switch back to npm.
Please mention your node.js, yarn and operating system version. I tested this bug with the following versions: [email protected] [email protected] [email protected] [email protected]
I'm finding this on Monday mornings if I leave react-scripts running over the weekend.
yarn @ 1.17.3 node @ 10.15.3
I get the same error on Windows Subsystem for Linux.
Here is my workaround for WSL: https://github.com/rumbu/yarn/commit/c8ed1600ef08cab2059cb3a25c754b0ee95082c4
I have not tested that particular code. I just patched my globally installed yarn, in lib/cli.js at line 33565, and that worked:
BaseReporter.prototype.checkPeakMemory = function checkPeakMemory() {
  var heapTotal;
  try {
    var _process$memoryUsage = process.memoryUsage();
    heapTotal = _process$memoryUsage.heapTotal;
  } catch (e) {
    // process.memoryUsage() throws ENOENT when /proc/self/stat is not readable
    heapTotal = 1000000;
  }
  if (heapTotal > this.peakMemory) {
    this.peakMemory = heapTotal;
  }
};
I can make a PR for this, but I doubt that it will be merged, since they say that yarn 1.x is frozen now. The good news is that I can't find any calls to process.memoryUsage in the new repo: https://github.com/yarnpkg/berry/search?q=memoryUsage&type=code
For anyone who comes across this, I was hitting this issue due to an apparmor profile applied to /usr/bin/node (rather than a specific nodejs application)
Grrr, I have just had the same problem when setting up chroot on my brand new (hosted) server which came with the following versions pre-installed:
node @ v20.8.0 yarn @ 1.22.19
@rumbu's commit fixes it. This thread has been going for 4 years and yarn version 1 is still being installed on production systems. It doesn't look like that is going to change any time soon. If there is another 1.x release, can we get this in? I really don't want to patch every chrooted yarn installation for another 4 years!
UPDATE: I ditched plesk in the end; it was creating more problems than it was solving. Afterwards, I could install anything I liked in the confidence that it wasn't going to interfere with plesk. So I'm on v2 now and don't care about v1 ;) Thanks for the tip, jeffmahoney.
I ran across this issue too. The workaround by @rumbu is one way to fix it. The root cause in chroot environments is that the source for memory usage is procfs, which won't be available inside a chroot unless the user bind-mounts it themselves, like so: mount --bind /proc <chroot dir>/proc