pylance-release
Allow setting --max-old-space-size when using python.analysis.nodeExecutable
Environment data
- Language Server version: v2024.4.1
- OS and version: Windows 11 on remote ssh ubuntu 22.04
- Python version (& distribution if applicable, e.g. Anaconda): conda 3.11.6
There is a similar setting combination for the VS Code TypeScript server:
"typescript.tsserver.maxTsServerMemory": 7000,
"typescript.tsserver.nodePath": "/usr/bin/node",
The node binary shipped with vscode-server is limited to 4GB of memory (mentioned in #4121), which can be confirmed by creating a pyrightconfig.json with:
{
"verboseOutput": true
}
I know that if you set python.analysis.nodeExecutable, vscode-server instead spawns something like this:
/usr/bin/node --max-old-space-size=8192 /home/<username>/.vscode-server/extensions/ms-python.vscode-pylance-2024.4.1/dist/server.bundle.js -- --clientProcessId=86565 --cancellationReceive=file:4bbdc9682bf51899139738844718b8c238a33ab9a4 --stdio
but it spawns with a fixed memory size of 8192 MB.
If you use the default one shipped with vscode, it spawns:
/home/<username>/.vscode-server/cli/servers/Stable-e170252f762678dec6ca2cc69aba1570769a5d39/server/node /home/<username>/.vscode-server/extensions/ms-python.vscode-pylance-2024.4.1/dist/server.bundle.js --cancellationReceive=file:9d6d62c0930a910053bdeb0b73253fee3ee469b770 --node-ipc --clientProcessId=3626529
Sometimes python.analysis.nodeExecutable just disappears from the vscode Settings (UI) menu, and when this occurs, Settings (JSON) also reports "Unknown Configuration Setting". However, when this happens, if you also set NODE_OPTIONS=--max-old-space-size=<some int>, the vscode internal server actually launches node with that heap size. But it would still crash once memory usage exceeds 6-7 GB. (This happened on v2024.4.0 and v2024.4.1, but rarely.)
(Notice the setting disappears from user settings but still appears in remote settings. Sometimes it's missing from both.)
Thus I wonder whether there is a setting to launch python.analysis.nodeExecutable with a custom --max-old-space-size?
Setting the env variable NODE_OPTIONS=--max-old-space-size=<some int> just doesn't work.
Just curious whether I should modify async createLanguageClient(_resource, _interpreter, clientOptions) or const serverProcess = cp.spawn(runtime, args, execOptions); under LanguageClient? I just couldn't find where extension.js reads python.analysis.nodeExecutable.
Error log taken from an earlier crash:
[3896855:0x7efdf0000ff0] 96711 ms: Scavenge 6349.3 (6503.1) -> 6335.2 (6503.8) MB, 10.06 / 0.00 ms (average mu = 0.999, current mu = 0.999) allocation failure;
[3896855:0x7efdf0000ff0] 96743 ms: Scavenge 6351.0 (6504.8) -> 6337.3 (6506.6) MB, 10.10 / 0.00 ms (average mu = 0.999, current mu = 0.999) allocation failure;
[3896855:0x7efdf0000ff0] 96776 ms: Scavenge 6353.5 (6507.3) -> 6339.8 (6508.8) MB, 10.04 / 0.00 ms (average mu = 0.999, current mu = 0.999) allocation failure;
<--- JS stacktrace --->
FATAL ERROR: MarkCompactCollector: young object promotion failed Allocation failed - JavaScript heap out of memory
----- Native stack trace -----
2024-04-09 13:02:40.871 [info] 1: 0xcd8bd6 node::OOMErrorHandler(char const*, v8::OOMDetails const&) [/usr/bin/node]
2024-04-09 13:02:40.872 [info] 2: 0x10aed20 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [/usr/bin/node]
2024-04-09 13:02:40.872 [info] 3: 0x10af007 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [/usr/bin/node]
2024-04-09 13:02:40.873 [info] 4: 0x12cdfe5 [/usr/bin/node]
2024-04-09 13:02:40.873 [info] 5: 0x1301eae void v8::internal::LiveObjectVisitor::VisitMarkedObjectsNoFail<v8::internal::EvacuateNewSpaceVisitor>(v8::internal::Page*, v8::internal::EvacuateNewSpaceVisitor*) [/usr/bin/node]
2024-04-09 13:02:40.874 [info] 6: 0x1310faa v8::internal::Evacuator::RawEvacuatePage(v8::internal::MemoryChunk*) [/usr/bin/node]
2024-04-09 13:02:40.874 [info] 7: 0x1311462 v8::internal::Evacuator::EvacuatePage(v8::internal::MemoryChunk*) [/usr/bin/node]
2024-04-09 13:02:40.875 [info] 8: 0x131177f v8::internal::PageEvacuationJob::Run(v8::JobDelegate*) [/usr/bin/node]
2024-04-09 13:02:40.875 [info] 9: 0x1f91cdd v8::platform::DefaultJobState::Join() [/usr/bin/node]
2024-04-09 13:02:40.876 [info] 10: 0x1f922b3 v8::platform::DefaultJobHandle::Join() [/usr/bin/node]
2024-04-09 13:02:40.876 [info] 11: 0x130e815 v8::internal::MarkCompactCollector::EvacuatePagesInParallel() [/usr/bin/node]
2024-04-09 13:02:40.877 [info] 12: 0x131d5a0 v8::internal::MarkCompactCollector::Evacuate() [/usr/bin/node]
2024-04-09 13:02:40.877 [info] 13: 0x131defd v8::internal::MarkCompactCollector::CollectGarbage() [/usr/bin/node]
2024-04-09 13:02:40.878 [info] 14: 0x12e2c3e v8::internal::Heap::MarkCompact() [/usr/bin/node]
2024-04-09 13:02:40.878 [info] 15: 0x12e399d v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::internal::GarbageCollectionReason, char const*) [/usr/bin/node]
2024-04-09 13:02:40.879 [info] 16: 0x12e4209 [/usr/bin/node]
2024-04-09 13:02:40.879 [info] 17: 0x12e4818 [/usr/bin/node]
2024-04-09 13:02:40.880 [info] 18: 0x1a34081 [/usr/bin/node]
Sorry, but there is currently no way to set the memory higher than 8GB. We can use this issue as an enhancement request to allow that to be set.
Alternatively, we could figure out why your workspace uses so much memory in the first place. Can you share your workspace/repo?
I think either reading the NODE_OPTIONS env variable when python.analysis.nodeExecutable is set, or just offering an option like typescript.tsserver.maxTsServerMemory does, would be good enough.
It should also be emphasized in #pylance-is-crashing that the vscode native node executable can use only 4GB of RAM, and even if you choose a custom node executable, you still can only use up to 8GB of RAM.
> you still can only use up to 8GB of ram.
This actually isn't true. The total RAM per worker thread is limited to 8GB. We run 3 at the moment, so it can grow to 24GB. With the VS Code node, by contrast, the total space is 4GB, so we run out of RAM much faster with the VS Code exe.
Can you share a repro for the situation where the 24GB is not enough?
> This actually isn't true. The total ram per worker thread is limited to 8GB. We run 3 at the moment, so it can grow to 24GB. Whereas with the VS code node, the total space is 4GB. We run out of ram much faster with the VS code exe.
I can't tell whether the worker actually runs three threads (or spawns 3 processes?).
To me, if the workspace folder is opened at the same location as the virtual env folder (this way it is much easier to locate a file when you press "Go to Definition", and the vscode file explorer will jump to the site-packages folder too) and python.analysis.userFileIndexingLimit is set to -1, the log shows frequent heap GCs and it eventually crashes.
Is the '3' documented anywhere in the repo? 4GB is too little, and 8GB x 3 may be too large for some people. Thus I want to know whether these can be set (worker process count and memory limit).
Worker process count is a function of how our code works; it's not a configurable option. The memory limit is set by us when we start the node process (and it's not in the open source code, unfortunately).
However, this only works if you use a different node process than the one that ships with VS Code. That one uses pointer compression, which limits the amount of memory allowed to 4GB.
Ideally we wouldn't crash in either case. We'd rather find what's taking so much memory in your scenario and figure out a way to not use so much.
I saw there were some modifications to #pylance-is-crashing, so I am wondering whether setting NODE_OPTIONS can affect the parameters passed to the node executable by now.
> I am wondering if setting NODE_OPTIONS can affect the parameter passed to node executable by now.
Sorry but it won't. We hardcode the limit at the moment to 8GB per thread.
> To me, if the workspace folder is opened at the same location with virtual env folder
This shouldn't make us analyze .py files in the .venv; I think some settings are messed up. If you are doing that on purpose to make indexing index all 3rd-party packages, you should instead use the packageIndexDepths option so you can use persisted indices rather than re-parsing all 3rd-party packages every time.
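For reference, the suggested approach would look roughly like this in settings.json; this is a hedged sketch, where the empty name (meaning "all packages") and the depth value are illustrative:

```json
{
  "python.analysis.packageIndexDepths": [
    { "name": "", "depth": 2, "includeAllSymbols": true }
  ]
}
```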
> I am wondering if setting NODE_OPTIONS can affect the parameter passed to node executable by now.
> Sorry but it won't. We hardcode the limit at the moment to 8GB per thread.
Then I don't think this makes sense.
For those using vscode-server remotely, you can increase the memory limit by setting the NODE_OPTIONS environment variable in your shell configuration.
On Linux or Mac, add export NODE_OPTIONS="--max-old-space-size=8192" to your .xxx_profile or .xxxrc file. On Windows, add set NODE_OPTIONS=--max-old-space-size=8192 to a startup batch file, or open the System Properties window and add NODE_OPTIONS=--max-old-space-size=8192 as a system environment variable.
Since you are fixing the size to 8192 anyway, does modifying NODE_OPTIONS change the startup behavior at all?
> Since you are just fixing the size to 8192 anyways, does modifying NODE_OPTIONS make any changes to the startup behavior?
We only force the size to 8192 if you set the node executable. If you use the default VS Code node, you can set NODE_OPTIONS. It only happens to work in the remote case, though.
NODE_OPTIONS will likely be one of the ways we'll allow the user to set the limit when we fix this issue though.
Seems like it should be:
- NODE_OPTIONS are used if set
- some new setting, maybe python.analysis.maxNodeMemory
> Since you are just fixing the size to 8192 anyways, does modifying NODE_OPTIONS make any changes to the startup behavior?
> We only force the size to 8192 if you set the node executable. If you use the default VS code node, you can set the NODE_OPTIONS. It only happens to work in the remote case though.
Well, I thought the node server shipped with vscode-server could only use 4GB? Or is that wrong? If NODE_OPTIONS is intended for the internal node server, what is its upper limit?
I tested again with >8GB of RAM, since I recall it crashing at about 6GB of RAM usage after several GCs (while the remote server has >100GB of free RAM).
Crash log:
2024-05-08 02:23:55.623 [info] [Info - 2:23:55 AM] (411570) Heap stats: total_heap_size=5366MB, used_heap_size=4769MB, cross_worker_used_heap_size=4769MB, total_physical_size=5365MB, total_available_size=21564MB, heap_size_limit=26432MB
<--- Last few GCs --->
[411570:0x5f624000e60] 962223 ms: Scavenge 3778.5 (4270.7) -> 3767.6 (4273.2) MB, 9.6 / 0.0 ms (average mu = 0.979, current mu = 0.967) allocation failure;
[411570:0x5f624000e60] 962279 ms: Scavenge 3783.3 (4274.4) -> 3772.4 (4276.9) MB, 9.2 / 0.0 ms (average mu = 0.979, current mu = 0.967) allocation failure;
[411570:0x5f624000e60] 962337 ms: Scavenge 3787.9 (4277.7) -> 3776.8 (4277.9) MB, 11.7 / 0.0 ms (average mu = 0.979, current mu = 0.967) allocation failure;
<--- JS stacktrace --->
FATAL ERROR: MarkCompactCollector: young object promotion failed Allocation failed - JavaScript heap out of memory
2024-05-08 02:23:57.019 [info] 1: 0xb85bc0 node::Abort() [/home/
2024-05-08 02:23:57.019 [info] 2: 0xa94834 [/home/
2024-05-08 02:23:57.020 [info] 3: 0xd66d10 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [/home/
2024-05-08 02:23:57.020 [info] 4: 0xd670b7 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/home/
2024-05-08 02:23:57.021 [info] 5: 0xf447c5 [/home/
2024-05-08 02:23:57.021 [info] 6: 0xfa1c49 v8::internal::EvacuateNewSpaceVisitor::Visit(v8::internal::HeapObject, int) [/home/
2024-05-08 02:23:57.022 [info] 7: 0xfa344e v8::internal::FullEvacuator::RawEvacuatePage(v8::internal::MemoryChunk*, long*) [/home/
2024-05-08 02:23:57.023 [info] 8: 0xf6f8ef v8::internal::Evacuator::EvacuatePage(v8::internal::MemoryChunk*) [/home/
2024-05-08 02:23:57.023 [info] 9: 0xf6fc85 v8::internal::PageEvacuationJob::Run(v8::JobDelegate*) [/home/
2024-05-08 02:23:57.024 [info] 10: 0x1aef046 v8::platform::DefaultJobState::Join() [/home/
2024-05-08 02:23:57.025 [info] 11: 0x1aef0b3 v8::platform::DefaultJobHandle::Join() [/home/
2024-05-08 02:23:57.025 [info] 12: 0xf84377 unsigned long v8::internal::MarkCompactCollectorBase::CreateAndExecuteEvacuationTasks<v8::internal::FullEvacuator, v8::internal::MarkCompactCollector>(v8::internal::MarkCompactCollector*, std::vector<std::pair<v8::internal::ParallelWorkItem, v8::internal::MemoryChunk*>, std::allocator<std::pair<v8::internal::ParallelWorkItem, v8::internal::MemoryChunk*> > >, v8::internal::MigrationObserver*) [/home/
2024-05-08 02:23:57.026 [info] 13: 0xf848dc v8::internal::MarkCompactCollector::EvacuatePagesInParallel() [/home/
2024-05-08 02:23:57.027 [info] 14: 0xfa0365 v8::internal::MarkCompactCollector::Evacuate() [/home/
2024-05-08 02:23:57.027 [info] 15: 0xfa0e88 v8::internal::MarkCompactCollector::CollectGarbage() [/home/
2024-05-08 02:23:57.028 [info] 16: 0xf54601 v8::internal::Heap::MarkCompact() [/home/
2024-05-08 02:23:57.029 [info] 17: 0xf560d8 [/home/
2024-05-08 02:23:57.029 [info] 18: 0xf56a48 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/home/
2024-05-08 02:23:57.030 [info] 19: 0xf313ae v8::internal::HeapAllocator::AllocateRawWithLightRetrySlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/home/
2024-05-08 02:23:57.030 [info] 20: 0xf32777 v8::internal::HeapAllocator::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/home/
2024-05-08 02:23:57.031 [info] 21: 0xf1394a v8::internal::Factory::NewFillerObject(int, v8::internal::AllocationAlignment, v8::internal::AllocationType, v8::internal::AllocationOrigin) [/home/
2024-05-08 02:23:57.031 [info] 22: 0x12d8e6d v8::internal::Runtime_AllocateInOldGeneration(int, unsigned long*, v8::internal::Isolate*) [/home/
That should work. How did you make sure the NODE_OPTIONS were set?
> That should work. How did you make sure the NODE_OPTIONS were set?
The crash log (output from pyrightconfig.json with verboseOutput: true) shows heap_size_limit=, and it equals the --max-old-space-size= I set (about 26GB of RAM, 1/4 of the server's free RAM).
Hmm, maybe that doesn't work then. Maybe the vscode-server node is somehow still using pointer compression and has a limit on the amount of memory available.
@heejaechang, how did you find out that the vscode-server node was different from the desktop one?
This is how you can check. If you run
node -p "process.config"
it will show the flags node was built with; look for v8_enable_pointer_compression to see whether node was built with pointer compression.
For vscode, you need to set ELECTRON_RUN_AS_NODE=1 in your env, and then run code.exe (not code.cmd) with -p "process.config".
For vscode, you will see v8_enable_pointer_compression: 1.
For vscode-server, you first need to find where vscode-server is installed on your remote, usually ~/.vscode-server. Once you have found it, find the node binary in that folder and do the same. (You won't need ELECTRON_RUN_AS_NODE there, since it is a regular node, not Electron.)
For vscode-server, you will see v8_enable_pointer_compression: 0 (at least, that's what it shows for me).
If you wonder, you can try it yourself to verify.
> Well, I thought node server shipped with vscode-server can only do 4GB? or is that wrong? If the NODE_OPTIONS is intended for internal node server, then what could be its upper limit?
There are 2 different concepts of memory limitation: one is a hard threshold set by pointer compression, and the other is a soft threshold set by --max-old-space-size.
By default, node (at least code and vscode-server) has 4GB as the soft threshold, but you can increase it with the option up to the hard threshold.
For pointer-compression-enabled node, that hard threshold is 4GB. For regular 64-bit node, the upper limit is much higher, though I believe it depends on the node version. That said, as far as I know, node doesn't recommend setting it bigger than needed, since it affects GC performance: the bigger the limit, the slower the GC.
> Heap stats: total_heap_size=5366MB, used_heap_size=4769MB, cross_worker_used_heap_size=4769MB, total_physical_size=5365MB, total_available_size=21564MB, heap_size_limit=26432MB
This indicates you have plenty of memory, so it might be another kind of issue related to how the JS GC works.
It would be nice if we could repro it ourselves. Is there any way we could get a repro project?
From recent testing, I think 5.1 is much better and more stable than 4.0. In 4.0, it randomly crashed if I opened the folder along with a venv in a subdirectory. I forget whether I set python.analysis.diagnosticMode to workspace or openFilesOnly.
This time I just included the venv path and the system site-packages path in the extra paths and turned on workspace diagnostic mode (my workspace contains some git-cloned ML libraries and my own stuff), intending to stress test by asking Pylance to do a full scan.
Anyways,
Since the local code.exe node server has a hard limit of 4GB, while specifying a node executable has a hard limit of 8GB, I think there is some demand for users to be able to adjust their memory usage themselves.
As for remote vscode-server, if setting --max-old-space-size can adjust the node server (no hard limit) shipped with vscode, then I am fine with that and won't mess with nodeExecutable again.