nvc
fatal: out of memory attempting to allocate 65536 byte object
I get the error below several times before it exits in a longer testbench. I didn't see any increase in the memory used by the process. Do you have a suggestion for debugging it, or a clue where the issue is in the NVC code?
It could be that the simulation just needs more than the default 16 MB heap size? You can try increasing that with -H 32m. If it's consistently running out of memory even with a larger heap, please try setting export NVC_GC_VERBOSE=1 before running and then post the ** Debug: lines here.
It didn't help. Here is the log (though with the 16m heap); it seems to be an issue with UVVM:
note: GC: allocated 15775200/16777216; fragmentation 1.9% [0 us]
note: GC: allocated 15831872/16777216; fragmentation 2.1% [0 us]
note: GC: allocated 15831872/16777216; fragmentation 2.1% [0 us]
C:\proj\libraries\uvvm\uvvm_util\src\generic_queue_pkg.vhd:1092:3: fatal: out of memory attempting to allocate 18504 byte object
*** Caught exception c0000005 (EXCEPTION_ACCESS_VIOLATION) [address=0000000000000000, ip=00007FFB03DC871A] ***
[00007FF687E16834]
[00007FFB8B850057] UnhandledExceptionFilter+0x1e7
[00007FFB8B06F0B8]
[00007FFB8B06EEA1]
[00007FFB8DDF53B0] memset+0x13b0
[00007FFB8DDDC766] _C_specific_handler+0x96
[00007FFB8DDF229F] _chkstk+0x11f
[00007FFB8DDA1454] RtlRaiseException+0x434
[00007FFB8DDF0DCE] KiUserExceptionDispatcher+0x2e
[00007FFB03DC871A] MY_VVC.TD_RESULT_QUEUE_PKG.T_GENERIC_QUEUE.FETCH(I39UVVM_UTIL.TYPES_PKG.T_IDENTIFIER_OPTION8POSITIVE)17T_GENERIC_ELEMENT+0x13a [VHDL]
[00007FFB03D81390] MY_VVC.TD_VVC_ENTITY_SUPPORT_PKG.INTERPRETER_FETCH_RESULT(55MY_VVC.TD_RESULT_QUEUE_PKG.T_GENERIC_QUEUE48MY_VVC.VVC_CMD_PKG.T_VVC_CMD_RECORD48MY_VVC.VVC_METHODS_PKG.T_VVC_CONFIG58MY_VVC.TD_VVC_ENTITY_SUPPORT_PKG.T_VVC_LABELS7NATURAL46MY_VVC.VVC_CMD_PKG.T_VVC_RESPONSE)+0x860 [VHDL]
[00007FFB03D64559] WORK.TB.I_MY_VVC.P_CMD_INTERPRETER+0x8e9 [VHDL]
[00007FF687EA9F1B] nvc_current_delta+0xb9b
[00007FF687E9DCCB]
[00007FF687E9FFCE]
[00007FF687EAA9E5] nvc_current_delta+0x1665
[00007FF687EA4355] test_net_event+0xb15
[00007FF687E12898]
[00007FF687E12B86]
[00007FF687EF0CA2] vhpi_put_data+0x22a22
[00007FF687E113AE]
[00007FF687E114E6]
The crash after running out of memory is a regression caused by some changes a few days ago, it should be fixed now.
Does it help if you use an even bigger value like -H 256m? I'm just trying to work out if something is leaking memory or not.
Yes, actually a value of 64m is enough to make it pass.
End of 32m log:
note: GC: allocated 29160352/33554432; fragmentation 0.85% [0 us]
note: GC: allocated 29160352/33554432; fragmentation 0.85% [0 us]
fatal: out of memory attempting to allocate 65536 byte object
note: GC: 100 collection cycles; 0 us total; -nan(ind)% of overall run time
I found another out-of-memory issue that I could minimize. I can make it pass if I set the heap to 1024m, but that seems excessive.
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

library uvvm_util;
context uvvm_util.uvvm_util_context;

entity test is
end entity test;

architecture beh of test is
begin

  process
    variable v_rand : t_rand;
    variable v_int  : integer;
  begin
    for i in 1 to 10 loop
      v_int := v_rand.rand(0, 2 ** 29, CYCLIC);
    end loop;
    for i in 1 to 10 loop
      v_int := v_rand.rand(0, 2 ** 30, CYCLIC);
    end loop;
    for i in 1 to 10 loop
      v_int := v_rand.rand(0, integer'right, CYCLIC);
    end loop;
    for i in 1 to 10 loop
      v_int := v_rand.rand(integer'left, integer'right, CYCLIC);
    end loop;
    wait;
  end process;

end architecture beh;
Regarding the initial issue: I found that I was adding packets to one of the UVVM generic queues for the VVCs but not removing the received ones, so it is natural that the heap was growing.
Using the default 16 MB heap size the out-of-memory error is triggered by line 2419 in UVVM's rand_pkg.vhd:
priv_cyclic_list := new t_cyclic_list(min_value to max_value);
With your test min_value is zero and max_value is 536870912. t_cyclic_list is an array of std_logic, which takes up one byte per element in NVC, so the whole allocation needs 512 MB. If I run with -H 513m the test passes, so I don't think there's anything wrong here, although if possible it would be interesting to check that the memory usage is similar in other simulators.
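For reference, the size works out like this (a quick back-of-the-envelope check; the one-byte-per-element figure is NVC's representation as described above):

```python
# Size of UVVM's t_cyclic_list allocation for rand(0, 2**29, CYCLIC),
# assuming one byte per std_logic element.
min_value = 0
max_value = 2 ** 29                    # 536870912, from the test case above
elements = max_value - min_value + 1   # inclusive range
size_mib = elements / 2 ** 20
print(f"{elements} elements -> {size_mib:.0f} MiB")
# prints: 536870913 elements -> 512 MiB
```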
The reason NVC raises an error for this is that new doesn't call the system malloc, but instead uses its own internal garbage collected heap which currently has a fixed size specified by the -H argument. (This means deallocate is basically a no-op, it just sets the access argument to null.) If the GC fails to recover enough memory to satisfy an allocation it throws this error.
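As a toy model of that behaviour (purely illustrative, not NVC's actual implementation; the class and method names are invented):

```python
# Toy model of a fixed-size GC-managed heap: allocation bumps a counter,
# "deallocate" is a no-op, and exhausting the heap raises the fatal error.
class FixedHeap:
    def __init__(self, size: int):
        self.size = size   # fixed capacity, like the -H argument
        self.used = 0

    def allocate(self, nbytes: int) -> None:
        # A real GC would first try a collection cycle to reclaim dead
        # objects; here we model the case where that fails to free enough.
        if self.used + nbytes > self.size:
            raise MemoryError(
                f"out of memory attempting to allocate {nbytes} byte object")
        self.used += nbytes

    def deallocate(self) -> None:
        pass  # no-op: memory is only reclaimed by the garbage collector
```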
I'm open to suggestions on how to improve this, but one advantage of the current behaviour is that it makes it easy to identify memory leaks or unexpectedly large allocations without the simulation taking up all the memory on your machine. A potential improvement would be to grow the heap incrementally after every GC, up to some fixed fraction of the total RAM (this is what Java does, for example).
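That growth policy could look something like the following sketch (hypothetical; the function name, thresholds, and growth factor are all invented for illustration):

```python
# Hypothetical heap-growth policy: after a GC cycle that still leaves the
# heap nearly full, grow the heap geometrically, capped at a fraction of RAM.
def next_heap_size(current: int, live_bytes: int, total_ram: int,
                   max_fraction: float = 0.25, growth: float = 2.0,
                   full_threshold: float = 0.9) -> int:
    cap = int(total_ram * max_fraction)
    if live_bytes > current * full_threshold and current < cap:
        return min(int(current * growth), cap)
    return current  # enough headroom after collection; keep the current size
```

Starting from the default 16 MB, a leaking testbench would then trigger doublings (and could warn at each step) instead of failing outright, while still bounding total usage.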
Ah, I see now. I'll see if I can do some comparisons on simulators, maybe some are doing sparse allocation.
I agree this can be useful for detecting memory leaks or unintended allocations. Incremental increases, with warnings starting from the specified limit, sounds like a usable idea: you would still get the information on leaks and bad allocations.
Does setting the value pre-allocate the system memory, or is it only an upper limit?
It just reserves the memory and will only allocate when those pages get touched. But the GC only kicks in once the limit is reached, so if you run with -H 1024m the memory usage will keep growing until it hits 1GB and then stay there.
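This reserve-then-touch behaviour can be illustrated with an anonymous memory mapping (a rough cross-platform sketch using Python's mmap; on Windows the analogous mechanism is VirtualAlloc, and resident-set accounting is platform dependent):

```python
import mmap

# Reserving a large anonymous mapping costs essentially no physical memory
# up front; the OS backs pages lazily when they are first written.
size = 64 * 2 ** 20              # "reserve" 64 MiB of address space
heap = mmap.mmap(-1, size)       # anonymous private mapping

heap[0:4096] = b"\x01" * 4096    # touching a page commits physical memory
assert heap[0] == 1              # only the touched pages are now resident

heap.close()
```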
I was testing another TB that allocates a lot on the heap, and nvc recommended I increase to -H 2048m, but that does not seem to work with the allocator:
** Fatal: VirtualAlloc: The parameter is incorrect.
There was a bug with handling heap sizes >= 2GB. I've fixed that now, could you try again? I tested with a 6GB heap on a Windows VM without issue.
Thanks, that works. I tried with 20GB.