
Heap exhausted during garbage collection

AnrDaemon opened this issue 5 years ago • 44 comments

While trying to run a full new import using a fresh pgloader container, I stumbled upon:

2019-05-07T11:56:36.063000Z DATA < #(" 0" "1e73536d-364a-4ac5-9556-26080ae675be"
    "a2159ea6-dcd4-4bfb-a659-8a30d17471a3" " 7" "000" "0" " 0" "008" "" " 2"
    "2014-06-03" "НСТ Весна-6" "5074" "5074"
    "c24a92ab-0163-469a-a56a-081cf40ddbe4" "НСТ Весна-6" "46436000000"
    "46736000" "21" "5a11404e-3a4f-4943-b60c-51480729dcfb" "000"
    "500000080000260" "142180" "02bf0bee-3882-4364-8ad1-dc004a0ef2be" "50"
    "тер" "2014-02-28" "0260" "" "" "2016-08-09" "000" "0000" "000" " 0"
    "3e5ba4cb-7b5e-4258-86e0-8b91ca39d601" "0000" "" "0")
2019-05-07T11:56:36.065000Z DATA < #(" 0" "2dbae092-4d28-4692-bfd7-509c92eb6896"
    "30e0f893-20d3-4c65-b476-7db50ad93492" " 7" "000" "0" " 0" "008" "" " 2"
    "2014-06-03" "Овальная" "5074" "5074"
    "5e1ee261-ee43-4004-8c93-2f5f33d3c864" "Овальная" "46436000000" "46736000"
    "21" "5a11404e-3a4f-4943-b60c-51480729dcfb" "000" "500000080000051"
    "142180" "2e68e5bf-0f7b-4bab-aac9-e99e57445dfc" "50" "ул" "2014-02-28"
    "0051" "" "" "2016-04-14" "000" "0000" "000" " 0"
    "3e5ba4cb-7b5e-4258-86e0-8b91ca39d601" "0000" "" "0")
Heap exhausted during garbage collection: 288 bytes available, 432 requested.
 Gen StaPg UbSta LaSta LUbSt Boxed Unboxed LB   LUB  !move  Alloc  Waste   Trig    WP  GCs Mem-age
   0:     0     0     0     0     0     0     0     0     0        0     0 10737418    0   0  0.0000
   1:     0     0     0     0     0     0     0     0     0        0     0 10737418    0   0  0.0000
   2: 28005 28307     0     0   855  9423     0     0    12 334403376 2386128 10737418    0   0  0.5853
   3: 10322 32767     0     0   793  7934     8     0     0 284273120 1955360 10737418    0   0  0.0000
   4:     0     0     0     0     0     0     0     0     0        0     0 10737418    0   0  0.0000
   5:     0     0     0     0  1027  9584    38     0    50 345980496 2965936 356717914  769   1  0.0000
   6:     0     0     0     0  1886  1220     0     0     0 101777408     0  2000000 1787   0  0.0000
   Total bytes allocated    = 1066434400
   Dynamic-space-size bytes = 1073741824
GC control variables:
   *GC-INHIBIT* = true
   *GC-PENDING* = false
   *STOP-FOR-GC-PENDING* = false
fatal error encountered in SBCL pid 133(tid 140627020216064):
Heap exhausted, game over.

Welcome to LDB, a low-level debugger for the Lisp runtime environment.
ldb> 

I can assert that nothing has significantly changed on my end: these are the exact same unaltered files I imported last time. The only change was in the loader script, and it was minimal:

-        SET search_path to 'fias'
+        SET search_path TO 'fias'

AnrDaemon avatar May 07 '19 12:05 AnrDaemon

You can try playing with the batch size options, and in particular reducing the prefetch rows setting, which defaults to 100,000 rows (see https://pgloader.readthedocs.io/en/latest/pgloader.html#with).
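
For example, on the command line that looks something like this (the connection strings here are placeholders, not taken from this issue):

$ pgloader --with "prefetch rows = 25000" \
      mysql://user:pass@source-host/dbname \
      pgsql://user:pass@target-host/dbname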

dimitri avatar May 07 '19 21:05 dimitri

But what would have changed to cause the issue, if the same database loaded just fine previously? Any server changes that may affect it?

AnrDaemon avatar May 07 '19 22:05 AnrDaemon

Also, how does one determine the optimal setting? What should I look at?

AnrDaemon avatar May 08 '19 09:05 AnrDaemon

The problem is with the SBCL garbage collector, and I don't have proper answers for how to avoid it in general. I guess at some point we should get back to speaking with the SBCL maintainers and see about improving the GC there. It sounds like too much of a task for me to handle though, unfortunately.

dimitri avatar May 08 '19 11:05 dimitri

Sad story. It settled down at 25k records for now, but given your explanation, I'm compelled to make it configurable.

AnrDaemon avatar May 08 '19 13:05 AnrDaemon

What heap size was SBCL started with? Does it help if you start SBCL with a bigger heap?

phoe avatar May 08 '19 22:05 phoe

How can I know? I'm using the provided Docker container and not setting any extra options. However, the error text indicates a bug in the GC, to me.

AnrDaemon avatar May 09 '19 08:05 AnrDaemon

I see.

It is not necessarily a GC bug per se - it is possible that pgloader requires a bigger heap to function properly with this database. The provided Docker container would need to be modified to take this into account. Normally this is achieved by running SBCL with the --dynamic-space-size flag and specifying the new heap size. I do not know how to pass this flag into the container, as I'm not a container person.

phoe avatar May 09 '19 09:05 phoe

@dimitri Is there any kind of flag for the container that allows specifying the SBCL heap size? If not, there should be one.

phoe avatar May 09 '19 09:05 phoe

I'm not sure it's possible to set that on a saved image at run time; my understanding was that it needs to happen when cooking the image somehow?

dimitri avatar May 10 '19 14:05 dimitri

From #sbcl on Freenode:

17:00 < phoe> I have a SAVE-LISP-AND-DIE'd SBCL image that executes an application. 
              Can I pass a --dynamic-size argument to that binary like I would to a 
              normal SBCL image?
17:03 < phoe> Or is it the case of :SAVE-RUNTIME-OPTIONS argument passed to S-L-A-D?
17:05 < jackdaniel> save-runtime-options makes the executable inherit dynamic size, 
                    if save-runtime-options is nil, then executable will process 
                    --dynamic-space-size argument and set appropriate values

Via the SBCL manual:

:save-runtime-options If true, values of runtime options --dynamic-space-size and --control-stack-size that were used to start SBCL are stored in the standalone executable, and restored when the executable is run. This also inhibits normal runtime option processing, causing all command line arguments to be passed to the toplevel. Meaningless if :executable is nil.
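
In other words, for the executable to keep processing --dynamic-space-size at startup, the dump call has to pass :save-runtime-options nil. A minimal sketch, where MAIN stands in for whatever toplevel function the application actually uses:

;; Sketch only: dump an executable that still parses runtime options
;; such as --dynamic-space-size when it starts up.
(sb-ext:save-lisp-and-die "app"
                          :toplevel #'main
                          :executable t
                          :save-runtime-options nil)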

phoe avatar May 10 '19 15:05 phoe

Nice! Do you want to try to hack ./src/save.lisp so that we can pass a dynamic space size, either when building the Docker container, or maybe to the resulting pgloader image? I guess that with the current version of things --dynamic-space-size might work already; it's just not documented, right?

dimitri avatar May 10 '19 17:05 dimitri

I guess that with the current version of things --dynamic-space-size might work already; it's just not documented, right?

No, currently :SAVE-RUNTIME-OPTIONS is set to T:

https://github.com/dimitri/pgloader/blob/954eca02d0d27f6ce71be48d01ae4987029bca11/src/save.lisp#L83

This means that the heap size cannot be modified from outside, if I understand it correctly.

So far, it seems we can either pass --dynamic-space-size or the normal pgloader argv, but not both at the same time; one option might be starting SBCL with a predetermined heap size in the build script, so the executable is built with a bigger heap by default. Where in the build script is the sbcl binary invoked?

I'm also exploring other options on #sbcl on Freenode.

phoe avatar May 10 '19 17:05 phoe

Have a look at https://github.com/dimitri/pgloader/blob/master/Makefile#L9 for how to define the dynamic space size in the build process for pgloader. Currently, this only works for the legacy build system that relies on buildapp. I intend to switch to the new build system at src/save.lisp sometime, and it's now used in both Travis and Docker builds to make sure it's reliable.

It should be fairly easy to add --dynamic-space-size $(DYNSIZE) to SBCL builds when using the make save target?
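
Roughly along these lines (an illustrative sketch, not the actual Makefile):

# Sketch only: start the dumping SBCL with the requested heap size;
# with :save-runtime-options t the saved image inherits that size.
SBCL    ?= sbcl
DYNSIZE ?= 4096

save:
	$(SBCL) --dynamic-space-size $(DYNSIZE) --load src/save.lisp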

dimitri avatar May 10 '19 20:05 dimitri

Yes, it doesn't seem hard to do that.

Nonetheless, you cannot use save.lisp itself for setting the heap size. It is a Lisp file, so it is loaded once SBCL has already started, by which point the heap size is already set in stone.

phoe avatar May 10 '19 21:05 phoe

I just ran into this issue this past weekend. I am using one Linux server with a load file that targets 2 RDS instances:

LOAD DATABASE FROM mysql://confuser:[email protected]/confluence INTO pgsql://confuser:PASSWORD@ac16f4chba0zdi6.cq1bwppz1j9k.us-east-1.rds.amazonaws.com/confluence

   CAST type bigint when (= 20 precision) to bigint drop typemod

   ALTER schema 'confluence' rename to 'public'; 

Here is my version: pgloader version "3.6.1" compiled with SBCL 1.3.1.debian

I was reading that I could possibly increase --dynamic-space-size, but once I am in the sbcl terminal I am unsure of the syntax to increase it. Does anyone know?

mdrozdz17 avatar Jun 17 '19 13:06 mdrozdz17

I was reading that I could possibly increase --dynamic-space-size, but once I am in the sbcl terminal I am unsure of the syntax to increase it. Does anyone know?

Try the syntax:

$ sbcl --dynamic-space-size 8192

Where 8192 is your preferred heap size in megabytes.
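
To check that the setting took effect once you reach the REPL, this should work, assuming your SBCL exports SB-EXT:DYNAMIC-SPACE-SIZE (it reports the size in bytes; the value below is what 8192 MB comes out to):

* (sb-ext:dynamic-space-size)
8589934592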

phoe avatar Jun 17 '19 14:06 phoe

Looks like 8192 is out of range, but I tried 3500 and that brought me to the terminal. Do you know if there is a way to verify this now?

I'll also try the migration to see if it gets me past this error for now to see if that helps.

mdrozdz17 avatar Jun 17 '19 15:06 mdrozdz17

I just tried it, and it looks like I'm still running into the issue:

Heap exhausted during garbage collection: 32768 bytes available, 115984 requested.
 Gen StaPg UbSta LaSta LUbSt Boxed Unboxed LB   LUB  !move  Alloc  Waste   Trig    WP  GCs Mem-age
   0:     0     0     0     0     0     0     0     0     0        0     0 85899345    0   0  0.0000
   1:     0     0     0     0     0     0     0     0     0        0     0 85899345    0   0  0.0000
   2: 42723 248946     0     0  1980 133485     0 47815    55 5150103072 855615968 85899345    0   0  1.5745
   3:  5770 27142     0     0  1342 11646  1900     0   154 484057216 3792768  2000000 2474   0  0.4397
   4:     0     0     0     0     0     0     0     0     0        0     0  2000000    0   0  0.0000
   5:     0     0     0     0     0     0     0     0     0        0     0  2000000    0   0  0.0000
   6:     0     0     0     0  3826  1329     0     0     0 168919040     0  2000000 3622   0  0.0000
   Total bytes allocated    = 7421137312
   Dynamic-space-size bytes = 8589934592
GC control variables:
   *GC-INHIBIT* = true
   *GC-PENDING* = false
   *STOP-FOR-GC-PENDING* = false
fatal error encountered in SBCL pid 1850(tid 140736755595008):
Heap exhausted, game over.

mdrozdz17 avatar Jun 17 '19 15:06 mdrozdz17

I was able to get past this error. Basically, at first I was specifying DYNSIZE=8192, and this got me the error above. As a test I upped it to DYNSIZE=20000, and that got me past my error.

mdrozdz17 avatar Jun 17 '19 19:06 mdrozdz17

You can also try to limit the amount of memory used by pgloader by tweaking the batch size parameters, starting with prefetch rows = 10000 (it defaults to 100,000, which is a lot of rows if you have a table with either lots of columns or very large values). See https://pgloader.readthedocs.io/en/latest/pgloader.html#batch-behaviour-options.

dimitri avatar Jun 17 '19 20:06 dimitri

I was able to get past this error. Basically, at first I was specifying DYNSIZE=8192, and this got me the error above. As a test I upped it to DYNSIZE=20000, and that got me past my error.

Can you please let me know how that can be done from the command line? I am facing the same issue while migrating my Confluence database to PostgreSQL.

appy2401 avatar Sep 06 '19 06:09 appy2401

Hi @appy2401, if you are building from source, you can run something like:

make DYNSIZE=20000 pgloader

which will build a binary that has the increased heap size.

If you want to tweak the prefetch rows parameter, you can build a command file that looks like this:

load database
    from <source data>
    into <destination postgres>
with prefetch rows = 10000;

LincolnBryant avatar Sep 10 '19 21:09 LincolnBryant

Sorry, I don't understand how this is usable with huge database imports when the garbage collector barfs out. Given that I don't want to deal with the esoteric toolchain, what's the best option, trying the Python version 2?

In addition, pgloader --with "prefetch rows = 1000000" ./db.sqlite3 postgresql:///foo barfs out with an unreadable error:

#<CLOSURE (LAMBDA (COMMAND-LINE-ARGUMENTS::VALUE)
            :IN
            COMMAND-LINE-ARGUMENTS::MAKE-OPTION-ACTION) {10061971BB}> fell through ETYPECASE expression.
Wanted one of (NULL KEYWORD SYMBOL CONS).

hasufell avatar Oct 31 '19 08:10 hasufell

The latter is a TYPE-ERROR, a bug in the logic, and it warrants a separate ticket. We would need a backtrace to debug it properly.

@dimitri How does one produce a Lisp backtrace with pgloader?

phoe avatar Oct 31 '19 08:10 phoe

prefetch rows = 10000 worked for me.

cgricb avatar Feb 03 '20 15:02 cgricb

Using prefetch rows, I could load more data, but the problem continues.

bruno-lopes avatar Mar 24 '20 13:03 bruno-lopes

@dimitri How does one produce a Lisp backtrace with pgloader?

Using --debug triggers condition handling that uses trivial-backtrace and provides some level of information. The code being multi-threaded (using lparallel) sometimes makes those stack traces not as useful as one would like, though I think I got it into good shape after some tweaking.
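
In practice that is just the following, with my.load standing in for your command file:

$ pgloader --debug my.load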

dimitri avatar Mar 27 '20 22:03 dimitri

The problem persists even when using a smaller prefetch rows = 1000.

dastanko avatar May 21 '20 07:05 dastanko

Hi, I also faced this failure. It occurred when loading with 4 workers. Reducing the workers to only 2 helped.
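
For reference, that setting goes in the WITH clause of the load command; a sketch with placeholder connection strings (workers and concurrency are the documented pgloader options):

load database
    from mysql://user:pass@source-host/dbname
    into pgsql://user:pass@target-host/dbname
with workers = 2, concurrency = 1;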

Cheers, G.

itworx4biz avatar Jun 17 '20 21:06 itworx4biz