SSL error: decryption failed or bad record mac
Not sure if anyone else has faced this error?
I'm using python3-psycopg2 version 2.8.6-2.
root : INFO Part 3 of 6 : Start raw admin boundary load : 2021-11-14 03:35:11.941076
root : INFO - Step 1 of 3 : raw admin boundaries loaded : 0:06:19.379195
root : INFO - 15 duplicates removed from raw_admin_bdys_202111.aus_mb_category_class_aut
root : INFO - 7 duplicates removed from raw_admin_bdys_202111.aus_remoteness_category_aut
root : INFO - authority tables deduplicated
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "/usr/lib/python3.9/multiprocessing/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "/usr/local/gnaf-loader/geoscape.py", line 45, in run_sql_multiprocessing
pg_cur.execute("SET search_path = {0}, public, pg_catalog".format(settings.raw_gnaf_schema,))
psycopg2.OperationalError: SSL error: decryption failed or bad record mac
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/gnaf-loader/load-gnaf.py", line 1011, in <module>
if main():
File "/usr/local/gnaf-loader/load-gnaf.py", line 120, in main
prep_admin_bdys(pg_cur)
File "/usr/local/gnaf-loader/load-gnaf.py", line 549, in prep_admin_bdys
geoscape.multiprocess_list("sql", sql_list, logger)
File "/usr/local/gnaf-loader/geoscape.py", line 27, in multiprocess_list
result_list = list(results)
File "/usr/lib/python3.9/multiprocessing/pool.py", line 870, in next
raise value
psycopg2.OperationalError: SSL error: decryption failed or bad record mac
It's pointing towards being a multiprocessing issue related to the use of a Postgres connection pool. More investigation required, as all connections should be running independently in each process.
Hey Hugh - I have the exact same issue as Andrew, though I'm using psycopg2-binary 2.9.2 on Ubuntu 18.04 / Python 3.6.
I'm also encountering it when running with --max_processes=1.
Happy to help with testing / root-causing it, cheers!
I've reverted the Postgres connection pool used in multiprocessing in favour of "standard" PG connections, one per process. There's evidence that Psycopg connection pools aren't multiprocess-safe. There's only a small impact on performance anyway.
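For anyone curious, the per-process pattern looks roughly like this. It's a sketch, not the actual gnaf-loader code: sqlite3 stands in for psycopg2 so it runs without a database server, and the `init_worker` / `multiprocess_list` names are illustrative. The key point is the `initializer`, which opens the connection inside each worker after the fork:

```python
import multiprocessing
import os
import sqlite3  # stand-in for psycopg2 so the sketch runs without a server

_conn = None  # one connection per worker process, never shared across forks


def init_worker():
    """Open the connection inside the child process, after the fork."""
    global _conn
    # real code would do something like: _conn = psycopg2.connect(dsn)
    _conn = sqlite3.connect(":memory:")


def run_sql(sql):
    # each worker drives only its own connection, so no SSL state is shared
    cur = _conn.cursor()
    cur.execute(sql)
    return os.getpid(), cur.fetchone()[0]


def multiprocess_list(sql_list, processes=2):
    with multiprocessing.Pool(processes, initializer=init_worker) as pool:
        return pool.map(run_sql, sql_list)


if __name__ == "__main__":
    results = multiprocess_list(["SELECT 1"] * 4)
```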
Code changes are currently in the 202111 branch if you want to test (they will work with 202108 data, but not 202111 data yet). WARNING: I haven't done a full run with the code changes yet!
I tried running the 202111 branch against 202108 data, but it failed after a lot of constraint violation messages. I'll just wait for Nov 21 to be ready.
root : INFO - Step 7 of 14 : addresses populated : 0:01:29.125589
root : INFO - Step 8 of 14 : principal alias lookup populated : 0:00:04.159270
root : INFO - Step 9 of 14 : primary secondary lookup populated : 0:00:13.173880
Traceback (most recent call last):
File "/usr/local/gnaf-loader/load-gnaf.py", line 1010, in <module>
if main():
File "/usr/local/gnaf-loader/load-gnaf.py", line 128, in main
create_reference_tables(pg_cur)
File "/usr/local/gnaf-loader/load-gnaf.py", line 642, in create_reference_tables
pg_cur.execute(geoscape.open_sql_file("03-10-reference-split-melbourne.sql"))
psycopg2.errors.UndefinedTable: relation "admin_bdys_202111.locality_bdys" does not exist
LINE 14: FROM admin_bdys_202111.locality_bdys AS bdy
The new release is done and in master. Please test.