markhooper99
I'm an Oracle guy and a total Postgres noob, but I've read a few things today about how to configure a Postgres table for high-volume inserts. For the table in...
The server is a VM configured with 12 CPUs (Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz) and 32 GB of RAM. DATA_LIMIT is set to 30000.
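For anyone following along, those batch settings live in ora2pg.conf; a minimal sketch with only the two values mentioned in this thread (everything else left at defaults, and these are my values, not a recommendation):

```
# ora2pg.conf (excerpt) -- values from this run
DATA_LIMIT      30000    # rows fetched from Oracle and sent to Postgres per batch
BLOB_LIMIT      200      # smaller batch size used for tables that contain LOB columns
```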
Thanks for the info. I've reconfigured my ora2pg config file with your suggestions and am trying an --oracle_speed run now.
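In case it's useful to anyone else, the read-speed test is just the normal export invocation with the extra flag; a sketch (the config file path and export type here are my assumptions):

```
# measures how fast Oracle can hand rows to ora2pg; nothing is written to Postgres
ora2pg -c ora2pg.conf -t COPY --oracle_speed
```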
A few observations - I let a real run go (with JOBS set to 3 and the ORACLE_COPIES and DEFINED_PK values commented out) for a while, until there was clear evidence that...
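For context, that combination looks roughly like this in ora2pg.conf (a sketch of the relevant lines only; the DEFINED_PK value is a placeholder, not my real column):

```
JOBS             3              # parallel processes writing data to PostgreSQL
#ORACLE_COPIES   4              # parallel Oracle extraction, disabled for this run
#DEFINED_PK      TABLE:COLUMN   # placeholder; only needed when ORACLE_COPIES is enabled
```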
Trying an --ora2pg_speed run

**_apologies in advance for the incoming wall of text that follows_**

One minute in...

[> ] 285000/351152224 total rows (0.1%) - (56 sec., avg: 5089...
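(For completeness: unlike the earlier --oracle_speed test, which only measures how fast Oracle can send rows, --ora2pg_speed measures how fast ora2pg can transform them; the config file path and export type below are my assumptions.)

```
ora2pg -c ora2pg.conf -t COPY --ora2pg_speed
```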
Here are my ora2pg config settings ...

ALLOW SEIS_POINT
AUTODETECT_SPATIAL_TYPE 0
AUTONOMOUS_TRANSACTION 1
BITMAP_AS_GIN 1
BLOB_LIMIT 200
BZIP2
COMMENT_COMMIT_ROLLBACK 0
COMMENT_SAVEPOINT 0
COMPILE_SCHEMA 0
CONTEXT_AS_TRGM 0
CONVERT_SRID 1
COPY_FREEZE 0
COST_UNIT_VALUE...
**_Again, apologies for such a massive post, but I'm trying to provide as much detail as possible._**

I adjusted one more thing on the Oracle side - the table in...
Thanks for the response. Do you know of any way I could force the RowCacheSize in DBD::Oracle?

EDIT... actually, never mind - it appears that this is set...
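(For anyone curious, RowCacheSize is a standard DBI database-handle attribute that DBD::Oracle honours as its row-prefetch hint; a minimal sketch of setting it by hand, with a placeholder DSN, credentials, and table name:)

```perl
use strict;
use warnings;
use DBI;

# Placeholder connection details -- adjust for your environment.
my $dbh = DBI->connect(
    'dbi:Oracle:host=dbhost;sid=ORCL;port=1521',
    'scott', 'tiger',
    { RaiseError => 1 },
) or die $DBI::errstr;

# Hint to the driver: prefetch up to 30000 rows per round trip.
$dbh->{RowCacheSize} = 30000;

my $sth = $dbh->prepare('SELECT * FROM seis_point');
$sth->execute;
while ( my @row = $sth->fetchrow_array ) {
    # ... process the row ...
}
$sth->finish;
$dbh->disconnect;
```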
Trying DBD::Oracle 1.8 - had 1.76 installed previously
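(A quick way to double-check which DBD::Oracle version the Perl running ora2pg actually picks up, assuming perl is on the PATH:)

```
perl -MDBD::Oracle -e 'print "$DBD::Oracle::VERSION\n"'
```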
With JOBS=3 and ORACLE_COPIES=4: something that makes this exceptionally puzzling is that the run works fantastically until the number of records written is just over 50 million, then the 'multiple...
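(For reference, the same parallelism can also be passed on the command line instead of the config file; the config file path and export type here are my assumptions:)

```
# -j / --jobs   : parallel processes sending data to PostgreSQL
# -J / --copies : parallel Oracle connections extracting data (needs a usable PK, or DEFINED_PK)
ora2pg -c ora2pg.conf -t COPY -j 3 -J 4
```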