ogr2ogr rounds the data if the column exists with the numeric data type.
Expected behavior and actual behavior.
During data import using ogr2ogr, if the column already exists in the database, ogr2ogr should not round the numbers!
Actual behavior: the data is rounded to 10 decimals when the column exists in the database as numeric without a declared precision.
Steps to reproduce the problem.
For example, my file: https://drive.google.com/file/d/1IJRs66Lut7sb-eaClNze71qSP6eyxOb6/view?usp=sharing
My schema:
Step 1
CREATE TABLE test ( ogc_fid serial PRIMARY KEY, "value 1" numeric, value2 numeric, value3 text );
Step 2
ogr2ogr -f "PostgreSQL" PG:"dbname=testapp" inputFile.csv -nln test
or
ogr2ogr -f "PostgreSQL" PG:"dbname=testapp" -lco PRECISION=YES -oo AUTODETECT_WIDTH=YES -oo AUTODETECT_TYPE=YES -oo AUTODETECT_SIZE_LIMIT=0 -oo QUOTED_FIELDS_AS_STRING=YES -oo MAX_LINE_SIZE=-1 inputFile.csv -nln test
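For illustration, a minimal inputFile.csv matching the Step 1 schema (the values here are made up for the sketch, not taken from the linked file) could look like this:

"value 1",value2,value3
3607.3299999999999,3607.3299999999999,sample text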
Result
Using these commands to import the data into the database, the result in the database is not the same as the CSV input: INPUT 3607.3299999999999, DATABASE value 3607.33. This happens when I set the schema to numeric at the database level before the import.
If I do the import with the command ogr2ogr -f "PostgreSQL" PG:"dbname=testapp" inputFile.csv -nln test and the schema is not set beforehand, I see exactly the same values in the CSV and the database.
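To see exactly what was stored, one can cast the columns to text (assuming the table and column names from Step 1); casting a numeric value to text prints every stored digit:

SELECT "value 1"::text, value2::text FROM test;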
NOTE: if the column does not exist, ogr2ogr uploads the values correctly, but if the column already exists, ogr2ogr rounds the numbers!
Operating system
macOS
GDAL version and provenance
GDAL 3.8.3, released 2024/01/04
GDAL has no native support for the NUMERIC data type (that is, with arbitrary precision), so what you observe is expected. There is no easy fix for that: it would require introducing a new data type to OGR, which is a lot of work.
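A minimal illustration of the effect in psql: OGR reads such a field as OFTReal, i.e. a C double, and 3607.3299999999999 and 3607.33 round to the same IEEE 754 double, so the extra digits cannot survive the round trip (output as printed by PostgreSQL 12+ with the default extra_float_digits):

SELECT 3607.3299999999999::double precision;  -- 3607.33: the digits are already gone
SELECT 3607.3299999999999::numeric;           -- 3607.3299999999999: numeric keeps them all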
Why does it happen? Because if the column does not exist, the data is right; I tested this case.
The only way to upload the exact values would be with an INSERT statement. If the column does not exist, the values come through right; it is only in this special case, when the column already exists, that ogr2ogr starts rounding the values. A CSV file, when read, already has the values as text, and those could be passed straight into the INSERT statement.
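As a sketch of that point, using the Step 1 schema: if the literal text from the CSV were handed to PostgreSQL untouched, the numeric column would store it exactly:

INSERT INTO test ("value 1", value2, value3)
VALUES (3607.3299999999999, 3607.3299999999999, 'sample text');
SELECT "value 1"::text FROM test;  -- 3607.3299999999999, nothing lost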
Because you created the table as
CREATE TABLE test (
ogc_fid serial PRIMARY KEY,
"total value" numeric,
total_limit numeric,
tiv_range text
);
Numeric is not supported in the GDAL data model, and therefore GDAL uses float, double, etc. when it creates the table from scratch. For your numbers I believe that GDAL would create the field as "double precision".
If you create the tables using data types that both PostgreSQL and GDAL understand in the same way, there should be no problem: https://www.postgresql.org/docs/current/datatype-numeric.html. I do not know if "numeric" is the only unsupported type or if there are others.
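For example, pre-creating the columns as double precision (a sketch; note the caveat that double precision itself cannot distinguish 3607.3299999999999 from 3607.33, so this avoids the type mismatch but not the loss of digits):

CREATE TABLE test (
ogc_fid serial PRIMARY KEY,
"total value" double precision,
total_limit double precision,
tiv_range text
);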