Avoid checking existence of table in call to dbAppendTable()
Issue Description and Expected Result
dbExistsTable() is brittle; success or failure to write data to a table should be reported by the database. Related: #465.
As part of this change, dbWriteTable() should be calling dbAppendTable(), not the other way round.
Database
All.
Reproducible Example
N/A. Relevant code: https://github.com/r-dbi/odbc/blob/main/R/Table.R#L134 .
Greetings -- my company uses DBI/ODBC for an internal tool, and I believe the function dbExistsTable is incorrectly throwing an error on our end when we call dbAppendTable. When I strip out all our custom code so it's only using DBI's functions, I can fetch data from a table but not write to it.
Is the plan to remove the call to dbExistsTable in the definition of dbAppendTable? If so, I think that would solve our problem; if not, should I create a new issue? I'd be happy to provide more details if it would help.
Thanks. That's the plan, though we don't have a timeline yet.
For now, please work around with dbWriteTable(append = TRUE).
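For illustration, a minimal sketch of that workaround; the DSN and table name here are placeholders:

library(DBI)

# Connect via odbc; "my_dsn" is a placeholder DSN.
con <- dbConnect(odbc::odbc(), dsn = "my_dsn")

# Append rows to an existing table through dbWriteTable() rather than
# calling dbAppendTable() directly.
dbWriteTable(con, "mtcars", mtcars, append = TRUE)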
Good to know, thank you for the info!
For what it's worth, I get the same error when using dbWriteTable(append = TRUE). The only way I've managed to get it working is by overriding the definitions of 2 functions in the library (details below).
I imagine this isn't an ideal fix since I'm completely bypassing some checks that probably exist for good reason, but it's a working proof of concept!
- Comment out this in dbAppendTable, which you mentioned above.
- ...and this in odbc_write_table, which is attempting to create a table that already exists because dbExistsTable incorrectly returns FALSE.
Hi there, Arthur.
dbExistsTable is indeed a special snowflake, but it can usually be massaged into working. Having said that (and I haven't thought about this too much), rather than commenting out lines, you should be able to extend the S4 methods to give you the behavior you are looking for, specific to your back-end, even if it's a hack like always returning TRUE.
Which database are you having issues with?
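To illustrate the suggestion above, a minimal sketch of such an S4 override, assuming odbc's OdbcConnection class; as noted, always returning TRUE is a hack that presumes the table really exists:

library(DBI)
library(odbc)

# Override dbExistsTable() for odbc connections so it always reports
# that the table exists, bypassing the brittle existence check.
setMethod("dbExistsTable", signature("OdbcConnection", "character"),
  function(conn, name, ...) {
    TRUE
  }
)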
Thank you for chiming in. Appropriately enough, we are using Snowflake. Any suggestions would be greatly appreciated!
I've been having issues saving data to Snowflake. I raised a question on Stack Overflow before bumping into this thread. I'm wondering if the issues I'm experiencing are due to what's being discussed here?
We had success by using DBI::Id to replace the name argument of dbWriteTable. It avoids scanning everywhere for little benefit.
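# Build a fully qualified identifier (catalog.schema.table) from the
# connection's current database and schema, so the table can be located
# directly instead of being searched for across catalogs.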
tableId <- function(conn, nm) {
DBI::Id(
catalog = DBI::dbGetQuery(conn, "SELECT CURRENT_DATABASE()")[[1]],
schema = DBI::dbGetQuery(conn, "SELECT CURRENT_SCHEMA()")[[1]],
table = nm
)
}
dbWriteTable(conn, tableId(conn, "mtcars"), mtcars)
Thanks for the suggestion @meztez - super helpful. To quantify with an example, on Snowflake, without qualifying the name:
conn <- DBI::dbConnect(
odbc::odbc(),
.connection_string = <connection_string>
)
odbc::dbExecute(conn, "USE DATABASE <MY_DATABASE>")
odbc::dbExecute(conn, "USE SCHEMA PUBLIC")
bench::mark(
odbc::dbWriteTable(conn, "mtcars", mtcars, temporary = TRUE, overwrite = TRUE)
)
Initial results:
# A tibble: 1 × 13
expression min median `itr/sec` mem_alloc `gc/sec` n_itr n_gc total_time result memory time gc
<bch:expr> <bch> <bch:> <dbl> <bch:byt> <dbl> <int> <dbl> <bch:tm> <list> <list> <list> <list>
1 "odbc::dbWriteTable(conn, \"mtca… 7.28m 7.28m 0.00229 3.89MB 0 1 0 7.28m <lgl> <Rprofmem> <bench_tm> <tibble>
After qualification, this was ~2s!
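For reference, the qualified version of the benchmark might look like this; a sketch reusing the tableId() helper above, not necessarily the exact code that produced the ~2s figure:

bench::mark(
  odbc::dbWriteTable(conn, tableId(conn, "mtcars"), mtcars,
                     temporary = TRUE, overwrite = TRUE)
)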