H2 and MySQL report spurious errors in query analyzer.
H2 and MySQL (and likely other database engines as well) report incorrect warnings/errors in the query analyzer. This appears to happen because the database engine hands JDBC incorrect metadata, or because the JDBC drivers themselves are faulty. In any case it does not seem to be a problem with doobie, but with JDBC or the DB engine.
MySQL Problems
Prepared statement params reported as VARCHAR
All prepared statement parameters are inferred as VARCHAR. You must use checkOutput instead of check.
N.B.: Because of this you cannot properly test insert/update statements with the query analyzer at all.
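A minimal sketch of the checkOutput workaround, assuming doobie's specs2 support from the 0.4.x series (doobie-specs2-cats) and a hypothetical person table; names may differ in other versions:

import doobie.imports._
import doobie.specs2.imports._
import org.specs2.mutable.Specification

// Sketch only: assumes a reachable MySQL instance and a hypothetical `person` table.
object CheckOutputSpec extends Specification with AnalysisSpec {
  val transactor = DriverManagerTransactor[IOLite](
    "com.mysql.jdbc.Driver", "jdbc:mysql://localhost/test", "user", "pass"
  )

  val q = sql"select name from person where id = ${42L}".query[String]

  // check(q) would fail here: MySQL reports every parameter as VARCHAR.
  // checkOutput(q) validates only the output columns, so it can pass.
  checkOutput(q)
}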
Functions casting columns to nullable
MySQL function calls always report their result as nullable. This means that when calling MIN/MAX/etc. on a NOT NULL column in a select statement, the query analyzer sees the function result as nullable, even when it is not.
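For example, a sketch of the symptom (hypothetical foo table with a NOT NULL bar column, using the yolo-mode check shown later in this thread):

sql"select max(bar) from foo".query[Int].check.unsafePerformIO
// ✕ the analyzer reports MAX(bar) as nullable even though bar is NOT NULL
sql"select max(bar) from foo".query[Option[Int]].check.unsafePerformIO
// ✓ only the Option-typed read passes analysis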
H2 Problems
Some types reported as BINARY
Some types are reported as being binary blobs (demanding an Array[Byte]-based Meta instance).
Known types with this issue:
- UUID
Example:
✕ C01 ID BINARY (UUID) NOT NULL → UUID
- BINARY (UUID) is not coercible to UUID according to the JDBC specification or
any defined mapping. Fix this by changing the schema type to OTHER or
JAVAOBJECT, or the Scala type to Array[Byte].
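If changing the schema type is not an option, one hypothetical workaround is to follow the analyzer's own suggestion and read the column as Array[Byte], converting by hand (uuidFromBytes is an illustrative helper, not part of doobie, and assumes H2 returns the UUID's 16 bytes in big-endian order):

import java.nio.ByteBuffer
import java.util.UUID

// Illustrative helper: rebuild a UUID from the 16 bytes H2 hands back.
def uuidFromBytes(bytes: Array[Byte]): UUID = {
  val bb = ByteBuffer.wrap(bytes)
  new UUID(bb.getLong, bb.getLong)
}

sql"select id from fail".query[Array[Byte]].map(uuidFromBytes)

The custom Meta instance quoted later in this thread is the nicer fix.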
VARCHAR column type reported for all writes
All insert/update/modify queries report VARCHAR types for all of the columns being written to. The prepared parameter types are inferred correctly, but the columns in the SQL are reported incorrectly.
Example:
MERGE INTO fail(id, time, bool)
KEY (id)
VALUES (?, ?, ?)
✓ SQL Compiles and Typechecks
✕ P01 UUID → VARCHAR (VARCHAR)
- UUID is not coercible to VARCHAR (VARCHAR) according to the JDBC specification.
Fix this by changing the schema type to OTHER, or the Scala type to String.
✕ P02 Instant → VARCHAR (VARCHAR)
- Instant is not coercible to VARCHAR (VARCHAR) according to the JDBC
specification. Fix this by changing the schema type to TIMESTAMP, or the Scala
type to String.
✕ P03 Boolean → VARCHAR (VARCHAR)
- Boolean is not coercible to VARCHAR (VARCHAR) according to the JDBC
specification. Fix this by changing the schema type to BIT, or the Scala type to
String.
Old discussion/issue topic.
Continuing the discussion from Gitter:
// using ammonite
import $ivy.`org.tpolecat::doobie-h2-cats:0.4.1`
import $ivy.`org.tpolecat::doobie-core-cats:0.4.1`
import doobie.imports._
import doobie.h2.imports._
val transactor = DriverManagerTransactor[IOLite](
  "org.h2.Driver", "jdbc:h2:./test:test", "", ""
)
import transactor.yolo._

fr"""
  CREATE TABLE IF NOT EXISTS fail(
    id UUID PRIMARY KEY,
    time TIMESTAMP NOT NULL,
    bool BOOLEAN NOT NULL DEFAULT FALSE
  )
""".update.quick.unsafePerformIO
sql"select * from fail".query[(java.util.UUID, java.time.Instant, Boolean)].check.unsafePerformIO
/*
select * from fail
✓ SQL Compiles and Typechecks
✕ C01 ID BINARY (UUID) NOT NULL → UUID
- BINARY (UUID) is not coercible to UUID according to the JDBC specification or
any defined mapping. Fix this by changing the schema type to OTHER or
JAVAOBJECT, or the Scala type to Array[Byte].
✓ C02 TIME TIMESTAMP (TIMESTAMP) NOT NULL → Instant
✓ C03 BOOL BOOLEAN (BOOLEAN) NOT NULL → Boolean
*/
To add on to this, a MERGE INTO query exhibits strange behavior:
sql"""
MERGE INTO fail(id, time, bool)
KEY (id)
VALUES ($uuid, $t, $b)
""".update.check.unsafePerformIO
/*
MERGE INTO fail(id, time, bool)
KEY (id)
VALUES (?, ?, ?)
✓ SQL Compiles and Typechecks
✕ P01 UUID → VARCHAR (VARCHAR)
- UUID is not coercible to VARCHAR (VARCHAR) according to the JDBC specification.
Fix this by changing the schema type to OTHER, or the Scala type to String.
✕ P02 Instant → VARCHAR (VARCHAR)
- Instant is not coercible to VARCHAR (VARCHAR) according to the JDBC
specification. Fix this by changing the schema type to TIMESTAMP, or the Scala
type to String.
✕ P03 Boolean → VARCHAR (VARCHAR)
- Boolean is not coercible to VARCHAR (VARCHAR) according to the JDBC
specification. Fix this by changing the schema type to BIT, or the Scala type to
String.
*/
Interestingly, the query still works:
sql"""
MERGE INTO fail(id, time, bool)
KEY (id)
VALUES ($uuid, $t, $b)
""".update.quick.unsafePerformIO
// 1 row(s) updated
sql"select * from fail".query[(java.util.UUID, Option[java.time.Instant], Boolean)].quick.unsafePerformIO
// (dc575322-b5ff-45c5-b11d-28a912614561,Some(2017-07-20T17:10:04.371Z),true)
Looking further, it seems that .check on any MERGE INTO query sees all database types as VARCHAR.
Adding this query to the h2types.scala test suite also yields a failure:
"pass query analysis for unascribed UUID" in {
val a = sql"select random_uuid()".query[UUID].analysis.transact(xa).unsafePerformIO
a.alignmentErrors must_== Nil
}
So at least I know UUIDs are somewhat broken.
I've created a branch with failing test cases demonstrating the issues above: ~~https://github.com/Jacoby6000/doobie/tree/h2-weirdness~~
I updated the branch with a different UUID Meta instance. I've defined it like this:
implicit val UuidType =
  Meta.advanced[UUID](
    NonEmptyListOf(jdbctype.Binary),  // JDBC types to accept
    NonEmptyListOf("uuid", "UUID"),   // vendor schema type names to accept
    _.getObject(_) match {
      case null => null
      case uuidObj =>
        try uuidObj.asInstanceOf[UUID]
        catch {
          case e: ClassCastException =>
            throw InvalidObjectMapping(classOf[UUID], uuidObj.getClass)
        }
    },
    PS.setObject,
    RS.updateObject
  )
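With this instance in scope, the earlier column check should pass (a sketch, same session as above):

sql"select id from fail".query[UUID].check.unsafePerformIO
// the ✕ C01 BINARY (UUID) alignment error from above should no longer appear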
This fixes the issue with query analysis on UUID types, but still doesn't solve the merge problem. ~~With what I know about doobie, the problem with MERGE seems more related to JDBC/H2 not reporting the correct column type for merge-into columns. I'm not sure there's anything doobie can do about that.~~ Using JDBC directly, I'm able to get types to report properly.
I'm seeing the strange
[info] ✕ P02 Instant → VARCHAR (VARSTRING)
[info] - Instant is not coercible to VARCHAR (VARSTRING) according to the JDBC
[info] specification. Fix this by changing the schema type to TIMESTAMP, or the Scala
[info] type to String or Unparsed.
problem when just using an Instant in a simple >= query, noting
[info] ✓ C07 timestamp TIMESTAMP (TIMESTAMP) NOT NULL → Option[Instant]
From what I've gathered, the analyzer thinks all types in updates are VARCHAR. Queries do fine for whatever reason.
Invoking JDBC directly gives the correct metadata, so I'm not sure what's wrong here.
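For reference, a sketch of what invoking JDBC directly looks like here (reusing the connection details from the ammonite session above):

import java.sql.DriverManager

val conn = DriverManager.getConnection("jdbc:h2:./test:test", "", "")
val ps   = conn.prepareStatement("MERGE INTO fail(id, time, bool) KEY (id) VALUES (?, ?, ?)")
val md   = ps.getParameterMetaData

// Ask the driver what type it claims for each parameter; called this way,
// it reportedly returns the real types rather than VARCHAR.
(1 to md.getParameterCount).foreach { i =>
  println(s"param $i: ${md.getParameterTypeName(i)}")
}
conn.close()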
@fommil Is that in a normal insert/update, or is it a merge query?
I've gone ahead and added tests demonstrating this problem in a branch (I deleted my old fork, so the old demonstration is dead; this is a better setup anyway):
https://github.com/Jacoby6000/doobie/blob/bug/h2-updates/modules/h2/src/test/scala/doobie/h2/h2types.scala
@Jacoby6000 that's just a regular .query. According to Rob, MySQL is unable to infer the type and I need to add a coercion... but I can't figure out how to do that in MySQL. TL;DR this might be a MySQL "feature" rather than a doobie bug.
I think it's safe to close this issue. The problem is that MySQL, and apparently H2 as well, have awful JDBC drivers which report inaccurate information.
To add to the list, SQL functions will also cause NOT NULL columns to be reported as nullable columns:
SELECT id, MAX(id) as maxId from foo group by bar
maxId will be reported as a nullable column.
I'm going to update the issue's opening message to include all of the places I've found so far which have issues. Maybe I'll make a PR for the microsite adding a page on quirks in the query analyzer (quirks in JDBC/SQL, rather).
Also found a problem with H2 and the numeric type. Table:
create table contract(
  customer_id int8 references customer,
  product_id int8 references product,
  amount numeric not null
);
Error on the amount field check:
java.lang.Exception: BigDecimal is not coercible to DECIMAL (DECIMAL) according to the JDBC
specification. Expected schema type was NUMERIC.
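Presumably the same Meta.advanced trick used for UUID above would work here too, accepting DECIMAL alongside NUMERIC. A hypothetical, untested sketch in the same style:

// Hypothetical and untested: widen the accepted JDBC types for BigDecimal.
// Note this does not handle NULLs, which is fine for the NOT NULL column above.
implicit val BigDecimalType =
  Meta.advanced[BigDecimal](
    NonEmptyListOf(jdbctype.Numeric, jdbctype.Decimal),
    NonEmptyListOf("NUMERIC", "DECIMAL"),
    (rs, i) => BigDecimal(rs.getBigDecimal(i)),
    (ps, i, a) => ps.setBigDecimal(i, a.bigDecimal),
    (rs, i, a) => rs.updateBigDecimal(i, a.bigDecimal)
  )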
Yeah the query checking thing doesn't really work in general. It was an interesting experiment but I just don't think JDBC is up to the task.