mysql_fdw
please add support for binary/varbinary mysql data types
support mysql binary/varbinary data types in mysqlImportForeignSchema().
Currently mysql_fdw doesn't apply any mapping to Postgres for these types, which results in an error when running "IMPORT FOREIGN SCHEMA".
These types map quite well to the Postgres "BYTEA" type. This is a two-line change around line 1906 in mysql_fdw.c (added binary and varbinary):
appendStringInfo(&buf,
                 " SELECT"
                 " t.TABLE_NAME,"
                 " c.COLUMN_NAME,"
                 " CASE"
                 " WHEN c.DATA_TYPE = 'enum' THEN LOWER(CONCAT(c.COLUMN_NAME, '_t'))"
                 " WHEN c.DATA_TYPE = 'tinyint' THEN 'smallint'"
                 " WHEN c.DATA_TYPE = 'mediumint' THEN 'integer'"
                 " WHEN c.DATA_TYPE = 'tinyint unsigned' THEN 'smallint'"
                 " WHEN c.DATA_TYPE = 'smallint unsigned' THEN 'integer'"
                 " WHEN c.DATA_TYPE = 'mediumint unsigned' THEN 'integer'"
                 " WHEN c.DATA_TYPE = 'int unsigned' THEN 'bigint'"
                 " WHEN c.DATA_TYPE = 'bigint unsigned' THEN 'numeric(20)'"
                 " WHEN c.DATA_TYPE = 'double' THEN 'double precision'"
                 " WHEN c.DATA_TYPE = 'float' THEN 'real'"
                 " WHEN c.DATA_TYPE = 'datetime' THEN 'timestamp'"
                 " WHEN c.DATA_TYPE = 'longtext' THEN 'text'"
                 " WHEN c.DATA_TYPE = 'mediumtext' THEN 'text'"
                 " WHEN c.DATA_TYPE = 'blob' THEN 'bytea'"
                 " WHEN c.DATA_TYPE = 'mediumblob' THEN 'bytea'"
                 " WHEN c.DATA_TYPE = 'binary' THEN 'bytea'"
                 " WHEN c.DATA_TYPE = 'varbinary' THEN 'bytea'"
                 " ELSE c.DATA_TYPE"
                 " END,"
                 " c.COLUMN_TYPE,"
                 " IF(c.IS_NULLABLE = 'NO', 't', 'f'),"
                 " c.COLUMN_DEFAULT"
                 " FROM"
                 " information_schema.TABLES AS t"
                 " JOIN"
                 " information_schema.COLUMNS AS c"
                 " ON"
                 " t.TABLE_CATALOG <=> c.TABLE_CATALOG AND t.TABLE_SCHEMA <=> c.TABLE_SCHEMA AND t.TABLE_NAME <=> c.TABLE_NAME"
                 " WHERE"
                 " t.TABLE_SCHEMA = '%s'",
                 stmt->remote_schema);
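The effect of the CASE mapping can be previewed directly against MySQL's information_schema without touching the C code. This is a hand-trimmed sketch of the generated query, reduced to the two newly added branches; the schema name 'postgres' is taken from the test below:

```sql
-- Sketch: preview the type mapping mysql_fdw would generate for a schema,
-- trimmed to the two newly added cases (binary, varbinary).
SELECT c.TABLE_NAME,
       c.COLUMN_NAME,
       CASE
         WHEN c.DATA_TYPE = 'binary'    THEN 'bytea'
         WHEN c.DATA_TYPE = 'varbinary' THEN 'bytea'
         ELSE c.DATA_TYPE
       END AS mapped_pg_type
FROM information_schema.COLUMNS AS c
WHERE c.TABLE_SCHEMA = 'postgres';
```

Running this in the mysql client shows varbinary columns coming back as bytea while all unmapped types fall through unchanged via the ELSE branch.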
Here is my test (not very complete):
mysql> desc outbound_kafka_queue;
+------------+---------------------+------+-----+-------------------+----------------+
| Field | Type | Null | Key | Default | Extra |
+------------+---------------------+------+-----+-------------------+----------------+
| id | bigint(20) unsigned | NO | PRI | NULL | auto_increment |
| topic | varchar(128) | YES | | NULL | |
| item_key | varchar(128) | YES | | NULL | |
| item_body | varbinary(60000) | YES | | NULL | |
| created_at | timestamp | NO | | CURRENT_TIMESTAMP | |
+------------+---------------------+------+-----+-------------------+----------------+
5 rows in set (0.00 sec)
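For completeness, the foreign table shown below was presumably created with an import along these lines (the exact command isn't part of the original test; server, schema, and database names are inferred from the \d output and FDW options):

```sql
-- Assumed import command: pulls all tables from the MySQL database
-- 'postgres' (via server mysql_server) into local schema mysql_postgres.
IMPORT FOREIGN SCHEMA postgres
  FROM SERVER mysql_server
  INTO mysql_postgres;
```

Before the patch, this statement fails on the varbinary column; with the mapping added, it succeeds and produces the foreign table below.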
in postgres:
postgres=# \d mysql_postgres.outbound_kafka_queue
Foreign table "mysql_postgres.outbound_kafka_queue"
Column | Type | Collation | Nullable | Default | FDW options
------------+-----------------------------+-----------+----------+---------+-------------
id | bigint | | not null | |
topic | character varying(128) | | | |
item_key | character varying(128) | | | |
item_body | bytea | | | |
created_at | timestamp without time zone | | not null | |
Server: mysql_server
FDW options: (dbname 'postgres', table_name 'outbound_kafka_queue')
postgres=# select * from mysql_postgres.outbound_kafka_queue;
id | topic | item_key | item_body | created_at
----+-------+----------+------------------------------+---------------------
1 | ttt1 | key123 | \x6468666b736864666b7364666b | 2017-12-21 17:47:30
2 | ttt2 | key1234 | \x616161 | 2017-12-21 17:56:28
(2 rows)
@slava-pagerduty: You might have better chances if you create an actual pull request for the change.
@ChristophBerg, sorry if my question is silly, but how do I create a PR if I can't push my branch (I don't have permissions)?
@slava-pagerduty: You need to "Fork" the project to your GitHub account first. (Button at the top right.)
Done: https://github.com/EnterpriseDB/mysql_fdw/pull/190