gh-ost
gh-ost aborts on trying to UPDATE, errors on invalid character
v1.0.21
gh-ost with --test-on-replica failed like this:
FATAL Error 1366: Incorrect string value: '\xE1...' for column 'column_name' at row 1; query=
column_name is a latin1 VARCHAR and the value coming in the UPDATE statement contained this problematic character: á
So this may be related to the safe SQL mode settings.
The binlogs showed this:
- first, a row with the relevant id is INSERTed into the ghost table. At that moment, the data contains no \xE1 character.
- eventually, an incoming UPDATE for the original table introduces the \xE1 character. gh-ost refuses to handle it and aborts.
I attempted to gh-ost ALTER a one-row table with \xE1 already in the original table. This finished fine. So this sort of character is properly handled by gh-ost when it is INSERTing rows as part of the table-to-table copy. But when the character comes from a binlog UPDATE it is not handled properly.
So it may be possible to construct a simple testcase over a small table where a problematic character is fed through the binlog. I need to learn how to stop gh-ost at exactly the right moment; there will likely be no time for an interactive command if the table is small.
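Something like this is what I have in mind (a rough sketch with a hypothetical table and values; the point is to make the problematic byte arrive through the binlog rather than through the row copy):

CREATE TABLE t_latin1 (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  c  VARCHAR(64) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

INSERT INTO t_latin1 (c) VALUES ('plain ascii');   -- no \xE1 yet when gh-ost copies the row

-- while gh-ost is still migrating, feed the problematic character through the binlog:
UPDATE t_latin1 SET c = 'contains á (0xE1 in latin1)' WHERE id = 1;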
@mizioumt thank you.
This is strange, because this bug was fixed in 1.0.21, or at least so I thought. See https://github.com/github/gh-ost/pull/236 and https://github.com/github/gh-ost/pull/227.
gh-ost supports localtests, which emulate the exact behavior you'd expect. Let me see if UPDATEs are properly covered.
Please share show create table and the gh-ost invocation you attempted. If the table description is too long, please just share the essentials.
See https://github.com/github/gh-ost/pull/291
the table description is too long and I can't guess which parts are more essential than the others. So here's the entire thing:
*************************** 1. row ***************************
Table: Table_name
Create Table: CREATE TABLE `Table_name` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`another_column` bigint(20) unsigned NOT NULL DEFAULT '0',
`column_1` int(10) unsigned NOT NULL DEFAULT '0',
`column_2` date NOT NULL DEFAULT '0000-00-00',
`column_3` date NOT NULL DEFAULT '0000-00-00',
`column_4` int(10) unsigned DEFAULT NULL,
`column_5` int(10) unsigned DEFAULT NULL,
`column_6` varchar(80) CHARACTER SET utf8mb4 DEFAULT NULL,
`column_7` smallint(5) unsigned NOT NULL DEFAULT '0',
`column_8` double NOT NULL DEFAULT '0',
`column_9` varchar(1024) DEFAULT NULL,
`column_10` varchar(1024) DEFAULT NULL,
`column_11` varchar(1024) DEFAULT NULL,
`column_12` enum(value_list) NOT NULL DEFAULT 'a_string',
`column_13` char(3) NOT NULL DEFAULT '',
`last_change` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`column_14` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
`column_15` varchar(255) DEFAULT NULL,
`column_16` float NOT NULL DEFAULT '0',
`column_17` tinyint(3) unsigned NOT NULL DEFAULT '0',
`column_18` int(11) DEFAULT NULL,
`column_19` int(11) NOT NULL DEFAULT '0',
`column_20` double DEFAULT NULL,
`column_21` double DEFAULT NULL,
`column_22` date DEFAULT NULL,
`column_23` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
`column_24` varchar(40) DEFAULT NULL,
`column_25` char(1) DEFAULT NULL,
`COLUMN_THAT_FAILED_TO_BE_UPDATED` text,
`column_26` enum(value_list) DEFAULT NULL,
`column_27` text CHARACTER SET utf8mb4,
`column_28` text CHARACTER SET utf8mb4,
`column_29` tinyint(4) NOT NULL DEFAULT '1',
`column_30` double NOT NULL DEFAULT '0',
`column_31` varchar(512) DEFAULT NULL,
`column_32` varchar(255) DEFAULT NULL,
`column_33` tinyint(4) NOT NULL DEFAULT '1',
`column_34` int(11) DEFAULT NULL,
`column_35` tinyint(3) unsigned DEFAULT NULL,
`column_36` tinyint(3) unsigned NOT NULL DEFAULT '0',
`column_37` bigint(20) unsigned DEFAULT NULL,
`column_38` varchar(8) DEFAULT NULL,
`column_39` bigint(20) unsigned NOT NULL DEFAULT '0',
`column_40` int(10) unsigned NOT NULL DEFAULT '0',
`column_41` smallint(5) unsigned NOT NULL DEFAULT '0',
`column_42` bigint(20) unsigned NOT NULL DEFAULT '0' COMMENT 'points_to_elsewhere',
`column_43` bigint(20) unsigned NOT NULL DEFAULT '0' COMMENT 'points_to_elsewhere',
`column_44` bigint(20) unsigned NOT NULL DEFAULT '0' COMMENT 'points_to_elsewhere',
PRIMARY KEY (`id`),
KEY `column_3` (`column_3`,`column_2`),
KEY `column_12` (`column_12`),
KEY `another_column` (`another_column`),
KEY `column_29` (`column_29`),
KEY `column_14` (`column_14`),
KEY `column_1` (`column_1`),
KEY `column_2` (`column_2`,`column_3`),
KEY `column_39` (`column_39`),
KEY `last_change` (`last_change`),
KEY `column_40_2` (`column_40`,`last_change`),
KEY `column_42` (`column_42`),
KEY `column_40` (`column_40`,`column_2`)
) ENGINE=InnoDB AUTO_INCREMENT=N DEFAULT CHARSET=latin1
and gh-ost was invoked as follows:
./gh-ost-20161013142859 -conf ~/.my_gh-ost.cnf -database database_name -table Table_name
-alter " modify column_1 bigint unsigned NOT NULL DEFAULT '0', drop column column_6,
add column x tinyint not null default 1 , drop key column_2, drop key column_39,
add key k(column_39,column_1), add key l(column_3,column_12,another_column,column_2)"
-test-on-replica -host `hostname` -initially-drop-ghost-table -initially-drop-socket-file
-serve-socket-file ~/gh-ost.hr.socket -assume-rbr --default-retries 1 --debug
--verbose --stack -execute
Not saying your alter isn't valid, but OMG.
I'm curious whether this is a production DDL you must perform, or is this a random collection of modifications, or maybe this is the accumulation of migration requests over months, or...?
It is very difficult to debug with such complexity. Again, this is valid, and on my side I should try and find the problem. But on your side, you're experimenting with gh-ost; don't you feel it would be easier to try and isolate those migrations? Throwing everything out at once turns this hunt into a needle in a haystack.
the table structure is real but the ALTER is not; I'm just testing gh-ost. I think this character bug will happen for any alter, even a noop one, as long as the right UPDATE comes through the binlog, which is not a gimme. But I may be wrong. I'll try to reproduce it with something simple next week.
Is that the same setup where you don't have log_slave_updates, as in https://github.com/github/gh-ost/issues/287#issuecomment-256903837 ?
no, on this one I always had log_slave_updates. In particular, when I browsed through the binlogs for the last recurrence I saw records for the offending id for both the original and ghost tables.
FYI the sequence was:
- INSERT into dbname._TableName_gho
- UPDATE dbname.TableName -> that had the problematic character; gh-ost died
- more UPDATEs on dbname.TableName but no more on the ghost table, because gh-ost had died
Replication-wise this is a sibling of the host from #287; however, on this one the MySQL version is lower:
select @@version,@@log_slave_updates;
+------------+---------------------+
| @@version | @@log_slave_updates |
+------------+---------------------+
| 5.6.33-log | 1 |
+------------+---------------------+
I'm trying to ensure gh-ost works on 5.6 and 5.7, and also in different time zones.
I believe we've just run into the same issue while trying to run a migration on a large latin1-encoded table that contains UTF8 characters. The plan has been to use gh-ost to run migrations which will convert our tables so that they are stored and labeled correctly as UTF8. However, until we reach that state, we have many rows containing UTF8 data stored in latin1-labeled columns. That data is perfectly readable over a latin1 connection, but would become incorrectly re-converted under a UTF8 connection, rendering garbage characters.
The problem we're facing is that during a migration run ("MODIFY mycol TINYINT(1) DEFAULT '0'"), gh-ost reached one of those rows containing UTF8 data, and we observed the following:
- gh-ost correctly copied the original row to _mytable_gho
- The app updated the row, changing two non-character columns
- gh-ost picked up the change over the replication log and attempted to rewrite the entire row, UTF8 data included, and hit an encoding error:
2016-10-31 19:20:28 ERROR Error 1366: Incorrect string value: '\xEF\xBF\xBD\xC3\xA0\xC2...' for column 'mycol' at row 1; query=
update /* gh-ost `mydb`.`_mytable_gho` */ `mydb`.`_mytable_gho`
set `id`=?, <columns = ?>..., `mycol`=?
where ((`id` = ?))
; args=[64938571<...> สำหรับ "บางคน" เà¸�ิดมาเพà¸·à¹ˆà¸à¹ƒà¸«à¹‰à¹€à¸£à¸²à¸£à¸±à¸�]
The garbage text above is how it appears rendered under a UTF8 connection; under latin1 it is: สำหรับ "บางคน" เกิดมาเพื่อให้เรารัก
The hex value is: E0B8AAE0B8B3E0B8ABE0B8A3E0B8B1E0B89A2022E0B89AE0B8B2E0B887E0B884E0B8992220E0B980E0B881E0B8B4E0B894E0B8A1E0B8B2E0B980E0B89EE0B8B7E0B988E0B8ADE0B983E0B8ABE0B989E0B980E0B8A3E0B0B8B1E0B881
After that error, gh-ost kept repeating the following:
2016-10-31 19:21:20 ERROR Error 1205: Lock wait timeout exceeded; try restarting transaction; query= update /* gh-ost `mydb`.`_mytable_gho` */ `mydb`.`_mytable_gho` ... <same update as above>
Eventually, I had to kill the migration.
Here is the command I used:
gh-ost --max-load=Threads_running=7 --critical-load=Threads_running=25 --chunk-size=1000 --max-lag-millis=2000 --verbose --assume-rbr --cut-over=default --ok-to-drop-table --exact-rowcount --concurrent-rowcount --default-retries=120 --panic-flag-file=/tmp/ghost.panic.flag --postpone-cut-over-flag-file=/tmp/ghost.postpone.flag --serve-socket-file=/tmp/gh-ost.sock --initially-drop-ghost-table --throttle-control-replicas="<slave1>,<slave2>" --user=<user> --password=<pass> --host=<slave1> --database=<mydb> --table=<mytable> --alter="MODIFY <mycol> tinyint(1) DEFAULT '0'" --execute
It's also worth noting that the database and server default charset is utf8, but all apps connect to it as latin1 at this time.
@jeisen thank you for the detailed comment. I'm going to try and reproduce this; however I'm not too certain I understand the meaning of having utf8 characters in a latin1 charset text. Are you able to provide me with a single utf8 character that you store in latin1 that I can experiment with?
@jeisen I'm able to reproduce this now. I'll look into it; right now I'm unsure how to correctly solve generically.
Reproduced in https://github.com/github/gh-ost/pull/303
To give a little more context, the database is entirely latin1 for historical reasons that no longer apply. The application connects to the DB using a latin1 connection, but receives UTF8 data as user input. The app has no idea the data is in a different character set, and just sees it as a sequence of bytes, so it writes those bytes out to the DB in "latin1", because MySQL doesn't check whether the characters are valid when in latin1. So the HEX() value of this data in latin1 would be identical to what it would be if it were written over a UTF8 connection to a UTF8-labeled column.
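As a rough sketch of how such rows come into being (hypothetical table and data, not our real schema): since every byte value is a valid latin1 character, the UTF8-encoded bytes pass through unchecked.

CREATE TABLE demo_latin1 (
  id  INT PRIMARY KEY,
  txt VARCHAR(255) CHARACTER SET latin1
);

SET NAMES latin1;                                  -- the app claims latin1...
INSERT INTO demo_latin1 VALUES (1, 'สำหรับ');       -- ...but actually sends UTF8-encoded bytes
SELECT HEX(txt) FROM demo_latin1 WHERE id = 1;     -- same bytes as the UTF8 encoding of the string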
mysql> set names latin1;
Query OK, 0 rows affected (0.00 sec)
mysql> select mycol from mytable where id=1\G
*************************** 1. row ***************************
mycol: สำหรับ "บางคน" เกิดมาเพื่อให้เรารัก
1 row in set (0.00 sec)
The main problem is that if you were to connect over a UTF8 connection, MySQL will attempt to convert the "latin1" data. Instead of returning the correct string of bytes, it would send a series of garbage characters that are the result of running them through the UTF8 conversion.
mysql> set names utf8;
Query OK, 0 rows affected (0.00 sec)
mysql> select mycol from mytable where id=1\G
*************************** 1. row ***************************
mycol: สำหรับ "บางคน" เà¸à¸´à¸”มาเพื่à¸à¹ƒà¸«à¹‰à¹€à¸£à¸²à¸£à¸±à¸
1 row in set (0.00 sec)
You can see the severity of the problem by looking at the HEX() values before and after running it through CONVERT():
mysql> select hex(mycol), hex(convert(mycol using utf8)) from mytable where id=1\G
*************************** 1. row ***************************
hex(mycol): E0B8AAE0B8B3E0B8ABE0B8A3E0B8B1E0B89A2022E0B89AE0B8B2E0B887E0B884E0B8992220E0B980E0B881E0B8B4E0B894E0B8A1E0B8B2E0B980E0B89EE0B8B7E0B988E0B8ADE0B983E0B8ABE0B989E0B980E0B8A3E0B8B2E0B8A3E0B8B1E0B881
hex(convert(mycol using utf8)): C3A0C2B8C2AAC3A0C2B8C2B3C3A0C2B8C2ABC3A0C2B8C2A3C3A0C2B8C2B1C3A0C2B8C5A12022C3A0C2B8C5A1C3A0C2B8C2B2C3A0C2B8E280A1C3A0C2B8E2809EC3A0C2B8E284A22220C3A0C2B9E282ACC3A0C2B8C281C3A0C2B8C2B4C3A0C2B8E2809DC3A0C2B8C2A1C3A0C2B8C2B2C3A0C2B9E282ACC3A0C2B8C5BEC3A0C2B8C2B7C3A0C2B9CB86C3A0C2B8C2ADC3A0C2B9C692C3A0C2B8C2ABC3A0C2B9E280B0C3A0C2B9E282ACC3A0C2B8C2A3C3A0C2B8C2B2C3A0C2B8C2A3C3A0C2B8C2B1C3A0C2B8C281
1 row in set (0.00 sec)
Worse, if you try to directly run an ALTER changing the charset of the column, it would automatically run that CONVERT() step, permanently corrupting the data. The solution is that we need to force MySQL to forget its charset label by temporarily changing the VARCHAR into a VARBINARY. Once that's done, the app can start connecting via a UTF8 connection and the data will still be in the correct encoding. Then, we can do a second migration to change the VARBINARY back into a VARCHAR, this time with the right label, and we're good moving forward.
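A minimal sketch of that two-step relabeling, against the hypothetical demo_latin1 table above (not the actual migration we ran):

-- step 1: drop the charset label; VARCHAR -> VARBINARY keeps the bytes as-is
ALTER TABLE demo_latin1 MODIFY txt VARBINARY(255);
-- (the app can now switch its connections to SET NAMES utf8mb4)
-- step 2: re-label the same bytes as utf8mb4; binary -> char does not run a conversion
ALTER TABLE demo_latin1 MODIFY txt VARCHAR(255) CHARACTER SET utf8mb4;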
Note to self: We will likely want to add a note about the outcome of this to the docs (whether double encoding is a caveat, or requires a special flag, or if there's a feature to fix it).
I'm not sure how common double encoding still is, but it certainly was a problem years ago.
@mizioumt can you confirm your case is similar to @jeisen's? i.e. you have utf8 characters in your latin1 charset column? If not, these are two different issues.
w/r utf8 in a latin1 charset, I'm unsure yet what the correct handling should be. How can I know in advance what's written in the column? An option might be letting the user specify a --default-charset=latin1 flag or similar. Another option might be to attempt to copy texts as binaries, in the event a column's charset is identical before/after migration.
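To illustrate the copy-as-binary idea (a sketch; mycol here is a latin1 column holding utf8 bytes, as in the example above): a pass through binary keeps the bytes intact, while a latin1-to-utf8 conversion double-encodes them.

SELECT
  HEX(mycol)                          AS original_bytes,
  HEX(CONVERT(mycol USING binary))    AS via_binary,    -- identical to original_bytes
  HEX(CONVERT(mycol USING utf8mb4))   AS converted      -- double-encoded
FROM mytable WHERE id = 1;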
I have the code to be able to write back the binary value of a text column in the event it never changed. It succeeds for a utf8 column, succeeds for utf8 characters in latin1 columns, and fails on pure-latin1 columns.
Here's the problem: I cannot identify those cases beforehand. Even if the user submits some flag, there may still be different columns in the migrated table which behave differently, so a "fix" to one would break another. I would certainly not want the user to specify per-column configs such as --oh-and-turn-this-column-to-utf8-latin1=name_of_column.
By the time I read the column value it has already been converted to utf8 in the binary log (binlog events always write strings in utf8).
Unsure how this should be tackled.
I think my problem is kind of independent from the problem reported by @jeisen. My problem is that gh-ost refuses to handle at least one character that does belong to latin1:
E1 á 00E1 LATIN SMALL LETTER A WITH ACUTE
based on http://dev.mysql.com/doc/refman/5.7/en/charset-we-sets.html and http://www.cp1252.com/; and indeed it's part of a common (non-English, of course) name.
So I simply want this character to stay the same. The UTF considerations are, I think, beyond the scope of gh-ost. If the column is latin1 and we do not ask gh-ost to change its charset, then it should stay binary-identical to the original. If we do change the charset, or if the question is how to deal with latin1-looking characters that are really part of UTF sequences, then I could express an opinion. However, in my case the column stays latin1, so UTF does not really play a role.
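For instance, after a migration of Table_name that doesn't touch the charset, I'd expect a byte-for-byte comparison like this to come back empty (a sketch, assuming the ghost table is still around for comparison):

SELECT o.id
FROM Table_name AS o
JOIN _Table_name_gho AS g USING (id)
WHERE HEX(o.COLUMN_THAT_FAILED_TO_BE_UPDATED) <> HEX(g.COLUMN_THAT_FAILED_TO_BE_UPDATED);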
@mizioumt then I will need you to be able to reproduce this. Please consider existing tests that confirm gh-ost does indeed support latin1. There could be an edge case that I don't understand, but then again, I cannot reproduce the problem on your side.
@shlomi-noach If I were to strip away any unicode beforehand and leave only "pure" latin1 in the VARCHARs, would you consider gh-ost still safe to use on an ALTER table CHARACTER SET utf8 COLLATE utf8_unicode_ci (when the table starts as latin1)?
would you consider gh-ost still safe to use on an ALTER table CHARACTER SET utf8 COLLATE utf8_unicode_ci (when the table starts as latin1)?
Yes, this conversion is supported and tested. I strongly urge you to --test-on-replica (I don't think you have done so in the above) and feel safe about this operation before running this on master.
Unless I get good insight into how to reliably detect latin1 columns that happen to have utf8 characters inside, as opposed to latin1 columns that do not, I'm going to put the utf8-in-latin1 issue aside, as it is currently not in our focus.
If anyone else wants to pick this up, there's initial code to check/support this scenario, in the form of "not converting when reading and not converting when writing", which breaks "normal" latin1. See some messy experiments in https://github.com/github/gh-ost/pull/303
reproduces with 1.0.28, will try to find time to create a simple testcase
looks like getting a simple testcase is not as easy as I thought...
@mizioumt let's take a step back, then.
The particular character you mention, á, is explicitly tested in https://github.com/github/gh-ost/blob/master/localtests/latin1/create.sql
Do you know what the character set is for the connection that generates the UPDATE statement? Is it utf8? Is it a perl client? Other?
Answering a previous question of yours:
So I simply want this character to stay the same. The UTF considerations are, I think, beyond the scope of gh-ost. If the column is latin1 and we do not ask gh-ost to change its charset, then it should stay binary-identical to the original. If we do change the charset, or if the question is how to deal with latin1-looking characters that are really part of UTF sequences, then I could express an opinion. However, in my case the column stays latin1, so UTF does not really play a role.
As per a previous comment I made, utf8 does play a role, because strings are always utf8 in the binary log. So whether you and I like it or not, your text is converted to utf8 in the binlogs, from where gh-ost reads, and converts back to latin1.
Lastly, can you confirm once again table definition on master & replica is identical? Perhaps a character set is changed between the two?
- we do SET NAMES utf8mb4 upon connection
- the table structures are identical, the tables have DEFAULT CHARSET=latin1, the column is defined as simply 'text'
I found out a little more about the application:
- column T1.c1 is utf8mb4, column T2.c2 is latin1
- a value is inserted into T1.c1 that contains the offending character such as a with acute accent, or similar
- this value is copied to T2.c2. This is done in such a way as to trigger an SQL warning about an 'invalid character'. The result is still good latin1
- It is T2 that I'm trying (and failing) to ALTER with gh-ost.
In my attempts to reproduce this I was unable to trigger that warning, and I think this is why I have been unable to reproduce the issue in a simple way so far. So as soon as I manage to get that warning I should be able to reproduce this. A direct insert in the mysql client into a latin1 column of a value such as 'EL ESPAÑOL' does not trigger the warning.
So the warningless and warningful assignments must be replicated differently somehow.
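One guess at how to provoke that warning (a sketch against the hypothetical t_latin1 table from the earlier sketch, assuming a non-strict sql_mode; the key seems to be a character with no latin1 equivalent, unlike á or Ñ):

SET NAMES utf8mb4;
-- 'á' and 'Ñ' convert to latin1 cleanly, so no warning is raised;
-- a Thai character cannot be converted and raises warning 1366 instead
INSERT INTO t_latin1 (c) VALUES ('ส');
SHOW WARNINGS;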
@mizioumt allow me to break down and reorder your own comment, to confirm I understand it correctly:
- You are running gh-ost on table T2, which has column c2 ... CHARACTER SET latin1
- gh-ost exits with an error when you try to copy data from T1 to T2, where the data comes from a utf8mb4 column, using a direct SQL statement such as:
UPDATE T1, T2 SET T2.c2 = T1.c1 WHERE <something>
Did I get this right?
well on the second point I do not know how that copy is implemented exactly, but logically this is correct. This chain has ROW replication throughout, so it's difficult to get the exact SQL involved. But it's not impossible; I just need to find time to use sources other than the binlogs. I'm a bit at a loss as to how this error/warning can show up on a latin1 column, though, if latin1 covers all 0-255 single-byte values.
@mizioumt if the scenario is roughly as I describe, then I think it is similar to the case presented by @jeisen. You are copying utf8mb4 values onto latin1 and I suspect there's double encoding there.
I will potentially not fix this; but let me see how easy it is for me to build a test case for this next week.