Unique key lock not consistent when row is changed or not
Bug Report
Please answer these questions before submitting your issue. Thanks!
1. Minimal reproduce step (Required)
mysql> show create table t;
+-------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Table | Create Table |
+-------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| t | CREATE TABLE `t` (
`i` varchar(20) NOT NULL,
PRIMARY KEY (`i`) /*T![clustered_index] NONCLUSTERED */
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin |
+-------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)
mysql> insert into t values ('a');
Query OK, 1 row affected (0.00 sec)
mysql> begin;
Query OK, 0 rows affected (0.00 sec)
mysql> select * from t;
+---+
| i |
+---+
| a |
+---+
1 row in set (0.00 sec)
mysql> update t set i = 'a'; -- insert into t values ('a') in another session
Query OK, 0 rows affected (0.00 sec)
Rows matched: 1 Changed: 0 Warnings: 0
mysql> update t set i = 'b'; -- insert into t values ('b') in another session
Query OK, 1 row affected (0.00 sec)
Rows matched: 1 Changed: 1 Warnings: 0
mysql> rollback;
Query OK, 0 rows affected (0.00 sec)
2. What did you expect to see? (Required)
The first insert in the other session should wait for the lock.
3. What did you see instead (Required)
ERROR 1062 (23000): Duplicate entry 'a' for key 'PRIMARY'
4. What is your TiDB version? (Required)
01d02568c25946684b6c4cc4cc5343f6d2068a99
It is caused by https://github.com/pingcap/tidb/blob/f0717dfe07ae4842c7731b23368e664801d2bd42/sessionctx/variable/session.go#L251: it only collects the row key, so the lock on the unique index key is missing.
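The effect can be modeled with a toy lock-table sketch (Python; the key encodings and helper names below are made up for illustration and are not TiDB internals). For a nonclustered table, each row has a separate row key and unique-index key; if the UPDATE's pessimistic lock set contains only the row key, a concurrent INSERT that locks the unique index key is never blocked and runs straight into the duplicate-entry check instead of waiting:

```python
# Toy model of pessimistic locking on a nonclustered table.
# Key prefixes "r:" (row key) and "i:" (unique index key) are hypothetical,
# not TiDB's actual key encoding.

class LockTable:
    def __init__(self):
        self.locks = set()

    def acquire(self, keys):
        # Returns the keys that are already locked (the caller would wait on them);
        # acquires all keys only if none are blocked.
        blocked = [k for k in keys if k in self.locks]
        if not blocked:
            self.locks.update(keys)
        return blocked

def update_lock_keys(row, lock_index_keys):
    # The buggy path collects only the row key; the unique index key is skipped.
    keys = [f"r:{row}"]
    if lock_index_keys:
        keys.append(f"i:{row}")  # unique index entry on column `i`
    return keys

# Buggy behavior: UPDATE t SET i = 'a' (value unchanged) locks only the row key.
locks = LockTable()
locks.acquire(update_lock_keys("a", lock_index_keys=False))
# A concurrent INSERT INTO t VALUES ('a') locks the unique index key: not blocked.
blocked = locks.acquire(["i:a"])

# Expected behavior: the update also locks the unique index key.
locks2 = LockTable()
locks2.acquire(update_lock_keys("a", lock_index_keys=True))
blocked2 = locks2.acquire(["i:a"])  # now the insert has to wait

print(blocked)   # [] -- insert proceeds and hits the duplicate-entry error
print(blocked2)  # ['i:a'] -- insert waits for the lock
```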
After this, for an execution plan like:
update uk = uk where sk = 1; -- the uk value is not changed
The plan:
Update
  SelectLock
    IndexLookUp
The unique key could be locked, but https://github.com/pingcap/tidb/pull/25730 would not take effect, so the LOCK records could accumulate and lead to performance issues when acquiring the pessimistic locks.
@zyguan We may need to find another solution that avoids the performance issues; it may be necessary to revert the fix first before that.
#25730 doesn't work here because it's not a point plan. A LOCK record will be left on the row key as well (not only the unique key) if nothing is updated, but that's usually not a problem since row values are more likely to change. Maybe we can convert LOCK records to PUT records as #25730 did; however, it seems more difficult to construct the unique key values here.
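The accumulation concern can be sketched with a toy MVCC write column for a single key (Python; the record kinds mirror TiKV's write column family conceptually, but the structure is a simplification, not TiDB/TiKV storage code). Every no-op pessimistic commit appends a LOCK record, and a reader must scan past all of them to reach the latest PUT; the convert-to-PUT idea keeps that scan short by committing the unchanged value as a PUT instead:

```python
# Toy MVCC write column for one key: newest record first.
# Record kinds "PUT"/"LOCK" are conceptual; details are simplified.

writes = []  # list of (commit_ts, kind, value)

def commit(ts, kind, value=None):
    writes.insert(0, (ts, kind, value))

def read_latest():
    # A reader skips LOCK records until it finds a PUT.
    scanned = 0
    for ts, kind, value in writes:
        scanned += 1
        if kind == "PUT":
            return value, scanned
    return None, scanned

commit(1, "PUT", "a")
for ts in range(2, 1002):        # 1000 no-op updates, each leaving a LOCK record
    commit(ts, "LOCK")
value_before, scanned_before = read_latest()

# Convert-to-PUT idea: commit the unchanged value as a PUT instead of a LOCK.
commit(1002, "PUT", "a")
value_after, scanned_after = read_latest()

print(scanned_before)  # 1001 -- every accumulated LOCK record is scanned
print(scanned_after)   # 1 -- the fresh PUT is found immediately
```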
I also noticed that #36498 doesn't work for the following case:
/* s0 */ drop table if exists t;
-- s0 >> 0 rows affected
/* s0 */ create table t (k varchar(10), v int, primary key (k), key idx(k));
-- s0 >> 0 rows affected
/* s0 */ insert into t values ('a', 10);
-- s0 >> 1 rows affected
/* s1 */ begin pessimistic;
-- s1 >> 0 rows affected
/* s1 */ update t force index(idx) set v = 11 where k = 'a';
-- s1 >> 1 rows affected
/* s2 */ insert into t values ('a', 100);
-- s2 >> E1062: Duplicate entry 'a' for key 'PRIMARY'
/* s1 */ commit;
-- s1 >> 0 rows affected
There could be cases where the accumulating LOCK records introduce performance issues; we actually found this while running some simulated benchmark tests (/cc @MyonKeminta). Compared with the missing-lock issue, the performance issue could introduce much more production risk, so I prefer not to add more locks before we solve the accumulation problem completely. Also, the missing-lock issue exists in all previous versions and does not seem to be a big problem.
Agreed.
It seems an ultimate solution to the lock accumulation problem is necessary. According to our previous research it will take a lot of effort, but considering the future, I think it is worth it.