Wrong data source is used when operating on different logical databases in a transaction
Bug Report
Within one transaction, an insert targeting a second logical database is routed to the wrong data source.
Which version of ShardingSphere did you use?
master
Which project did you use? ShardingSphere-JDBC or ShardingSphere-Proxy?
ShardingSphere-Proxy
Actual behavior
Data that should have been inserted into one data source is inserted into a different data source.
Reason analysis (if you can)
Within a single transaction, when different logical databases are operated on, the wrong data source is obtained because the physical connection is reused from cachedConnections.
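For illustration, a minimal sketch of the suspected collision, assuming the cached connections are keyed by data source name only (the class and method names here are hypothetical, not the actual proxy code): since both logical databases define a data source named ds_0, the second statement reuses the connection cached for the first one.

import java.sql.Connection;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.Map;
import javax.sql.DataSource;

final class NaiveTransactionConnectionCache {

    private final Map<String, Connection> cachedConnections = new HashMap<>();

    Connection getConnection(final String dataSourceName, final DataSource dataSource) throws SQLException {
        // Suspected flaw (illustrative only): the cache key ignores the logical database
        // name, so sharding_blob's ds_0 and sharding_db2's ds_0 share one entry.
        Connection cached = cachedConnections.get(dataSourceName);
        if (null == cached) {
            cached = dataSource.getConnection();
            cachedConnections.put(dataSourceName, cached);
        }
        return cached;
    }
}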
Steps to reproduce the behavior, such as: SQL to execute, sharding rule configuration, when the exception occurs, etc.
mysql> begin;
ERROR 2013 (HY000): Lost connection to MySQL server during query
No connection. Trying to reconnect...
Connection id: 1
Current database: *** NONE ***
Query OK, 0 rows affected (0.61 sec)
mysql> insert into sharding_blob.t_order_0 values(132, 48,12,3);
Query OK, 1 row affected (0.22 sec)
mysql> insert into sharding_db2.t_order values(122, 48,12,3);
The statement insert into sharding_db2.t_order values(122, 48, 12, 3) should have been routed to xx.xx.xx.34:3366, but it was executed against xx.xx.49:3306.
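The same steps can be run through JDBC against the proxy; the proxy endpoint and account below are placeholders for a local setup, not values taken from this report.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public final class CrossDatabaseTransactionRepro {

    public static void main(final String[] args) throws Exception {
        // Placeholder proxy endpoint and credentials.
        try (Connection connection = DriverManager.getConnection("jdbc:mysql://127.0.0.1:3307/sharding_blob", "root", "root")) {
            connection.setAutoCommit(false);
            try (Statement statement = connection.createStatement()) {
                statement.executeUpdate("insert into sharding_blob.t_order_0 values (132, 48, 12, 3)");
                // Expected target: sharding_db2's ds_0 (xx.xx.xx.34:3366); observed: xx.xx.49:3306.
                statement.executeUpdate("insert into sharding_db2.t_order values (122, 48, 12, 3)");
            }
            connection.commit();
        }
    }
}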
Example codes for reproducing this issue (such as a GitHub link).
databaseName: sharding_blob

dataSources:
  ds_0:
    url: jdbc:mysql://xx.xx.49:3306/demo_ds_0?serverTimezone=UTC&useSSL=false
    username: test
    password: 123456
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1

databaseName: sharding_db2

dataSources:
  ds_0:
    url: jdbc:mysql://xx.xx.xx.34:3366/demo_ds_0?serverTimezone=UTC&useSSL=false
    username: test
    password: 123456
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1
  ds_1:
    url: jdbc:mysql://xx.xx.xx.49:3306/demo_ds_1?serverTimezone=UTC&useSSL=false
    username: test
    password: 123456
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1

rules:
- !SHARDING
  broadcastTables:
    - t_order_9999
  tables:
    t_order:
      actualDataNodes: ds_${0..1}.t_order_${0..1}
      tableStrategy:
        standard:
          shardingColumn: order_id
          shardingAlgorithmName: t_order_inline
      keyGenerateStrategy:
        column: order_id
        keyGeneratorName: snowflake
    t_order_item:
      actualDataNodes: ds_${0..1}.t_order_item_${0..1}
      tableStrategy:
        standard:
          shardingColumn: order_id
          shardingAlgorithmName: t_order_item_inline
      keyGenerateStrategy:
        column: order_item_id
        keyGeneratorName: snowflake
  defaultDatabaseStrategy:
    standard:
      shardingColumn: user_id
      shardingAlgorithmName: database_inline
  defaultTableStrategy:
    none:
  shardingAlgorithms:
    database_inline:
      type: INLINE
      props:
        algorithm-expression: ds_${user_id % 2}
        allow-range-query-with-inline-sharding: true
    t_order_inline:
      type: INLINE
      props:
        algorithm-expression: t_order_${order_id % 2}
        allow-range-query-with-inline-sharding: true
    t_order_item_inline:
      type: INLINE
      props:
        algorithm-expression: t_order_item_${order_id % 2}
        allow-range-query-with-inline-sharding: true
  keyGenerators:
    snowflake:
      type: SNOWFLAKE
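For reference, the expected routing of the failing insert can be derived from the inline expressions above, assuming the first two columns of t_order are order_id and user_id (the report does not list the column names).

public final class ExpectedRouteCheck {

    public static void main(final String[] args) {
        long orderId = 122;
        long userId = 48;
        // database_inline: ds_${user_id % 2} -> ds_0, which in sharding_db2 points to xx.xx.xx.34:3366
        String dataSource = "ds_" + (userId % 2);
        // t_order_inline: t_order_${order_id % 2} -> t_order_0
        String table = "t_order_" + (orderId % 2);
        // Prints "ds_0.t_order_0", matching the expectation that the row belongs on xx.xx.xx.34:3366.
        System.out.println(dataSource + "." + table);
    }
}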
I want to investigate this issue.
@natehuangting Thank you for your feedback. Currently, ShardingSphere transactions only support operations within the same logical database. Transaction support across multiple logical databases requires @FlyingZC to help investigate a solution.
ShardingSphere transactions can now span multiple logical databases, but there is still a problem with handling data sources that have the same name under different logical databases.
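To make the same-name problem concrete, one possible direction is to qualify the connection cache key with the logical database name; this is only an illustrative sketch, not the actual ShardingSphere implementation or an agreed fix.

import java.sql.Connection;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.Map;
import javax.sql.DataSource;

final class QualifiedTransactionConnectionCache {

    private final Map<String, Connection> cachedConnections = new HashMap<>();

    Connection getConnection(final String databaseName, final String dataSourceName, final DataSource dataSource) throws SQLException {
        // Qualify the key so sharding_blob.ds_0 and sharding_db2.ds_0 stay separate within one transaction.
        String key = databaseName + "." + dataSourceName;
        Connection cached = cachedConnections.get(key);
        if (null == cached) {
            cached = dataSource.getConnection();
            cachedConnections.put(key, cached);
        }
        return cached;
    }
}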