XA transaction exception


Environment:

Server version: 5.7.22-ShardingSphere-Proxy 5.1.2-SNAPSHOT-dirty-0f6fba9 Source distribution
Version 5.1.1 also fails, though with a different error.

Scenario / problem:

I have two logical databases, sharding_db and sharding_db2:
sharding_db maps to the real databases ds_1 and ds_2
sharding_db2 maps to the real databases ds_1 and ds_2

sharding_db and sharding_db2 are configured identically.

Two proxy connections:
connection 1: works as expected
mysql> use sharding_db;

Database changed
mysql> begin;
Query OK, 0 rows affected (0.06 sec)
mysql> Insert into tbl_db_migrate_record(id, msg3, msg, record) values (146, '146', 'test', '[1,2,3]'),(139, '139', 'test', '[1,2,3]');
ERROR 1062 (23000): Duplicate entry '146' for key 'PRIMARY'
mysql> rollback;
Query OK, 0 rows affected (0.21 sec)

connection 2: runtime exception
mysql> use sharding_db2;

Database changed

mysql> begin;

Query OK, 0 rows affected (0.01 sec)

mysql> Insert into tbl_db_migrate_record(id, msg3, msg, record) values (146, '146', 'test', '[1,2,3]'),(139, '139', 'test', '[1,2,3]');

ERROR 1997 (C1997): Runtime exception: [null]

Current behavior:

java.lang.NullPointerException: null
at org.apache.shardingsphere.transaction.xa.XAShardingSphereTransactionManager.getConnection(XAShardingSphereTransactionManager.java:80)
at org.apache.shardingsphere.proxy.backend.communication.jdbc.datasource.JDBCBackendDataSource.createConnection(JDBCBackendDataSource.java:114)
at org.apache.shardingsphere.proxy.backend.communication.jdbc.datasource.JDBCBackendDataSource.getConnections(JDBCBackendDataSource.java:81)
at org.apache.shardingsphere.proxy.backend.communication.jdbc.datasource.JDBCBackendDataSource.getConnections(JDBCBackendDataSource.java:55)
at org.apache.shardingsphere.proxy.backend.communication.jdbc.connection.JDBCBackendConnection.createNewConnections(JDBCBackendConnection.java:101)
at org.apache.shardingsphere.proxy.backend.communication.jdbc.connection.JDBCBackendConnection.getConnections(JDBCBackendConnection.java:91)
at org.apache.shardingsphere.infra.executor.sql.prepare.driver.DriverExecutionPrepareEngine.group(DriverExecutionPrepareEngine.java:88)
at org.apache.shardingsphere.infra.executor.sql.prepare.AbstractExecutionPrepareEngine.prepare(AbstractExecutionPrepareEngine.java:68)
at org.apache.shardingsphere.proxy.backend.communication.ProxySQLExecutor.useDriverToExecute(ProxySQLExecutor.java:158)
at org.apache.shardingsphere.proxy.backend.communication.ProxySQLExecutor.execute(ProxySQLExecutor.java:125)
at org.apache.shardingsphere.proxy.backend.communication.ProxySQLExecutor.execute(ProxySQLExecutor.java:119)
at org.apache.shardingsphere.proxy.backend.communication.jdbc.JDBCDatabaseCommunicationEngine.execute(JDBCDatabaseCommunicationEngine.java:144)
at org.apache.shardingsphere.proxy.backend.communication.jdbc.JDBCDatabaseCommunicationEngine.execute(JDBCDatabaseCommunicationEngine.java:74)
at org.apache.shardingsphere.proxy.backend.text.data.impl.SchemaAssignedDatabaseBackendHandler.execute(SchemaAssignedDatabaseBackendHandler.java:56)
at org.apache.shardingsphere.proxy.frontend.mysql.command.query.text.query.MySQLComQueryPacketExecutor.execute(MySQLComQueryPacketExecutor.java:97)
at org.apache.shardingsphere.proxy.frontend.command.CommandExecutorTask.executeCommand(CommandExecutorTask.java:107)
at org.apache.shardingsphere.proxy.frontend.command.CommandExecutorTask.run(CommandExecutorTask.java:77)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

Is connection 1's result expected?

Were these two logical databases configured via config files or via DistSQL?

Yes, connection 1's result is expected.

They were configured via config files.

XA transactions are supposed to guarantee data consistency for a logical database, and that should hold for every logical database. Right now, though, it seems to work for only one logical database while the others fail. None of the statements here cross logical databases.


I don't understand why the engine here is a singleton.
After contextManagerInitializedCallback runs, transactionManager is cleared.

That clearing does not happen on every failed connection; the singleton stores data sources, not connections. It should work fine in Narayana mode — are you using Narayana mode?

I also tried Atomikos, and it works too. Do you have any other special configuration?

XA mode



The main problem is that in cachedatasource the cached entry for each datasourceName is the same across logical databases; I don't think that is reasonable.

If the two logical schemas are configured identically, the entries in cachedatasource will indeed be identical. If you think that is unreasonable, do you have another idea?
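The cache-sharing behavior discussed above can be sketched as follows (hypothetical class and method names, not ShardingSphere's actual implementation): if XA data sources are cached in a single map keyed only by the physical data source name, every logical database shares one set of entries, so entries with the same name are overwritten and a lookup for a name that was never registered returns null — which would surface later as the NullPointerException in getConnection.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a single cache keyed only by physical data source
// name, shared by all logical databases (String stands in for XADataSource).
final class XaDataSourceCache {
    private final Map<String, String> cached = new HashMap<>();

    void register(String dataSourceName, String xaDataSource) {
        // A second logical database registering the same name overwrites the first.
        cached.put(dataSourceName, xaDataSource);
    }

    String get(String dataSourceName) {
        // Returns null for names never registered -> NPE downstream.
        return cached.get(dataSourceName);
    }
}

public class CacheCollisionDemo {
    public static void main(String[] args) {
        XaDataSourceCache cache = new XaDataSourceCache();
        // Only sharding_db's data sources get registered:
        cache.register("ds_0", "xa-ds_0");
        cache.register("ds_1", "xa-ds_1");
        // A statement routed for sharding_db2 looks up dss_0 and gets nothing back:
        System.out.println(cache.get("dss_0")); // prints "null"
    }
}
```

This only illustrates the failure shape being described in the thread; the real cache may be keyed differently.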

schemaName: sharding_db contains ds_0, ds_1, ds_2, shadow_ds_0 and shadow_ds_1
sharding_db2 contains only ds_0 and ds_1

That is the server.yaml configuration — what does your config-sharding.yaml look like?

schemaName: sharding_db2
dataSources:
  dss_0:
    
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1
  dss_1:
    

    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1

rules:
  - !SHARDING
    tables:
      tbl_db_migrate_record:
        actualDataNodes: dss_${0..1}.tbl_db_migrate_record
        keyGenerateStrategy:
          column: id
          keyGeneratorName: snowflake
      t_order:
        actualDataNodes: ds_${0..1}.t_order_${0..1}
        tableStrategy:
          standard:
            shardingColumn: order_id
            shardingAlgorithmName: t_order_inline
        keyGenerateStrategy:
          column: order_id
          keyGeneratorName: snowflake
      t_order_item:
        actualDataNodes: ds_${0..1}.t_order_item_${0..1}
        tableStrategy:
          standard:
            shardingColumn: order_id
            shardingAlgorithmName: t_order_item_inline
        keyGenerateStrategy:
          column: order_item_id
          keyGeneratorName: snowflake
    bindingTables:
      - t_order,t_order_item
    defaultDatabaseStrategy:
      standard:
        shardingColumn: id
        shardingAlgorithmName: database_inline
    defaultTableStrategy:
      none:

    shardingAlgorithms:
      database_inline:
        type: INLINE
        props:
          algorithm-expression: dss_${id % 2}
      t_order_inline:
        type: INLINE
        props:
          algorithm-expression: t_order_${order_id % 2}
      t_order_item_inline:
        type: INLINE
        props:
          algorithm-expression: t_order_item_${order_id % 2}

    keyGenerators:
      snowflake:
        type: SNOWFLAKE

    scalingName: default_scaling
    scaling:
      default_scaling:
        input:
          workerThread: 40
          batchSize: 1000
        output:
          workerThread: 40
          batchSize: 1000
        streamChannel:
          type: MEMORY
          props:
            block-queue-size: 10000
        completionDetector:
          type: IDLE
          props:
            incremental-task-idle-minute-threshold: 30
        dataConsistencyChecker:
          type: DATA_MATCH
          props:
            chunk-size: 1000

schemaName: sharding_db
dataSources:
  ds_0:


    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1
  ds_1:


    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1

  ds_2:


    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1

  shadow_ds_0:


    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1

  shadow_ds_1:


    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1
rules:
  - !READWRITE_SPLITTING
    dataSources:
      rw_ds_0:
        type: Static
        props:
          write-data-source-name: ds_0
          read-data-source-names: shadow_ds_0
      rw_ds_1:
        type: Static
        props:
          write-data-source-name: ds_1
          read-data-source-names: shadow_ds_1
  - !SHARDING
    tables:
      tbl_db_migrate_record:
        actualDataNodes: rw_ds_${0..1}.tbl_db_migrate_record, ds_2.tbl_db_migrate_record
        #      actualDataNodes: ds_${0..1}.tbl_db_migrate_record
        keyGenerateStrategy:
          column: id
          keyGeneratorName: snowflake
        databaseStrategy:
          standard:
            shardingColumn: id
            shardingAlgorithmName: database_inline
        #        hint:
        #          shardingAlgorithmName: database-defaults
        tableStrategy:
          standard:
            shardingColumn: id
            shardingAlgorithmName: database_inline2
      t_order:
        actualDataNodes: ds_${0..1}.t_order_${0..1}
        tableStrategy:
          standard:
            shardingColumn: order_id
            shardingAlgorithmName: t_order_inline
        keyGenerateStrategy:
          column: order_id
          keyGeneratorName: snowflake
      t_order_item:
        actualDataNodes: ds_${0..1}.t_order_item_${0..1}
        tableStrategy:
          standard:
            shardingColumn: order_id
            shardingAlgorithmName: t_order_item_inline
        keyGenerateStrategy:
          column: order_item_id
          keyGeneratorName: snowflake

    bindingTables:
      - t_order,t_order_item
    defaultDatabaseStrategy:
      standard:
        shardingColumn: id
        shardingAlgorithmName: database_inline
    defaultTableStrategy:
      none:

    shardingAlgorithms:
      database_inline:
        type: INLINE
        props:
          algorithm-expression: rw_ds_${id % 2}
          allow-range-query-with-inline-sharding: true
      database_inline2:
        type: INLINE
        props:
          algorithm-expression: tbl_db_migrate_record
          allow-range-query-with-inline-sharding: true
      t_order_inline:
        type: INLINE
        props:
          algorithm-expression: t_order_${order_id % 2}
      t_order_item_inline:
        type: INLINE
        props:
          algorithm-expression: t_order_item_${order_id % 2}
      database-defaults:
        type: HINT_INLINE
        props:
          algorithm-expression: ${value}
    keyGenerators:
      snowflake:
        type: SNOWFLAKE

    scalingName: default_scaling
    scaling:
      default_scaling:
        input:
          workerThread: 40
          batchSize: 1000
        output:
          workerThread: 40
          batchSize: 1000
        streamChannel:
          type: MEMORY
          props:
            block-queue-size: 10000
        completionDetector:
          type: IDLE
          props:
            incremental-task-idle-minute-threshold: 30
        dataConsistencyChecker:
          type: DATA_MATCH
          props:
            chunk-size: 1000

sharding_db2 previously used ds; I renamed them to dss to tell the two apart, yet sharding_db2 still contains ds1 and ds2. Not only that, the system databases contain them as well.

At the NPE site, what is the datasourceName? Is it one of sharding_db / sharding_db2?

Yes. Whichever database you select with use db, the engine for that datasourceName is fetched from the transactionManager.

Does cachedatasource contain only sharding_db's entries?


There is a problem here in the latest version; I changed it from getServiceInstance to newServiceInstance.
The official 5.1.1 release is fine here.
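The getServiceInstance vs. newServiceInstance difference mentioned above can be sketched like this (illustrative names, not the real SPI classes): a cached singleton shares mutable state across every caller, so a callback that clears state through one reference — such as the contextManagerInitializedCallback clearing transactionManager — breaks all other users, while handing out a fresh instance per caller keeps them isolated.

```java
// Hypothetical engine with mutable state, standing in for the cached
// transaction engine discussed in the thread.
final class TransactionEngine {
    private String manager = "xa-manager";
    void clear() { manager = null; }        // e.g. a re-initialization callback
    String getManager() { return manager; } // null here -> NPE for other callers
}

final class EngineRegistry {
    private static final TransactionEngine SINGLETON = new TransactionEngine();

    // "getServiceInstance" style: every caller shares one instance.
    static TransactionEngine getServiceInstance() { return SINGLETON; }

    // "newServiceInstance" style: each caller gets its own instance.
    static TransactionEngine newServiceInstance() { return new TransactionEngine(); }
}

public class SingletonVsNewDemo {
    public static void main(String[] args) {
        TransactionEngine a = EngineRegistry.getServiceInstance();
        TransactionEngine b = EngineRegistry.getServiceInstance();
        a.clear();                          // callback clears the shared state
        System.out.println(b.getManager()); // prints "null" - b is the same object

        TransactionEngine c = EngineRegistry.newServiceInstance();
        TransactionEngine d = EngineRegistry.newServiceInstance();
        c.clear();
        System.out.println(d.getManager()); // prints "xa-manager" - d is isolated
    }
}
```

Whether switching to per-caller instances is the right fix for ShardingSphere is for the maintainers to judge; this only shows the state-sharing hazard being reported.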
