Questions on migrating a single instance to Proxy

Another issue to report: during both the full and the incremental migration, the data consistency check shows problems (a sysbench workload was running against the source throughout the full and incremental copy):

Nothing obviously wrong shows up in the logs either:

Table structure:
CREATE TABLE sbtest3 (
id int(11) NOT NULL AUTO_INCREMENT,
k int(11) NOT NULL DEFAULT '0',
c char(120) NOT NULL DEFAULT '',
pad char(60) NOT NULL DEFAULT '',
PRIMARY KEY (id),
KEY k_3_sbtest3 (k)
) ENGINE=InnoDB AUTO_INCREMENT=1000001 DEFAULT CHARSET=utf8mb4

Sharding key: column k was chosen as the sharding key. The steps were as follows:

On the upstream MySQL, writes were stopped for a while, then the following command was executed; the data consistency check reported a mismatch.
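The exact command is not shown above. As an illustrative sketch only (the resource names ds_0..ds_3, the hash_mod algorithm, and the statements below are assumptions, not necessarily what was run here), a 5.1.x DistSQL setup and check of this kind looks roughly like:

-- Hypothetical sketch: auto sharding rule with column k as the sharding key across 4 resources.
CREATE SHARDING TABLE RULE sbtest3 (
RESOURCES(ds_0, ds_1, ds_2, ds_3),
SHARDING_COLUMN=k,
TYPE(NAME=hash_mod, PROPERTIES("sharding-count"=4))
);
-- List scaling jobs and trigger the CRC32 consistency check for a job (job id taken from the log further below).
SHOW SCALING LIST;
CHECK SCALING 94091363582736229 BY TYPE (NAME=CRC32_MATCH);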

Is the version under test 5.1.0? That version does not have an actual stop-write capability yet, so the write stop has to be guaranteed manually. You could try the new 5.1.1 release.

Do you mean that 5.1.0 can only migrate when there is no write traffic at all?
Writes only happened during the migration itself. The job status was EXECUTE_INCREMENTAL_TASK and I had already stopped writing on the upstream, so after waiting a while the incremental data should have caught up, right? But what I actually see is that the target side has even more rows than MySQL, which is strange.

Writes only need to be stopped while the data consistency check runs and while the metadata is switched; at any other time writing is fine.
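As a rough sketch of that flow on 5.1.1 (illustrative only; the statement names below should be confirmed against the 5.1.1 DistSQL reference):

-- Block writes on the source right before the final check (available from 5.1.1).
STOP SCALING SOURCE WRITING 94091363582736229;
-- Run the consistency check while writes are stopped.
CHECK SCALING 94091363582736229;
-- After the check passes, switch the metadata over to the new sharding rule.
APPLY SCALING 94091363582736229;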

You can manually query the row counts of the tables, and check whether the individual sharded tables on the target side contain duplicate primary-key records.
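For example (the schema and physical table names below, ds_0.sbtest3_0 and so on, are assumptions; substitute the actual shard databases and table names from your configuration):

-- On the upstream MySQL: source row count.
SELECT COUNT(*) FROM sbtest3;
-- On each shard database: per-shard row count; the per-shard counts should add up to the source count.
SELECT COUNT(*) FROM sbtest3_0;
-- If the shard schemas happen to sit on the same MySQL server, ids that were routed to two
-- different shards (i.e. conflicting records) can be spotted with a cross-schema join:
SELECT a.id FROM ds_0.sbtest3_0 a JOIN ds_1.sbtest3_1 b ON a.id = b.id LIMIT 10;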

There are indeed a lot of conflicting records between different shards. How should this be handled? Will 5.1.1 solve this problem?

(screenshot attached)

Is there any surrounding context for this error log?

This is probably still a configuration problem.
I have tested with sysbench before (generating the data one table at a time; when multiple tables are generated, primary keys repeat across tables, which does not match the data sharding logic), and the migration runs through fine.

Two issues:

  1. Consistency check issue on version 5.1.0:

Error log (currently on version 5.1.0). The upstream is just a single MySQL database. The check passes after the full data has been imported into the four-shard Proxy, but once the incremental sync starts the check no longer matches:
[INFO ] 2022-04-22 11:57:03.206 [ShardingSphere-Scaling-execute-5] o.a.s.d.p.s.r.RuleAlteredJobScheduler - onSuccess, all inventory tasks finished.
[INFO ] 2022-04-22 11:57:03.206 [ShardingSphere-Scaling-execute-5] o.a.s.d.p.s.r.RuleAlteredJobScheduler - -------------- Start incremental task --------------
[INFO ] 2022-04-22 11:57:03.208 [ShardingSphere-Scaling-execute-0] o.a.s.d.p.a.e.AbstractLifecycleExecutor - start lifecycle executor: org.apache.shardingsphere.data.pipeline.core.task.IncrementalTask@42c9ad44
[INFO ] 2022-04-22 11:57:03.209 [ShardingSphere-Scaling-execute-1] o.a.s.d.p.a.e.AbstractLifecycleExecutor - start lifecycle executor: org.apache.shardingsphere.data.pipeline.mysql.importer.MySQLImporter@4b5cf8b1
[INFO ] 2022-04-22 11:57:03.209 [ShardingSphere-Scaling-execute-1] o.a.s.d.p.c.i.AbstractImporter - importer write
[INFO ] 2022-04-22 11:57:03.209 [ShardingSphere-Scaling-execute-2] o.a.s.d.p.a.e.AbstractLifecycleExecutor - start lifecycle executor: org.apache.shardingsphere.data.pipeline.mysql.importer.MySQLImporter@5d89eb42
[INFO ] 2022-04-22 11:57:03.209 [ShardingSphere-Scaling-execute-2] o.a.s.d.p.c.i.AbstractImporter - importer write
[INFO ] 2022-04-22 11:57:03.210 [ShardingSphere-Scaling-execute-3] o.a.s.d.p.a.e.AbstractLifecycleExecutor - start lifecycle executor: org.apache.shardingsphere.data.pipeline.mysql.importer.MySQLImporter@69d82600
[INFO ] 2022-04-22 11:57:03.210 [ShardingSphere-Scaling-execute-3] o.a.s.d.p.c.i.AbstractImporter - importer write
[INFO ] 2022-04-22 11:57:03.215 [ShardingSphere-Scaling-execute-0] o.a.s.d.p.a.e.AbstractLifecycleExecutor - start lifecycle executor: org.apache.shardingsphere.data.pipeline.mysql.ingest.MySQLIncrementalDumper@458ba40e
[INFO ] 2022-04-22 11:57:03.215 [ShardingSphere-Scaling-execute-0] o.a.s.d.p.m.i.MySQLIncrementalDumper - incremental dump, jdbcUrl=jdbc:mysql://10.90.249.77:3306/sbtest?yearIsDateType=false&serverTimezone=UTC&useSSL=false
[INFO ] 2022-04-22 11:57:52.604 [ShardingSphere-Command-1] o.a.s.d.p.c.a.i.RuleAlteredJobAPIImpl - getProgress for job 94091363582736229
[ERROR] 2022-04-22 11:58:00.003 [_finished_check_Worker-1] org.quartz.core.JobRunShell - Job DEFAULT._finished_check threw an unhandled Exception:
java.lang.NullPointerException: null
[ERROR] 2022-04-22 11:58:00.003 [_finished_check_Worker-1] org.quartz.core.ErrorLogger - Job (DEFAULT._finished_check threw an exception.
org.quartz.SchedulerException: Job threw an unhandled exception.
at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)
Caused by: java.lang.NullPointerException: null
[ERROR] 2022-04-22 11:59:00.003 [_finished_check_Worker-1] org.quartz.core.JobRunShell - Job DEFAULT._finished_check threw an unhandled Exception:
java.lang.NullPointerException: null
[ERROR] 2022-04-22 11:59:00.003 [_finished_check_Worker-1] org.quartz.core.ErrorLogger - Job (DEFAULT._finished_check threw an exception.
org.quartz.SchedulerException: Job threw an unhandled exception.
at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)
Caused by: java.lang.NullPointerException: null
[INFO ] 2022-04-22 11:59:04.155 [ShardingSphere-Command-2] o.a.s.d.p.c.a.i.RuleAlteredJobAPIImpl - Data consistency check for job 94091363582736229, algorithmType: CRC32_MATCH
[INFO ] 2022-04-22 11:59:10.938 [ShardingSphere-Command-2] o.a.s.d.p.c.a.i.RuleAlteredJobAPIImpl - Scaling job 94091363582736229 with check algorithm 'org.apache.shardingsphere.data.pipeline.core.spi.check.consistency.CRC32MatchDataConsistencyCheckAlgorithm' data consistency checker result {sbtest1=DataConsistencyCheckResult(sourceRecordsCount=1000000, targetRecordsCount=1000000, recordsCountMatched=true, recordsContentMatched=true), sbtest2=DataConsistencyCheckResult(sourceRecordsCount=1000000, targetRecordsCount=1000000, recordsCountMatched=true, recordsContentMatched=true), sbtest3=DataConsistencyCheckResult(sourceRecordsCount=1000000, targetRecordsCount=1000000, recordsCountMatched=true, recordsContentMatched=true)}
[INFO ] 2022-04-22 11:59:10.938 [ShardingSphere-Command-2] o.a.s.d.p.c.a.i.GovernanceRepositoryAPIImpl - persist job check result 'true' for job 94091363582736229
[ERROR] 2022-04-22 12:00:00.003 [_finished_check_Worker-1] org.quartz.core.JobRunShell - Job DEFAULT._finished_check threw an unhandled Exception:
java.lang.NullPointerException: null
[ERROR] 2022-04-22 12:00:00.003 [_finished_check_Worker-1] org.quartz.core.ErrorLogger - Job (DEFAULT._finished_check threw an exception.
org.quartz.SchedulerException: Job threw an unhandled exception.
at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)
Caused by: java.lang.NullPointerException: null
[INFO ] 2022-04-22 12:00:14.492 [ShardingSphere-Command-3] o.a.s.d.p.c.a.i.RuleAlteredJobAPIImpl - getProgress for job 94091363582736229
[INFO ] 2022-04-22 12:00:15.978 [ShardingSphere-Command-3] o.a.s.d.p.c.a.i.RuleAlteredJobAPIImpl - getProgress for job 94091363582736229
[INFO ] 2022-04-22 12:00:22.598 [ShardingSphere-Command-3] o.a.s.d.p.c.a.i.RuleAlteredJobAPIImpl - Data consistency check for job 94091363582736229, algorithmType: CRC32_MATCH
[INFO ] 2022-04-22 12:00:23.415 [ShardingSphere-Command-3] o.a.s.d.p.c.a.i.RuleAlteredJobAPIImpl - Scaling job 94091363582736229 with check algorithm 'org.apache.shardingsphere.data.pipeline.core.spi.check.consistency.CRC32MatchDataConsistencyCheckAlgorithm' data consistency checker result {sbtest1=DataConsistencyCheckResult(sourceRecordsCount=1000538, targetRecordsCount=1001204, recordsCountMatched=false, recordsContentMatched=false), sbtest2=DataConsistencyCheckResult(sourceRecordsCount=1000550, targetRecordsCount=1001276, recordsCountMatched=false, recordsContentMatched=false), sbtest3=DataConsistencyCheckResult(sourceRecordsCount=1000548, targetRecordsCount=1001261, recordsCountMatched=false, recordsContentMatched=false)}
[ERROR] 2022-04-22 12:00:23.416 [ShardingSphere-Command-3] o.a.s.d.p.c.a.i.RuleAlteredJobAPIImpl - Scaling job: 94091363582736229, table: sbtest1 data consistency check failed, recordsContentMatched: false, recordsCountMatched: false
[INFO ] 2022-04-22 12:00:23.416 [ShardingSphere-Command-3] o.a.s.d.p.c.a.i.GovernanceRepositoryAPIImpl - persist job check result 'false' for job 94091363582736229
[INFO ] 2022-04-22 12:00:34.894 [ShardingSphere-Command-3] o.a.s.d.p.c.a.i.RuleAlteredJobAPIImpl - Data consistency check for job 94091363582736229, algorithmType: CRC32_MATCH
[INFO ] 2022-04-22 12:00:36.073 [ShardingSphere-Command-3] o.a.s.d.p.c.a.i.RuleAlteredJobAPIImpl - Scaling job 94091363582736229 with check algorithm 'org.apache.shardingsphere.data.pipeline.core.spi.check.consistency.CRC32MatchDataConsistencyCheckAlgorithm' data consistency checker result {sbtest1=DataConsistencyCheckResult(sourceRecordsCount=1000772, targetRecordsCount=1001885, recordsCountMatched=false, recordsContentMatched=false), sbtest2=DataConsistencyCheckResult(sourceRecordsCount=1000794, targetRecordsCount=1001960, recordsCountMatched=false, recordsContentMatched=false), sbtest3=DataConsistencyCheckResult(sourceRecordsCount=1000802, targetRecordsCount=1001978, recordsCountMatched=false, recordsContentMatched=false)}
[ERROR] 2022-04-22 12:00:36.073 [ShardingSphere-Command-3] o.a.s.d.p.c.a.i.RuleAlteredJobAPIImpl - Scaling job: 94091363582736229, table: sbtest1 data consistency check failed, recordsContentMatched: false, recordsCountMatched: false
[INFO ] 2022-04-22 12:00:36.073 [ShardingSphere-Command-3] o.a.s.d.p.c.a.i.GovernanceRepositoryAPIImpl - persist job check result 'false' for job 94091363582736229
[INFO ] 2022-04-22 12:00:52.928 [ShardingSphere-Command-3] o.a.s.d.p.c.a.i.RuleAlteredJobAPIImpl - Data consistency check for job 94091363582736229, algorithmType: CRC32_MATCH
[INFO ] 2022-04-22 12:00:53.660 [ShardingSphere-Command-3] o.a.s.d.p.c.a.i.RuleAlteredJobAPIImpl - Scaling job 94091363582736229 with check algorithm 'org.apache.shardingsphere.data.pipeline.core.spi.check.consistency.CRC32MatchDataConsistencyCheckAlgorithm' data consistency checker result {sbtest1=DataConsistencyCheckResult(sourceRecordsCount=1001054, targetRecordsCount=1002633, recordsCountMatched=false, recordsContentMatched=false), sbtest2=DataConsistencyCheckResult(sourceRecordsCount=1001066, targetRecordsCount=1002637, recordsCountMatched=false, recordsContentMatched=false), sbtest3=DataConsistencyCheckResult(sourceRecordsCount=1001054, targetRecordsCount=1002659, recordsCountMatched=false, recordsContentMatched=false)}
[ERROR] 2022-04-22 12:00:53.660 [ShardingSphere-Command-3] o.a.s.d.p.c.a.i.RuleAlteredJobAPIImpl - Scaling job: 94091363582736229, table: sbtest1 data consistency check failed, recordsContentMatched: false, recordsCountMatched: false
[INFO ] 2022-04-22 12:00:53.660 [ShardingSphere-Command-3] o.a.s.d.p.c.a.i.GovernanceRepositoryAPIImpl - persist job check result 'false' for job 94091363582736229
[INFO ] 2022-04-22 12:00:56.303 [ShardingSphere-Command-3] o.a.s.d.p.c.a.i.RuleAlteredJobAPIImpl - Data consistency check for job 94091363582736229, algorithmType: CRC32_MATCH
[INFO ] 2022-04-22 12:00:56.978 [ShardingSphere-Command-3] o.a.s.d.p.c.a.i.RuleAlteredJobAPIImpl - Scaling job 94091363582736229 with check algorithm 'org.apache.shardingsphere.data.pipeline.core.spi.check.consistency.CRC32MatchDataConsistencyCheckAlgorithm' data consistency checker result {sbtest1=DataConsistencyCheckResult(sourceRecordsCount=1001054, targetRecordsCount=1002633, recordsCountMatched=false, recordsContentMatched=false), sbtest2=DataConsistencyCheckResult(sourceRecordsCount=1001066, targetRecordsCount=1002637, recordsCountMatched=false, recordsContentMatched=false), sbtest3=DataConsistencyCheckResult(sourceRecordsCount=1001054, targetRecordsCount=1002659, recordsCountMatched=false, recordsContentMatched=false)}
[ERROR] 2022-04-22 12:00:56.978 [ShardingSphere-Command-3] o.a.s.d.p.c.a.i.RuleAlteredJobAPIImpl - Scaling job: 94091363582736229, table: sbtest1 data consistency check failed, recordsContentMatched: false, recordsCountMatched: false
[INFO ] 2022-04-22 12:00:56.979 [ShardingSphere-Command-3] o.a.s.d.p.c.a.i.GovernanceRepositoryAPIImpl - persist job check result 'false' for job 94091363582736229
[ERROR] 2022-04-22 12:01:00.003 [_finished_check_Worker-1] org.quartz.core.JobRunShell - Job DEFAULT._finished_check threw an unhandled Exception:
java.lang.NullPointerException: null
[ERROR] 2022-04-22 12:01:00.003 [_finished_check_Worker-1] org.quartz.core.ErrorLogger - Job (DEFAULT._finished_check threw an exception.
org.quartz.SchedulerException: Job threw an unhandled exception.
at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)
Caused by: java.lang.NullPointerException: null
[INFO ] 2022-04-22 12:01:26.551 [ShardingSphere-Command-3] o.a.s.d.p.c.a.i.RuleAlteredJobAPIImpl - Data consistency check for job 94091363582736229, algorithmType: CRC32_MATCH
[INFO ] 2022-04-22 12:01:27.551 [ShardingSphere-Command-3] o.a.s.d.p.c.a.i.RuleAlteredJobAPIImpl - Scaling job 94091363582736229 with check algorithm 'org.apache.shardingsphere.data.pipeline.core.spi.check.consistency.CRC32MatchDataConsistencyCheckAlgorithm' data consistency checker result {sbtest1=DataConsistencyCheckResult(sourceRecordsCount=1001054, targetRecordsCount=1002633, recordsCountMatched=false, recordsContentMatched=false), sbtest2=DataConsistencyCheckResult(sourceRecordsCount=1001066, targetRecordsCount=1002637, recordsCountMatched=false, recordsContentMatched=false), sbtest3=DataConsistencyCheckResult(sourceRecordsCount=1001054, targetRecordsCount=1002659, recordsCountMatched=false, recordsContentMatched=false)}
[ERROR] 2022-04-22 12:01:27.551 [ShardingSphere-Command-3] o.a.s.d.p.c.a.i.RuleAlteredJobAPIImpl - Scaling job: 94091363582736229, table: sbtest1 data consistency check failed, recordsContentMatched: false, recordsCountMatched: false
[INFO ] 2022-04-22 12:01:27.551 [ShardingSphere-Command-3] o.a.s.d.p.c.a.i.GovernanceRepositoryAPIImpl - persist job check result 'false' for job 94091363582736229
[ERROR] 2022-04-22 12:02:00.003 [_finished_check_Worker-1] org.quartz.core.JobRunShell - Job DEFAULT._finished_check threw an unhandled Exception:
java.lang.NullPointerException: null
[ERROR] 2022-04-22 12:02:00.003 [_finished_check_Worker-1] org.quartz.core.ErrorLogger - Job (DEFAULT._finished_check threw an exception.
org.quartz.SchedulerException: Job threw an unhandled exception.
at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)
Caused by: java.lang.NullPointerException: null

  2. Binding table rule error when using version 5.1.1 (5.1.0 did not report this error):

Error log:
[ERROR] 2022-04-22 12:23:17.893 [ShardingSphere-Command-0] o.a.s.p.f.c.CommandExecutorTask - Exception occur:
java.lang.IllegalArgumentException: Invalid binding table configuration in ShardingRuleConfiguration.
at com.google.common.base.Preconditions.checkArgument(Preconditions.java:142)
at org.apache.shardingsphere.sharding.rule.ShardingRule.<init>(ShardingRule.java:125)
at org.apache.shardingsphere.sharding.rule.builder.ShardingRuleBuilder.build(ShardingRuleBuilder.java:41)
at org.apache.shardingsphere.sharding.rule.builder.ShardingRuleBuilder.build(ShardingRuleBuilder.java:35)
at org.apache.shardingsphere.infra.rule.builder.schema.SchemaRulesBuilder.buildRules(SchemaRulesBuilder.java:63)
at org.apache.shardingsphere.mode.metadata.MetaDataContextsBuilder.getSchemaRules(MetaDataContextsBuilder.java:105)
at org.apache.shardingsphere.mode.metadata.MetaDataContextsBuilder.addSchema(MetaDataContextsBuilder.java:83)
at org.apache.shardingsphere.mode.manager.ContextManager.buildChangedMetaDataContext(ContextManager.java:495)
at org.apache.shardingsphere.mode.manager.ContextManager.alterRuleConfiguration(ContextManager.java:267)
at org.apache.shardingsphere.proxy.backend.text.distsql.rdl.rule.RuleDefinitionBackendHandler.processSQLStatement(RuleDefinitionBackendHandler.java:121)
at org.apache.shardingsphere.proxy.backend.text.distsql.rdl.rule.RuleDefinitionBackendHandler.execute(RuleDefinitionBackendHandler.java:95)
at org.apache.shardingsphere.proxy.backend.text.distsql.rdl.rule.RuleDefinitionBackendHandler.execute(RuleDefinitionBackendHandler.java:60)
at org.apache.shardingsphere.proxy.backend.text.SchemaRequiredBackendHandler.execute(SchemaRequiredBackendHandler.java:51)
at org.apache.shardingsphere.proxy.frontend.mysql.command.query.text.query.MySQLComQueryPacketExecutor.execute(MySQLComQueryPacketExecutor.java:97)
at org.apache.shardingsphere.proxy.frontend.command.CommandExecutorTask.executeCommand(CommandExecutorTask.java:100)
at org.apache.shardingsphere.proxy.frontend.command.CommandExecutorTask.run(CommandExecutorTask.java:72)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
[ERROR] 2022-04-22 12:25:20.077 [ShardingSphere-Command-1] o.a.s.p.f.c.CommandExecutorTask - Exception occur:
org.apache.shardingsphere.infra.distsql.exception.rule.DuplicateRuleException: Duplicate binding rule names [sbtest2, sbtest1, sbtest3] in schema sharding_db
at org.apache.shardingsphere.sharding.distsql.handler.update.CreateShardingBindingTableRuleStatementUpdater.checkToBeCreatedDuplicateBindingTables(CreateShardingBindingTableRuleStatementUpdater.java:77)
at org.apache.shardingsphere.sharding.distsql.handler.update.CreateShardingBindingTableRuleStatementUpdater.checkSQLStatement(CreateShardingBindingTableRuleStatementUpdater.java:48)
at org.apache.shardingsphere.sharding.distsql.handler.update.CreateShardingBindingTableRuleStatementUpdater.checkSQLStatement(CreateShardingBindingTableRuleStatementUpdater.java:40)
at org.apache.shardingsphere.proxy.backend.text.distsql.rdl.rule.RuleDefinitionBackendHandler.execute(RuleDefinitionBackendHandler.java:82)
at org.apache.shardingsphere.proxy.backend.text.distsql.rdl.rule.RuleDefinitionBackendHandler.execute(RuleDefinitionBackendHandler.java:60)
at org.apache.shardingsphere.proxy.backend.text.SchemaRequiredBackendHandler.execute(SchemaRequiredBackendHandler.java:51)
at org.apache.shardingsphere.proxy.frontend.mysql.command.query.text.query.MySQLComQueryPacketExecutor.execute(MySQLComQueryPacketExecutor.java:97)
at org.apache.shardingsphere.proxy.frontend.command.CommandExecutorTask.executeCommand(CommandExecutorTask.java:100)
at org.apache.shardingsphere.proxy.frontend.command.CommandExecutorTask.run(CommandExecutorTask.java:72)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
[ERROR] 2022-04-22 12:25:44.031 [ShardingSphere-Command-1] o.a.s.p.f.c.CommandExecutorTask - Exception occur

1. This error log does not match the one described above.

2. These tables do not need to be configured as binding tables.
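For context, the check that fails here is typically triggered by a binding-table statement along the lines of the sketch below (illustrative only). Binding tables must use a consistent sharding configuration so that joined rows route to the same shard; since the sysbench tables are queried independently, the binding rule can simply be dropped:

-- Illustrative only: binds tables that are always joined on the sharding key.
CREATE SHARDING BINDING TABLE RULES (sbtest1, sbtest2, sbtest3);
-- For independent tables such as these sysbench tables, omit this statement.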

I suggest testing with your real business table structures. Testing with sysbench still has some pitfalls, and using the real schemas also makes the eventual go-live easier.

2. But the ADD RESOURCE statement executed afterwards also fails, while 5.1.0 has no problem with it:

[ERROR] 2022-04-22 14:01:40.479 [ShardingSphere-Command-2] o.a.s.p.f.c.CommandExecutorTask - Exception occur:
org.apache.shardingsphere.infra.distsql.exception.resource.InvalidResourcesException: Can not process invalid resources, error messages are: [Invalid binding table configuration in ShardingRuleConfiguration.].
at org.apache.shardingsphere.proxy.backend.text.distsql.rdl.resource.AddResourceBackendHandler.execute(AddResourceBackendHandler.java:69)
at org.apache.shardingsphere.proxy.backend.text.distsql.rdl.resource.AddResourceBackendHandler.execute(AddResourceBackendHandler.java:45)
at org.apache.shardingsphere.proxy.backend.text.SchemaRequiredBackendHandler.execute(SchemaRequiredBackendHandler.java:51)
at org.apache.shardingsphere.proxy.frontend.mysql.command.query.text.query.MySQLComQueryPacketExecutor.execute(MySQLComQueryPacketExecutor.java:97)
at org.apache.shardingsphere.proxy.frontend.command.CommandExecutorTask.executeCommand(CommandExecutorTask.java:100)
at org.apache.shardingsphere.proxy.frontend.command.CommandExecutorTask.run(CommandExecutorTask.java:72)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
[ERROR] 2022-04-22 14:01:47.806 [ShardingSphere-Command-2] o.a.s.p.f.c.CommandExecutorTask - Exception occur:
java.lang.IllegalArgumentException: Invalid binding table configuration in ShardingRuleConfiguration.
at com.google.common.base.Preconditions.checkArgument(Preconditions.java:142)
at org.apache.shardingsphere.sharding.rule.ShardingRule.<init>(ShardingRule.java:125)
at org.apache.shardingsphere.sharding.rule.builder.ShardingRuleBuilder.build(ShardingRuleBuilder.java:41)
at org.apache.shardingsphere.sharding.rule.builder.ShardingRuleBuilder.build(ShardingRuleBuilder.java:35)
at org.apache.shardingsphere.infra.rule.builder.schema.SchemaRulesBuilder.buildRules(SchemaRulesBuilder.java:63)
at org.apache.shardingsphere.mode.metadata.MetaDataContextsBuilder.getSchemaRules(MetaDataContextsBuilder.java:105)
at org.apache.shardingsphere.mode.metadata.MetaDataContextsBuilder.addSchema(MetaDataContextsBuilder.java:83)
at org.apache.shardingsphere.mode.manager.ContextManager.buildChangedMetaDataContext(ContextManager.java:495)
at org.apache.shardingsphere.mode.manager.ContextManager.alterRuleConfiguration(ContextManager.java:267)
at org.apache.shardingsphere.proxy.backend.text.distsql.rdl.rule.RuleDefinitionBackendHandler.processSQLStatement(RuleDefinitionBackendHandler.java:121)
at org.apache.shardingsphere.proxy.backend.text.distsql.rdl.rule.RuleDefinitionBackendHandler.execute(RuleDefinitionBackendHandler.java:95)
at org.apache.shardingsphere.proxy.backend.text.distsql.rdl.rule.RuleDefinitionBackendHandler.execute(RuleDefinitionBackendHandler.java:60)
at org.apache.shardingsphere.proxy.backend.text.SchemaRequiredBackendHandler.execute(SchemaRequiredBackendHandler.java:51)
at org.apache.shardingsphere.proxy.frontend.mysql.command.query.text.query.MySQLComQueryPacketExecutor.execute(MySQLComQueryPacketExecutor.java:97)
at org.apache.shardingsphere.proxy.frontend.command.CommandExecutorTask.executeCommand(CommandExecutorTask.java:100)
at org.apache.shardingsphere.proxy.frontend.command.CommandExecutorTask.run(CommandExecutorTask.java:72)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)

For the binding-table error, please open a separate issue on GitHub and include the full reproduction steps. I will ask the colleague responsible for DistSQL to take a look.

An exception occurred during the incremental phase; please take a look:

[INFO ] 2022-04-25 11:41:27.504 [ShardingSphere-Scaling-execute-0] o.a.s.d.p.a.e.AbstractLifecycleExecutor - start lifecycle executor: org.apache.shardingsphere.data.pipeline.mysql.ingest.MySQLIncrementalDumper@716d60d4
[INFO ] 2022-04-25 11:41:27.504 [ShardingSphere-Scaling-execute-0] o.a.s.d.p.m.i.MySQLIncrementalDumper - incremental dump, jdbcUrl=jdbc:mysql://10.0.0.1:3306/testaudit?serverTimezone=UTC&yearIsDateType=false&useSSL=false
[ERROR] 2022-04-25 11:41:27.706 [ShardingSphere-Scaling-execute-4] o.a.s.d.p.core.task.IncrementalTask - importer onFailure, taskId=ds_0
java.lang.NullPointerException: null
at org.apache.shardingsphere.data.pipeline.core.record.RecordUtil.extractConditionColumns(RecordUtil.java:61)
at org.apache.shardingsphere.data.pipeline.core.importer.AbstractImporter.executeUpdate(AbstractImporter.java:183)
at org.apache.shardingsphere.data.pipeline.core.importer.AbstractImporter.executeUpdate(AbstractImporter.java:178)
at org.apache.shardingsphere.data.pipeline.core.importer.AbstractImporter.doFlush(AbstractImporter.java:150)
at org.apache.shardingsphere.data.pipeline.core.importer.AbstractImporter.tryFlush(AbstractImporter.java:132)
at org.apache.shardingsphere.data.pipeline.core.importer.AbstractImporter.flushInternal(AbstractImporter.java:123)
at org.apache.shardingsphere.data.pipeline.core.importer.AbstractImporter.lambda$flush$2(AbstractImporter.java:115)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1541)
at org.apache.shardingsphere.data.pipeline.core.importer.AbstractImporter.flush(AbstractImporter.java:112)
at org.apache.shardingsphere.data.pipeline.core.importer.AbstractImporter.write(AbstractImporter.java:93)
at org.apache.shardingsphere.data.pipeline.core.importer.AbstractImporter.doStart(AbstractImporter.java:79)
at org.apache.shardingsphere.data.pipeline.api.executor.AbstractLifecycleExecutor.start(AbstractLifecycleExecutor.java:41)
at org.apache.shardingsphere.data.pipeline.api.executor.AbstractLifecycleExecutor.run(AbstractLifecycleExecutor.java:61)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69)
at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
[INFO ] 2022-04-25 11:41:27.706 [ShardingSphere-Scaling-execute-4] o.a.s.d.p.a.e.AbstractLifecycleExecutor - stop lifecycle executor: org.apache.shardingsphere.data.pipeline.core.task.IncrementalTask@594dc82
[INFO ] 2022-04-25 11:41:27.706 [ShardingSphere-Scaling-execute-4] o.a.s.d.p.a.e.AbstractLifecycleExecutor - stop lifecycle executor: org.apache.shardingsphere.data.pipeline.mysql.ingest.MySQLIncrementalDumper@716d60d4
[INFO ] 2022-04-25 11:41:27.706 [ShardingSphere-Scaling-execute-4] o.a.s.d.p.a.e.AbstractLifecycleExecutor - stop lifecycle executor: org.apache.shardingsphere.data.pipeline.mysql.importer.MySQLImporter@3e11911d
[INFO ] 2022-04-25 11:41:27.707 [ShardingSphere-Scaling-execute-4] o.a.s.d.p.a.e.AbstractLifecycleExecutor - stop lifecycle executor: org.apache.shardingsphere.data.pipeline.mysql.importer.MySQLImporter@70933d0
[INFO ] 2022-04-25 11:41:27.707 [ShardingSphere-Scaling-execute-4] o.a.s.d.p.a.e.AbstractLifecycleExecutor - stop lifecycle executor: org.apache.shardingsphere.data.pipeline.mysql.importer.MySQLImporter@3c0dd1b7
[INFO ] 2022-04-25 11:41:27.707 [ShardingSphere-Scaling-execute-0] o.a.s.d.p.m.i.MySQLIncrementalDumper - incremental dump, eventCount=8858
[ERROR] 2022-04-25 11:41:27.709 [ShardingSphere-Scaling-execute-4] o.a.s.d.p.s.r.RuleAlteredJobScheduler - Incremental task execute failed.
org.apache.shardingsphere.data.pipeline.core.exception.PipelineJobExecutionException: Task ds_0 execute failed
at org.apache.shardingsphere.data.pipeline.core.task.IncrementalTask.waitForResult(IncrementalTask.java:129)
at org.apache.shardingsphere.data.pipeline.core.task.IncrementalTask.doStart(IncrementalTask.java:86)
at org.apache.shardingsphere.data.pipeline.api.executor.AbstractLifecycleExecutor.start(AbstractLifecycleExecutor.java:41)
at org.apache.shardingsphere.data.pipeline.api.executor.AbstractLifecycleExecutor.run(AbstractLifecycleExecutor.java:61)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69)
at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.NullPointerException: null
at org.apache.shardingsphere.data.pipeline.core.record.RecordUtil.extractConditionColumns(RecordUtil.java:61)
at org.apache.shardingsphere.data.pipeline.core.importer.AbstractImporter.executeUpdate(AbstractImporter.java:183)
at org.apache.shardingsphere.data.pipeline.core.importer.AbstractImporter.executeUpdate(AbstractImporter.java:178)

Please open an issue on GitHub and include the version or commit you are using, plus the reproduction steps.

It would be best to debug it and see exactly which variable causes the NPE (in the extractConditionColumns(final DataRecord dataRecord, final Set<String> shardingColumns) method).
