Environment:
OS: macOS 12.0.1
MySQL cluster: three MySQL 5.7 instances deployed with Docker
shardingproxy: version 5.0.0, single-node deployment, using ZooKeeper as the config center
benchmarksql: version 5.0
Scenario / problem:
After running the BenchmarkSQL stress test for a while, tpmTOTAL stops changing and the run hangs at the screen shown in the screenshot.
I have actually hit the problem you describe while testing MySQL and PostgreSQL as well. I would suggest adjusting your TPCC configuration here:
// depends on your machine
terminals=10
//To run specified transactions per terminal- runMins must equal zero
runTxnsPerTerminal=0
//To run for specified minutes- runTxnsPerTerminal must equal zero
runMins=5
//Number of total transactions per minute
limitTxnsPerMin=0
20:04:53,488 [Thread-5] ERROR jTPCCTData : Unexpected SQLException in NEW_ORDER
20:04:53,488 [Thread-5] ERROR jTPCCTData : Duplicate entry '1-2-3508' for key 'PRIMARY'
com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException: Duplicate entry '1-2-3508' for key 'PRIMARY'
at sun.reflect.GeneratedConstructorAccessor7.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:425)
at com.mysql.jdbc.Util.getInstance(Util.java:408)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:936)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3978)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3914)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2530)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2683)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2495)
at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1903)
at com.mysql.jdbc.PreparedStatement.executeUpdateInternal(PreparedStatement.java:2124)
at com.mysql.jdbc.PreparedStatement.executeUpdateInternal(PreparedStatement.java:2058)
at com.mysql.jdbc.PreparedStatement.executeLargeUpdate(PreparedStatement.java:5158)
at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2043)
at jTPCCTData.executeNewOrder(jTPCCTData.java:410)
at jTPCCTData.execute(jTPCCTData.java:95)
at jTPCCTerminal.executeTransactions(jTPCCTerminal.java:261)
at jTPCCTerminal.run(jTPCCTerminal.java:88)
at java.lang.Thread.run(Thread.java:748)
Term-00, Running Average tpmTOTAL: 5670.69 Current tpmTOTAL: 75648 Memory Usage: 14MB / 383MB
It still hangs after running for a while. The BenchmarkSQL parameters are as follows:
db=mysql
driver=com.mysql.jdbc.Driver
conn=jdbc:mysql://127.0.0.1:3307/TESTDB?useSSL=false
user=root
password=root
warehouses=1
loadWorkers=4
terminals=10
//To run specified transactions per terminal- runMins must equal zero
runTxnsPerTerminal=0
//To run for specified minutes- runTxnsPerTerminal must equal zero
runMins=2
//Number of total transactions per minute
limitTxnsPerMin=0
//Set to true to run in 4.x compatible mode. Set to false to use the
//entire configured database evenly.
terminalWarehouseFixed=true
//The following five values must add up to 100
//The default percentages of 45, 43, 4, 4 & 4 match the TPC-C spec
newOrderWeight=45
paymentWeight=43
orderStatusWeight=4
deliveryWeight=4
stockLevelWeight=4
// Directory name to create for collecting detailed result data.
// Comment this out to suppress.
resultDirectory=my_result_%tY-%tm-%td_%tH%tM%tS
osCollectorScript=./misc/os_collector_linux.py
osCollectorInterval=1
//osCollectorSSHAddr=user@dbhost
osCollectorDevices=net_eth0 blk_sda
Are there any other parameters that need to be changed?
For a test run, the BenchmarkSQL parameters are not the main issue. What you need to adjust here are the proxy rules; what you should really understand is the SQL that BenchmarkSQL executes during the stress test.
Here is a configuration I have tested before; you can adapt it to your own setup:
schemaName: tpcc
dataSources:
ds_0:
dataSourceClassName: com.zaxxer.hikari.HikariDataSource
jdbcUrl: jdbc:mysql://127.0.0.1:13306/tpcc
username: root
password: root
connectionTimeoutMilliseconds: 10000
idleTimeoutMilliseconds: 60000
maxLifetimeMilliseconds: 1800000
maxPoolSize: 256
minPoolSize: 1
ds_1:
dataSourceClassName: com.zaxxer.hikari.HikariDataSource
jdbcUrl: jdbc:mysql://127.0.0.1:13307/tpcc
username: root
password: root
connectionTimeoutMilliseconds: 10000
idleTimeoutMilliseconds: 60000
maxLifetimeMilliseconds: 1800000
maxPoolSize: 256
minPoolSize: 1
ds_2:
dataSourceClassName: com.zaxxer.hikari.HikariDataSource
jdbcUrl: jdbc:mysql://127.0.0.1:13308/tpcc
username: root
password: root
connectionTimeoutMilliseconds: 10000
idleTimeoutMilliseconds: 60000
maxLifetimeMilliseconds: 1800000
maxPoolSize: 256
minPoolSize: 1
ds_3:
dataSourceClassName: com.zaxxer.hikari.HikariDataSource
jdbcUrl: jdbc:mysql://127.0.0.1:13309/tpcc
username: root
password: root
connectionTimeoutMilliseconds: 10000
idleTimeoutMilliseconds: 60000
maxLifetimeMilliseconds: 1800000
maxPoolSize: 256
minPoolSize: 1
ds_4:
dataSourceClassName: com.zaxxer.hikari.HikariDataSource
jdbcUrl: jdbc:mysql://127.0.0.1:13310/tpcc
username: root
password: root
connectionTimeoutMilliseconds: 10000
idleTimeoutMilliseconds: 60000
maxLifetimeMilliseconds: 1800000
maxPoolSize: 256
minPoolSize: 1
rules:
- !SHARDING
bindingTables:
- bmsql_warehouse, bmsql_customer
- bmsql_stock, bmsql_district, bmsql_order_line
defaultDatabaseStrategy:
none:
defaultTableStrategy:
none:
keyGenerators:
snowflake:
props:
worker-id: 123
type: SNOWFLAKE
tables:
bmsql_config:
actualDataNodes: ds_0.bmsql_config
bmsql_warehouse:
actualDataNodes: ds_${0..4}.bmsql_warehouse
databaseStrategy:
standard:
shardingColumn: w_id
shardingAlgorithmName: bmsql_warehouse_database_inline
bmsql_district:
actualDataNodes: ds_${0..4}.bmsql_district
databaseStrategy:
standard:
shardingColumn: d_w_id
shardingAlgorithmName: bmsql_district_database_inline
bmsql_customer:
actualDataNodes: ds_${0..4}.bmsql_customer
databaseStrategy:
standard:
shardingColumn: c_w_id
shardingAlgorithmName: bmsql_customer_database_inline
bmsql_item:
actualDataNodes: ds_${0..4}.bmsql_item
databaseStrategy:
standard:
shardingColumn: i_id
shardingAlgorithmName: bmsql_item_database_inline
bmsql_history:
actualDataNodes: ds_${0..4}.bmsql_history
databaseStrategy:
standard:
shardingColumn: h_w_id
shardingAlgorithmName: bmsql_history_database_inline
bmsql_oorder:
actualDataNodes: ds_${0..4}.bmsql_oorder
databaseStrategy:
standard:
shardingColumn: o_w_id
shardingAlgorithmName: bmsql_oorder_database_inline
bmsql_stock:
actualDataNodes: ds_${0..4}.bmsql_stock
databaseStrategy:
standard:
shardingColumn: s_w_id
shardingAlgorithmName: bmsql_stock_database_inline
bmsql_new_order:
actualDataNodes: ds_${0..4}.bmsql_new_order
databaseStrategy:
standard:
shardingColumn: no_w_id
shardingAlgorithmName: bmsql_new_order_database_inline
bmsql_order_line:
actualDataNodes: ds_${0..4}.bmsql_order_line
databaseStrategy:
standard:
shardingColumn: ol_w_id
shardingAlgorithmName: bmsql_order_line_database_inline
shardingAlgorithms:
bmsql_warehouse_database_inline:
type: INLINE
props:
algorithm-expression: ds_${w_id % 5}
bmsql_district_database_inline:
type: INLINE
props:
algorithm-expression: ds_${d_w_id % 5}
bmsql_customer_database_inline:
type: INLINE
props:
algorithm-expression: ds_${c_w_id % 5}
bmsql_item_database_inline:
type: INLINE
props:
algorithm-expression: ds_${i_id % 5}
bmsql_history_database_inline:
type: INLINE
props:
algorithm-expression: ds_${h_w_id % 5}
bmsql_oorder_database_inline:
type: INLINE
props:
algorithm-expression: ds_${o_w_id % 5}
bmsql_stock_database_inline:
type: INLINE
props:
algorithm-expression: ds_${s_w_id % 5}
bmsql_new_order_database_inline:
type: INLINE
props:
algorithm-expression: ds_${no_w_id % 5}
bmsql_order_line_database_inline:
type: INLINE
props:
algorithm-expression: ds_${ol_w_id % 5}
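If you want to sanity-check that the inline expressions above really spread data by warehouse id, you can list the warehouses on each backend after loading. A rough sketch (the ports 13306-13310, the tpcc schema and the root/root credentials are taken from the dataSources section above; adjust them to your environment):

# list which warehouse ids each backend holds; with ds_${w_id % 5},
# warehouse 1 should land on ds_1 (port 13307), warehouse 2 on ds_2 (port 13308), and so on
for port in 13306 13307 13308 13309 13310; do
  echo "== backend on port $port =="
  mysql -h127.0.0.1 -P"$port" -uroot -proot tpcc -e "SELECT w_id FROM bmsql_warehouse ORDER BY w_id"
done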
I also think it is not really about the BenchmarkSQL configuration. For now I only plan to use the proxy for read/write splitting and cluster liveness probing, so that I can run dual-master with a single writer. The proxy configuration at the time the problem occurred was as follows:
hostM1:
minimumIdle: 1
password: guanghua
minPoolSize: 1
idleTimeout: 60000
jdbcUrl: jdbc:mysql://localhost:13306/proxytest
readOnly: false
dataSourceClassName: com.zaxxer.hikari.HikariDataSource
maximumPoolSize: 50
connectionTimeout: 30000
maxPoolSize: 50
maxLifetime: 1800000
username: root
hostM2:
minimumIdle: 1
password: guanghua
minPoolSize: 1
idleTimeout: 60000
jdbcUrl: jdbc:mysql://localhost:13307/proxytest
readOnly: false
dataSourceClassName: com.zaxxer.hikari.HikariDataSource
maximumPoolSize: 50
connectionTimeout: 30000
maxPoolSize: 50
maxLifetime: 1800000
username: root
hostS1:
minimumIdle: 1
password: guanghua
minPoolSize: 1
idleTimeout: 60000
jdbcUrl: jdbc:mysql://localhost:13308/proxytest
readOnly: false
dataSourceClassName: com.zaxxer.hikari.HikariDataSource
maximumPoolSize: 50
connectionTimeout: 30000
maxPoolSize: 50
maxLifetime: 1800000
username: root
- !READWRITE_SPLITTING
dataSources:
group_0:
loadBalancerName: group_0_ROUND_ROBIN
readDataSourceNames:
- hostM1
- hostS1
writeDataSourceName: hostM2
loadBalancers:
group_0_ROUND_ROBIN:
type: ROUND_ROBIN
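With this layout, writes go to hostM2 while reads round-robin between hostM1 and hostS1. A duplicate key on order 3508 for warehouse 1 / district 2 suggests the order counter and the order rows are out of sync somewhere, so one thing worth checking is whether the three backends have diverged. A rough sketch, using the ports, the proxytest schema and the credentials from this configuration, and reading the 1-2-3508 in the error as warehouse 1 / district 2 / order 3508:

# compare the order counters for warehouse 1, district 2 on every backend;
# if the write node (hostM2 on 13307) holds orders the read nodes never received,
# the counter a read returns can collide with an order id that already exists
for port in 13306 13307 13308; do
  echo "== backend on port $port =="
  mysql -h127.0.0.1 -P"$port" -uroot -pguanghua proxytest -e "
    SELECT d_next_o_id FROM bmsql_district WHERE d_w_id = 1 AND d_id = 2;
    SELECT MAX(o_id) AS max_o_id FROM bmsql_oorder WHERE o_w_id = 1 AND o_d_id = 2;"
done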
This problem shows up when the sharding is wrong or the data has not been cleaned up. Try this first:
./runDatabaseDestroy.sh
./runDatabaseBuild.sh
./runBenchmark.sh YOUR_PROPERTY
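If the error returns right after a rebuild, it may also be worth confirming that the freshly loaded data is consistent before starting the benchmark: a clean TPC-C load gives every district 3,000 orders, so d_next_o_id should be 3001 everywhere. A sketch against the proxy endpoint that BenchmarkSQL itself uses (127.0.0.1:3307, TESTDB, root/root, as in the properties above):

# after runDatabaseBuild.sh every district should start at d_next_o_id = 3001;
# the second query should report 3000 orders for each district, and an empty result
# right after a successful load would mean reads are answered by a backend other
# than the one that took the writes
mysql -h127.0.0.1 -P3307 -uroot -proot TESTDB -e "
  SELECT d_w_id, d_id, d_next_o_id FROM bmsql_district WHERE d_next_o_id <> 3001;
  SELECT o_w_id, o_d_id, COUNT(*) AS orders FROM bmsql_oorder GROUP BY o_w_id, o_d_id;"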
Yes, I have already tried that; the problem appeared after the data had been cleaned up.
Understood. I will test it on my side as well, and we can compare results later.
By the way, which version of BenchmarkSQL are you using? I can find three different versions of BenchmarkSQL; I remember the one I shared in the group chat was this one:
It is that one, and it is the latest 5.0 version.
Has the cause of this problem been found yet?
A colleague will follow up with an update shortly.
OK. What does "update" refer to here: a reply to this issue, or a new release of shardingproxy that fixes a bug?
A colleague will test this scenario specifically. We cannot be sure yet, and it most likely is not a bug. The colleagues on the test side are currently busy with a release.
OK, thanks for your effort.
Is there any progress? We need to choose among several proxy middlewares (shardingproxy, mycat, and so on), and we are currently blocked on this stress test.
Based on the configuration files you provided, we only changed the database IP and password.
The run went as follows:
The address you use in benchmarksql is mysql://127.0.0.1:3307/TESTDB, but I do not see any TESTDB-related settings in the proxy configuration. Is part of the configuration missing from the post? Also, is master-slave replication enabled between hostM1 and hostM2?
Master-slave replication is not enabled. The TESTDB configuration is in conf/config-readwrite-splitting.yaml:
schemaName: TESTDB
dataSources:
hostM1:
url: jdbc:mysql://localhost:13306/proxytest?serverTimezone=UTC&useSSL=false
username: root
password: guanghua
connectionTimeoutMilliseconds: 30000
idleTimeoutMilliseconds: 60000
maxLifetimeMilliseconds: 1800000
maxPoolSize: 50
minPoolSize: 1
hostM2:
url: jdbc:mysql://localhost:13307/proxytest?serverTimezone=UTC&useSSL=false
username: root
password: guanghua
connectionTimeoutMilliseconds: 30000
idleTimeoutMilliseconds: 60000
maxLifetimeMilliseconds: 1800000
maxPoolSize: 50
minPoolSize: 1
hostS1:
url: jdbc:mysql://localhost:13308/proxytest?serverTimezone=UTC&useSSL=false
username: root
password: guanghua
connectionTimeoutMilliseconds: 30000
idleTimeoutMilliseconds: 60000
maxLifetimeMilliseconds: 1800000
maxPoolSize: 50
minPoolSize: 1
rules:
- !READWRITE_SPLITTING
dataSources:
ms_ds:
writeDataSourceName: hostM1
readDataSourceNames:
- hostM2
- hostS1
Read/write splitting requires master-slave replication to be enabled on the databases. Otherwise, the read replicas in this configuration will stay empty forever, and any read is bound to fail.
OK, so you mean shardingproxy's read/write splitting depends on the databases having master-slave replication enabled? I did not notice this issue when using other proxy middlewares.
Yes, it depends on master-slave replication at the database level. How do the other middlewares you tried handle data synchronization? Do you have any experience in this area you could share with everyone?
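For completeness, wiring the backends from config-readwrite-splitting.yaml into an ordinary MySQL 5.7 source/replica setup is what makes the read data sources usable. A minimal sketch, assuming hostM1 (13306) is the write node as in that config, that binary logging and distinct server_id values are already set on the containers, and with the repl account, its password, and the binlog file/position as placeholders to be replaced with real values:

# on the write node (hostM1): create a replication account and note the current binlog position
mysql -h127.0.0.1 -P13306 -uroot -pguanghua -e "
  CREATE USER 'repl'@'%' IDENTIFIED BY 'repl_password';
  GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';
  SHOW MASTER STATUS;"

# on each read node (hostM2 on 13307, hostS1 on 13308): point it at the write node,
# filling in the container-reachable address of hostM1 (3306 is the container-internal
# port) and the file/position printed by SHOW MASTER STATUS above, then start
# replication and check its status
mysql -h127.0.0.1 -P13308 -uroot -pguanghua -e "
  CHANGE MASTER TO
    MASTER_HOST='hostM1_address',
    MASTER_PORT=3306,
    MASTER_USER='repl',
    MASTER_PASSWORD='repl_password',
    MASTER_LOG_FILE='mysql-bin.000001',
    MASTER_LOG_POS=154;
  START SLAVE;
  SHOW SLAVE STATUS\G"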