Environment:
springboot 2.1.4.RELEASE
java 1.8
mybatis 3.5.5
druid 1.1.24
druid-spring-boot-starter 1.1.24
shardingsphere-jdbc-core 5.4.1
Scenario / problem:
I want to migrate an existing project to support table sharding. After the switch, the Druid pool we configure cannot be used as expected: every time ShardingSphere-JDBC starts, the init method of the JDBCRepository class spins up a Hikari connection pool.
What I have done:
Before the switch, the database connections were already configured in code through the Spring Boot container: a @Component-style configuration registers, for each of our multiple data sources, its own DruidDataSource, SqlSessionFactory, DataSourceTransactionManager, and SqlSessionTemplate beans. I then added a new configuration class that tries to bootstrap the ShardingSphere-JDBC beans on top of these existing data sources, but it does not work:
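For context, the pre-existing per-data-source wiring looks roughly like this (a minimal sketch with illustrative names; the real project uses its own property classes and bean names, and there is one such class per data source):

```java
import javax.sql.DataSource;

import com.alibaba.druid.pool.DruidDataSource;
import org.apache.ibatis.session.SqlSessionFactory;
import org.mybatis.spring.SqlSessionFactoryBean;
import org.mybatis.spring.SqlSessionTemplate;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;

@Configuration
public class OrderDataSourceConfig {

    // One Druid pool per data source (URL/credentials are placeholders).
    @Bean(name = "orderDataSource")
    public DruidDataSource orderDataSource() {
        DruidDataSource ds = new DruidDataSource();
        ds.setUrl("jdbc:mysql://localhost:3306/risk_m");
        ds.setUsername("user");
        ds.setPassword("password");
        return ds;
    }

    @Bean(name = "orderSqlSessionFactory")
    public SqlSessionFactory orderSqlSessionFactory(
            @Qualifier("orderDataSource") DataSource dataSource) throws Exception {
        SqlSessionFactoryBean factoryBean = new SqlSessionFactoryBean();
        factoryBean.setDataSource(dataSource);
        return factoryBean.getObject();
    }

    @Bean(name = "orderTransactionManager")
    public DataSourceTransactionManager orderTransactionManager(
            @Qualifier("orderDataSource") DataSource dataSource) {
        return new DataSourceTransactionManager(dataSource);
    }

    @Bean(name = "orderSqlSessionTemplate")
    public SqlSessionTemplate orderSqlSessionTemplate(
            @Qualifier("orderSqlSessionFactory") SqlSessionFactory sqlSessionFactory) {
        return new SqlSessionTemplate(sqlSessionFactory);
    }
}
```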
@Configuration
public class ShardingConfig {

    @Autowired
    private OAODataSourceProperties oaoDataSourceProperties;

    @Bean
    public CustomizedSnowFlakeKeyGenerator idWorkKeyGenerator() {
        return new CustomizedSnowFlakeKeyGenerator();
    }

    @Bean
    public OrderTimeKeyShardingAlgorithm columnShardingAlgorithm() {
        return new OrderTimeKeyShardingAlgorithm();
    }

    private ShardingTableRuleConfiguration tableRuleConfiguration(CustomizedSnowFlakeKeyGenerator keyGenerator) {
        // "order" is split into risk_m.order_0 and risk_m.order_1, sharded by order_time.
        ShardingTableRuleConfiguration tableRuleConfiguration =
                new ShardingTableRuleConfiguration("order", "risk_m.order_${0..1}");
        tableRuleConfiguration.setTableShardingStrategy(
                new StandardShardingStrategyConfiguration("order_time", OrderTimeKeyShardingAlgorithm.class.getName()));
        tableRuleConfiguration.setKeyGenerateStrategy(
                new KeyGenerateStrategyConfiguration("id", keyGenerator.getType()));
        return tableRuleConfiguration;
    }

    @Bean
    public DataSource shardingDataSource(CustomizedSnowFlakeKeyGenerator keyGenerator) throws SQLException {
        // Enable SQL logging ("sql-show" is the 5.x key; "sql.show" was 4.x).
        Properties properties = new Properties();
        properties.setProperty("sql-show", "true");

        ShardingRuleConfiguration shardingRuleConfiguration = new ShardingRuleConfiguration();
        shardingRuleConfiguration.getTables().add(tableRuleConfiguration(keyGenerator));

        // Reuse the Druid data source that the project already configures.
        Map<String, DataSource> dataSourceMap = new HashMap<>();
        DruidDataSource dataSource = oaoDataSourceProperties.getDataSource();
        dataSourceMap.put(OAODataSourceProperties.dataSourceName, dataSource);

        return ShardingSphereDataSourceFactory.createDataSource(
                dataSourceMap, Arrays.asList(shardingRuleConfiguration), properties);
    }
}
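One thing I am unsure about (an assumption on my part, not verified): the 5.x strategy configuration seems to expect an algorithm name that was registered in ShardingRuleConfiguration.getShardingAlgorithms(), not a class name. A custom class can apparently be wired through the built-in CLASS_BASED algorithm type, roughly like this:

```java
// Sketch of the 5.x-style wiring: register an AlgorithmConfiguration under a
// name, then reference that name in the strategy configuration.
ShardingRuleConfiguration shardingRuleConfiguration = new ShardingRuleConfiguration();

Properties classBasedProps = new Properties();
classBasedProps.setProperty("strategy", "STANDARD");
classBasedProps.setProperty("algorithmClassName", OrderTimeKeyShardingAlgorithm.class.getName());
shardingRuleConfiguration.getShardingAlgorithms().put(
        "order-time-algorithm", new AlgorithmConfiguration("CLASS_BASED", classBasedProps));

ShardingTableRuleConfiguration tableRule =
        new ShardingTableRuleConfiguration("order", "risk_m.order_${0..1}");
tableRule.setTableShardingStrategy(
        new StandardShardingStrategyConfiguration("order_time", "order-time-algorithm"));
shardingRuleConfiguration.getTables().add(tableRule);
```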
Current behavior:
The logs show that the project first starts a Hikari pool and only afterwards the Druid pool:
2024-06-27 09:28:37.752|INFO||||44612|main|com.zaxxer.hikari.HikariDataSource |HikariPool-1 - Starting…
2024-06-27 09:28:38.095|INFO||||44612|main|com.zaxxer.hikari.HikariDataSource |HikariPool-1 - Start completed.
2024-06-27 09:28:39.225|INFO||||44612|main|com.alibaba.druid.pool.DruidDataSource |{dataSource-1} inited
By stepping through the code, I found that the Hikari pool is started from the init method of JDBCRepository, and this method hard-codes the Hikari connection pool.
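If the Hikari pool really does come from JDBCRepository, my guess (unverified) is that it belongs to the standalone-mode metadata persist repository rather than to any of the business data sources. ShardingSphereDataSourceFactory has an overload that takes a ModeConfiguration, so the mode and its persist repository could be passed explicitly instead of relying on defaults; a hypothetical sketch:

```java
// Hypothetical: make the standalone mode and its JDBC persist repository
// explicit. dataSourceMap, shardingRuleConfiguration and properties are the
// same objects built in the shardingDataSource() bean above.
ModeConfiguration modeConfiguration = new ModeConfiguration(
        "Standalone",
        new StandalonePersistRepositoryConfiguration("JDBC", new Properties()));

DataSource shardingDataSource = ShardingSphereDataSourceFactory.createDataSource(
        modeConfiguration, dataSourceMap, Arrays.asList(shardingRuleConfiguration), properties);
```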