How do you configure database/table sharding plus read-write splitting with Sharding-JDBC, and how do you actually use it? I had the same questions, so I tried it out myself.
Environment:
- Spring Boot + MyBatis + MySQL
- ShardingSphere 5.1.0
SQL scripts:
- Create four databases: ds0, ds1, ds2, ds3. Here we assume ds0 is a primary and ds2 is its replica, and likewise ds1 is a primary and ds3 is its replica. The final relationship is:

| Primary | Replica |
|---|---|
| ds0 | ds2 |
| ds1 | ds3 |
- Create a table in each of the four databases:
  - ds0 → t_user0
  - ds1 → t_user1
  - ds2 → t_user0
  - ds3 → t_user1

Reminder: since we are treating ds0/ds2 and ds1/ds3 as primary/replica pairs, the table created in a primary and in its replica is of course identical.

Create the t_user0 table in ds0 and ds2:
CREATE TABLE `t_user0` (
`id` bigint NOT NULL AUTO_INCREMENT,
`name` varchar(50) DEFAULT NULL,
`sharding_key` int DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
Create the t_user1 table in ds1 and ds3:
CREATE TABLE `t_user1` (
`id` bigint NOT NULL AUTO_INCREMENT,
`name` varchar(50) DEFAULT NULL,
`sharding_key` int DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
- Insert data: one row into each table.

t_user0 in ds0: INSERT INTO t_user0 (name, sharding_key) VALUES ('master-a', 2);
t_user1 in ds1: INSERT INTO t_user1 (name, sharding_key) VALUES ('master-b', 1);
t_user0 in ds2: INSERT INTO t_user0 (name, sharding_key) VALUES ('master-a-slave', 2);
t_user1 in ds3: INSERT INTO t_user1 (name, sharding_key) VALUES ('master-b-slave', 1);
Don't wonder why the name values of the inserted rows differ between the primary and the replica: we are only simulating the primary/replica relationship by hand, so this would not happen in a real setup with actual replication. Here it conveniently lets us see which instance a query really hit.
With that, the preparation work is done!
Spring Boot application.yml configuration:
## MyBatis configuration
mybatis:
  mapper-locations: classpath*:mapper/**/*Mapper.xml
  type-aliases-package: com.shardingsphere

spring:
  shardingsphere:
    props:
      sql-show: true
    datasource:
      master-a:
        username: root
        password: 123456
        jdbc-url: jdbc:mysql://127.0.0.1:3306/ds0
        type: com.zaxxer.hikari.HikariDataSource
        driver-class-name: com.mysql.cj.jdbc.Driver
      master-b:
        username: root
        password: 123456
        jdbc-url: jdbc:mysql://127.0.0.1:3306/ds1
        type: com.zaxxer.hikari.HikariDataSource
        driver-class-name: com.mysql.cj.jdbc.Driver
      master-a-slave:
        username: root
        password: 123456
        jdbc-url: jdbc:mysql://127.0.0.1:3306/ds2
        type: com.zaxxer.hikari.HikariDataSource
        driver-class-name: com.mysql.cj.jdbc.Driver
      master-b-slave:
        username: root
        password: 123456
        jdbc-url: jdbc:mysql://127.0.0.1:3306/ds3
        type: com.zaxxer.hikari.HikariDataSource
        driver-class-name: com.mysql.cj.jdbc.Driver
      names: master-a,master-b,master-a-slave,master-b-slave
    rules:
      sharding:
        sharding-algorithms:
          # table sharding algorithm
          table-inline:
            type: INLINE
            props:
              algorithm-expression: t_user$->{sharding_key % 2}
          # database sharding algorithm
          database-inline:
            type: INLINE
            props:
              algorithm-expression: ds$->{sharding_key % 2}
        tables:
          t_user:
            actual-data-nodes: ds$->{0..1}.t_user$->{0..1}
            key-generate-strategy:
              column: id
              key-generator-name: snowflake
            # table sharding strategy
            table-strategy:
              standard:
                sharding-column: sharding_key
                sharding-algorithm-name: table-inline
            # database sharding strategy
            database-strategy:
              standard:
                sharding-column: sharding_key
                sharding-algorithm-name: database-inline
        key-generators:
          snowflake:
            type: SNOWFLAKE
            props:
              worker-id: 123
      readwrite-splitting:
        data-sources:
          ds0:
            type: Static
            props:
              write-data-source-name: master-a
              read-data-source-names: master-a-slave
            load-balancer-name: random
          ds1:
            type: Static
            props:
              write-data-source-name: master-b
              read-data-source-names: master-b-slave
            load-balancer-name: random
        load-balancers:
          random:
            type: RANDOM
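To make the routing concrete: both INLINE expressions hash on sharding_key % 2, so a row always lands in the database and the table that share the same suffix, and the readwrite-splitting rule then decides which physical instance of that database serves the statement. The sketch below is not ShardingSphere API code, just a plain-Java illustration of how the two expressions above resolve a sharding_key to a physical data node.

```java
// Plain-Java illustration of the two INLINE expressions above
// (ds$->{sharding_key % 2} and t_user$->{sharding_key % 2});
// not ShardingSphere code, just the same modulo arithmetic.
public class RoutingSketch {

    static String route(int shardingKey) {
        int suffix = shardingKey % 2;              // same modulo for database and table
        return "ds" + suffix + ".t_user" + suffix; // resolved physical data node
    }

    public static void main(String[] args) {
        System.out.println(route(1)); // ds1.t_user1 -> reads go to master-b-slave
        System.out.println(route(2)); // ds0.t_user0 -> writes go to master-a
    }
}
```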
MyBatis mapper XML:
<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN" "http://mybatis.org/dtd/mybatis-3-mapper.dtd" >
<mapper namespace="com.shardingsphere.readwrite.mapper.UserMapper">
    <resultMap id="userMap" type="com.shardingsphere.readwrite.entity.User">
        <id column="id" jdbcType="BIGINT" property="id"/>
        <result column="name" jdbcType="VARCHAR" property="name"/>
        <result column="sharding_key" jdbcType="INTEGER" property="shardingKey"/>
    </resultMap>

    <select id="select" parameterType="com.shardingsphere.readwrite.entity.User" resultMap="userMap">
        select id, name, sharding_key from t_user where sharding_key = #{shardingKey} order by id desc limit 1
    </select>

    <insert id="insert" parameterType="com.shardingsphere.readwrite.entity.User" keyProperty="id" keyColumn="id" useGeneratedKeys="true">
        insert into t_user(name, sharding_key) values (#{name}, #{shardingKey})
    </insert>
</mapper>
UserMapper:
import org.apache.ibatis.annotations.Mapper;
import org.springframework.stereotype.Repository;

@Mapper
@Repository
public interface UserMapper {

    User select(User user);

    long insert(User user);
}
User entity:
public class User {

    private Long id;
    private String name;
    private Integer shardingKey;

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public Integer getShardingKey() {
        return shardingKey;
    }

    public void setShardingKey(Integer shardingKey) {
        this.shardingKey = shardingKey;
    }
}
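The two test methods below live in an ordinary Spring Boot test class with the mapper injected. A minimal sketch of that surrounding class, assuming Spring Boot 2.x with JUnit 5 (the class name and package layout are my own assumption, not from the original project):

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;

// Hypothetical test class skeleton; only readTest() and writeTest()
// shown further below come from the original article.
@SpringBootTest
public class UserMapperTest {

    @Autowired
    private UserMapper userMapper;

    // readTest() and writeTest() go here
}
```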
Read test, to verify that the configuration takes effect:
@Test
public void readTest() {
    User selectUser = new User();
    selectUser.setShardingKey(1);

    User user = userMapper.select(selectUser);

    System.out.println("name is :" + user.getName() + " , " + "sharding_key is :" + user.getShardingKey());
}
Expected result: our configuration shards on the sharding_key column. In this example sharding_key is 1, and 1 % 2 = 1, so Sharding-JDBC routes the query to the ds1 cluster; and because this is a read, it is executed against the configured read-data-source-names, i.e. master-b-slave. If the query produces output like the following, the configuration works:
ShardingSphere-SQL : Actual SQL: master-b-slave ::: select id, name, sharding_key from t_user1 where sharding_key = ? order by id desc limit 1 ::: [1]
name is :master-b-slave , sharding_key is :1
Write test, to verify that the configuration takes effect:
@Test
public void writeTest() {
    User user = new User();
    user.setName("master-a");
    user.setShardingKey(2);

    userMapper.insert(user);
}
Expected result: here sharding_key is 2, and 2 % 2 = 0, so the insert is routed to the ds0 cluster and, being a write, executed against the write data source master-a:
ShardingSphere-SQL : Actual SQL: master-a ::: insert into t_user0(name, sharding_key, id) values (?, ?, ?) ::: [master-a, 2, 680740116461236224]
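Because we only simulated replication by hand, the newly inserted row exists on master-a (ds0) but not on master-a-slave (ds2). If you want to double-check that the write really bypassed the read data source, one option (my own addition, not part of the article's setup) is to count the rows in both MySQL schemas directly with plain JDBC, bypassing ShardingSphere:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Hypothetical verification helper: counts rows in ds0 (master-a) and
// ds2 (master-a-slave) directly, without going through ShardingSphere.
public class WriteVerification {

    static long countUsers(String db) throws Exception {
        String url = "jdbc:mysql://127.0.0.1:3306/" + db;
        try (Connection conn = DriverManager.getConnection(url, "root", "123456");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM t_user0")) {
            rs.next();
            return rs.getLong(1);
        }
    }

    public static void main(String[] args) throws Exception {
        // After writeTest(), ds0 should have one row more than ds2,
        // since we never set up real MySQL replication.
        System.out.println("ds0 rows: " + countUsers("ds0"));
        System.out.println("ds2 rows: " + countUsers("ds2"));
    }
}
```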