6. Redis Operations + jOOQ Persistence
Following on from the previous chapter, this one looks at how the project reads from and writes to Redis, and how the data is ultimately transferred into MySQL through jOOQ.
Using Redis in a Vert.x project is fairly straightforward: a single RedisUtil class is enough to configure the client, as shown below:
public class RedisUtil {
// Maximum pool size
private static final int MAX_POOL_SIZE = YamlUtil.getIntegerValue("redis.max-pool-size");
// Maximum number of connections waiting in the pool
private static final int MAX_POOL_WAITING = YamlUtil.getIntegerValue("redis.max-pool-waiting");
// Pool recycle timeout
private static final int POOL_RECYCLE_TIMEOUT = YamlUtil.getIntegerValue("redis.pool-recycle-timeout");
// Maximum number of waiting handlers
private static final int MAX_WAITING_HANDLERS = YamlUtil.getIntegerValue("redis.max-waiting-handlers");
// Connection string
private static final String CONNECTION_STRING = YamlUtil.getStringValue("redis.connection-string");
private RedisUtil() {}
private static class SingletonInstance {
private static final RedisUtil INSTANCE = new RedisUtil();
}
public static RedisUtil getInstance() {
return SingletonInstance.INSTANCE;
}
/**
 *
 * @MethodName: getConfiguration
 * @Description: Redis configuration
 * @author yuanzhenhui
 * @return RedisOptions
 * @date 2023-04-13 04:32:48
 */
public RedisOptions getConfiguration() {
RedisOptions options = new RedisOptions();
options.setMaxPoolSize(MAX_POOL_SIZE);
options.setMaxPoolWaiting(MAX_POOL_WAITING);
options.setConnectionString(CONNECTION_STRING);
options.setPoolRecycleTimeout(POOL_RECYCLE_TIMEOUT);
options.setMaxWaitingHandlers(MAX_WAITING_HANDLERS);
return options;
}
}
The RedisUtil class does nothing but assemble configuration; actual usage looks like this:
public class SysUserBuriedRouter extends AbstractVerticle implements RouterSet {
private static final Logger LOGGER = LogManager.getLogger(SysUserBuriedRouter.class);
// HH gives a 24-hour clock; beware that SimpleDateFormat is not thread-safe when shared as a static like this
private static final SimpleDateFormat SIMPLE_DATE_FORMAT = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
@PropLoader(key = "server.context")
private static String context;
private static RedisAPI redis;
@Override
public void start() {
YamlUtil.propLoadSetter(this);
// --------------------------------
// Create the Redis client connection
// --------------------------------
Redis.createClient(vertx, RedisUtil.getInstance().getConfiguration()).connect(onConnect -> {
if (onConnect.succeeded()) {
redis = RedisAPI.api(onConnect.result());
}
});
}
/**
 *
 * @MethodName: sendBuriedPointInfo
 * @Description: RESTful handler
 * @author yuanzhenhui
 * @param ctx
 * void
 * @date 2023-04-13 05:02:52
 */
public void sendBuriedPointInfo(RoutingContext ctx) {
String jsonStr = ctx.getBodyAsString();
if (!StringUtil.isNullOrEmpty(jsonStr)) {
SysUserBuried puav = ReflectUtil.convertJson2Pojo(jsonStr, SysUserBuried.class);
String uuid = UUID.randomUUID().toString();
puav.setId(uuid);
puav.setIp(IPUtil.getIpAddr(ctx));
puav.setAccessDate(SIMPLE_DATE_FORMAT.format(new Date()));
redis.setnx(uuid, puav.toJson().encode(), setnxResp -> {
if (setnxResp.failed()) {
LOGGER.error("func[SysUserBuriedRouter.sendBuriedPointInfo] Exception [{} - {}]",
new Object[] {setnxResp.cause(), setnxResp.result()});
}
});
}
HttpServerResponse hsr =
ctx.response().putHeader(CommonConstants.HTTP_CONTENT_TYPE, CommonConstants.HTTP_APPLICATION_JSON);
hsr.end(Json.encode(new RespMsg(1, "Message received")));
}
/**
 *
 * @MethodName: router4Restful
 * @Description: Implements route registration
 * @author yuanzhenhui
 * @param router
 * @see io.kida.components.routers.RouterSet#router4Restful(io.vertx.ext.web.Router)
 * @date 2023-04-13 05:03:12
 */
@Override
public void router4Restful(Router router) {
router.post(CommonConstants.HTTP_SLASH + context + CommonConstants.HTTP_SLASH + "sendBuriedPointInfo")
.handler(this::sendBuriedPointInfo);
}
}
Taking SysUserBuriedRouter as the example again: since Vert.x ships with a Redis API component, the Redis client can be initialized up front in the start method, as shown below:
@Override
public void start() {
YamlUtil.propLoadSetter(this);
// --------------------------------
// Create the Redis client connection
// --------------------------------
Redis.createClient(vertx, RedisUtil.getInstance().getConfiguration()).connect(onConnect -> {
if (onConnect.succeeded()) {
redis = RedisAPI.api(onConnect.result());
}
});
}
Redis.createClient is called with the vertx instance as its first argument and the Redis options obtained through the singleton as its second; connect is then invoked and its onConnect callback captured. When onConnect reports succeeded, its result is handed to RedisAPI.api to finish creating the Redis client.
Because the Redis client provided by Vert.x is fully asynchronous, every Redis call involves a callback, as shown below:
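One caveat worth flagging: the assignment to the static redis field happens inside an asynchronous callback, so a request that arrives before the connection completes would still see redis as null. A minimal stdlib sketch of one way to make the ordering explicit (AsyncClientHolder is a hypothetical name, not part of the project):

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Consumer;

// Hypothetical sketch: wrap the asynchronously-created client in a
// CompletableFuture so callers queue their work until the connection exists,
// instead of reading a static field that may still be null.
class AsyncClientHolder<T> {
    private final CompletableFuture<T> clientFuture = new CompletableFuture<>();

    // Invoked from the connect callback once the client is ready
    void complete(T client) {
        clientFuture.complete(client);
    }

    // Runs the action immediately if the client exists, otherwise once it does
    void withClient(Consumer<T> action) {
        clientFuture.thenAccept(action);
    }
}
```

In the demo the window is short and the consuming timer fires much later, so the plain field works in practice; the holder simply removes the race on principle.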
public void sendBuriedPointInfo(RoutingContext ctx) {
String jsonStr = ctx.getBodyAsString();
if (!StringUtil.isNullOrEmpty(jsonStr)) {
SysUserBuried puav = ReflectUtil.convertJson2Pojo(jsonStr, SysUserBuried.class);
String uuid = UUID.randomUUID().toString();
puav.setId(uuid);
puav.setIp(IPUtil.getIpAddr(ctx));
puav.setAccessDate(SIMPLE_DATE_FORMAT.format(new Date()));
redis.setnx(uuid, puav.toJson().encode(), setnxResp -> {
if (setnxResp.failed()) {
LOGGER.error("func[SysUserBuriedRouter.sendBuriedPointInfo] Exception [{} - {}]",
new Object[] {setnxResp.cause(), setnxResp.result()});
}
});
}
HttpServerResponse hsr =
ctx.response().putHeader(CommonConstants.HTTP_CONTENT_TYPE, CommonConstants.HTTP_APPLICATION_JSON);
hsr.end(Json.encode(new RespMsg(1, "Message received")));
}
As you can see, calling the Redis client is almost identical to issuing the same commands from the CLI, so there is no need to go into detail here.
Since this is only a demo, the handling is deliberately simple. sendBuriedPointInfo first converts the incoming JSON string into an object using the home-grown ReflectUtil.convertJson2Pojo helper (fastjson or any other mapper would do just as well). The object is then enriched with an id, the client IP, and an access timestamp, serialized back to JSON, and stored in Redis.
That covers writing data into Redis. How, then, is the data pulled back out of Redis and transferred into MySQL through jOOQ? Before getting to the business code, let's look at the jOOQ configuration.
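A side note on the timestamp: SimpleDateFormat is mutable and not thread-safe, so a shared static instance can produce corrupted output under concurrent handlers. A sketch of the java.time alternative (the AccessDateStamp class name is illustrative):

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

// DateTimeFormatter is immutable and thread-safe, unlike SimpleDateFormat,
// so a single shared instance is safe; HH gives a 24-hour clock.
class AccessDateStamp {
    static final DateTimeFormatter FMT = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");

    static String format(LocalDateTime t) {
        return FMT.format(t);
    }
}
```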
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>io.kida</groupId>
<artifactId>buried-vtx-demo</artifactId>
<version>1.0.0</version>
<name>buried-vtx-demo</name>
<description>Vert.x data-tracking demo</description>
<properties>
...
<jooq.version>3.13.2</jooq.version>
<jooq.rx.version>4.2.0</jooq.rx.version>
<yaml.version>1.1.3</yaml.version>
<hikari.version>3.4.5</hikari.version>
...
</properties>
<dependencies>
...
<!-- HikariCP database connection pool -->
<dependency>
<groupId>com.zaxxer</groupId>
<artifactId>HikariCP</artifactId>
<version>${hikari.version}</version>
</dependency>
<!-- jOOQ dependencies -->
<dependency>
<groupId>org.jooq</groupId>
<artifactId>jooq</artifactId>
<version>${jooq.version}</version>
</dependency>
<dependency>
<groupId>io.github.jklingsporn</groupId>
<artifactId>vertx-jooq-completablefuture-jdbc</artifactId>
<version>${jooq.rx.version}</version>
</dependency>
</dependencies>
<build>
...
<!-- Maven plugin that lets the pom read configuration from YAML files -->
<plugin>
<groupId>it.ozimov</groupId>
<artifactId>yaml-properties-maven-plugin</artifactId>
<version>${yaml.version}</version>
<executions>
<execution>
<phase>initialize</phase>
<goals>
<goal>read-project-properties</goal>
</goals>
<configuration>
<files>
<!-- Path to the YAML configuration file -->
<file>
src/main/resources/configs/master/application-datasource.yml</file>
</files>
</configuration>
</execution>
</executions>
</plugin>
<!-- jOOQ codegen Maven plugin -->
<plugin>
<groupId>org.jooq</groupId>
<artifactId>jooq-codegen-maven</artifactId>
<version>${jooq.version}</version>
<executions>
<execution>
<goals>
<!-- Run the generate goal -->
<goal>generate</goal>
</goals>
</execution>
</executions>
<!-- Dependencies for code generation -->
<dependencies>
<dependency>
<groupId>io.github.jklingsporn</groupId>
<artifactId>vertx-jooq-generate</artifactId>
<version>${jooq.rx.version}</version>
</dependency>
</dependencies>
<!-- Reverse-engineering configuration -->
<configuration>
<jdbc>
<driver>${datasource.driver-class-name}</driver>
<url>
${datasource.url-head}//${datasource.host}:${datasource.port}/${datasource.database-name}?useUnicode=${datasource.use-unicode}%26characterEncoding=${datasource.character-encoding}%26useSSL=${datasource.ssl-enable}%26serverTimezone=${datasource.server-timezone}</url>
<user>${datasource.username}</user>
<password>${datasource.password}</password>
</jdbc>
<generator>
<name>
io.github.jklingsporn.vertx.jooq.generate.completablefuture.CompletableFutureJDBCVertxGenerator
</name>
<database>
<name>${datasource.jooq-name}</name>
<includes>.*</includes>
<excludes>flyway_schema_history</excludes>
<inputSchema>${datasource.database-name}</inputSchema>
</database>
<generate>
<pojos>true</pojos>
<javaTimeTypes>true</javaTimeTypes>
<daos>true</daos>
<fluentSetters>true</fluentSetters>
</generate>
<target>
<packageName>${datasource.package-path}</packageName>
<directory>src/main/java</directory>
</target>
<strategy>
<name>
io.github.jklingsporn.vertx.jooq.generate.VertxGeneratorStrategy
</name>
</strategy>
</generator>
</configuration>
</plugin>
...
</plugins>
</build>
</project>
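A small detail in the codegen `<url>` above: %26 is the percent-encoded form of &, which avoids having to escape a literal ampersand as &amp; inside the XML. A quick stdlib check (JdbcUrlCheck is just an illustrative wrapper):

```java
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;

// %26 in the codegen <url> is the percent-encoded form of '&'; decoding the
// query fragment shows the ampersands the JDBC URL actually carries.
class JdbcUrlCheck {
    static String decode(String s) {
        return URLDecoder.decode(s, StandardCharsets.UTF_8);
    }
}
```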
As the pom.xml above shows, jOOQ talks to MySQL through the HikariCP connection pool, and jooq-codegen-maven reverse-engineers the database to generate the entity classes. Compiling with mvn clean package produces output like this:
[INFO] Scanning for projects...
[INFO]
...
[INFO] ----------------------------------------------------------
[INFO] Thank you for using jOOQ and jOOQ's code generator
[INFO]
[INFO] Database parameters
[INFO] ----------------------------------------------------------
[INFO] dialect : MYSQL
[INFO] URL : jdbc:mysql://127.0.0.1:3506/tools?useUnicode=true%26characterEncoding=utf8%26useSSL=false%26serverTimezone=UTC
[INFO] target dir : /Users/yuanzhenhui/Documents/code_space/github/buried-vtx-demo/src/main/java
[INFO] target package : io.kida.model
[INFO] includes : [.*]
[INFO] excludes : [flyway_schema_history]
[INFO] includeExcludeColumns : false
[INFO] ----------------------------------------------------------
[INFO]
[INFO] JavaGenerator parameters
[INFO] ----------------------------------------------------------
[INFO] annotations (generated): false
[INFO] annotations (JPA: any) : false
[INFO] annotations (JPA: version):
[INFO] annotations (validation): false
[INFO] comments : true
[INFO] comments on attributes : true
[INFO] comments on catalogs : true
[INFO] comments on columns : true
[INFO] comments on keys : true
[INFO] comments on links : true
[INFO] comments on packages : true
[INFO] comments on parameters : true
[INFO] comments on queues : true
[INFO] comments on routines : true
[INFO] comments on schemas : true
[INFO] comments on sequences : true
[INFO] comments on tables : true
[INFO] comments on udts : true
[INFO] sources : true
[INFO] sources on views : true
[INFO] daos : true
[INFO] deprecated code : true
[INFO] global references (any): true
[INFO] global references (catalogs): true
[INFO] global references (keys): true
[INFO] global references (links): true
[INFO] global references (queues): true
[INFO] global references (routines): true
[INFO] global references (schemas): true
[INFO] global references (sequences): true
[INFO] global references (tables): true
[INFO] global references (udts): true
[INFO] indexes : true
[INFO] instance fields : true
[INFO] interfaces : false
[INFO] interfaces (immutable) : false
[INFO] javadoc : true
[INFO] keys : true
[INFO] links : true
[INFO] pojos : true
[INFO] pojos (immutable) : false
[INFO] queues : true
[INFO] records : true
[INFO] routines : true
[INFO] sequences : true
[INFO] sequenceFlags : true
[INFO] table-valued functions : true
[INFO] tables : true
[INFO] udts : true
[INFO] relations : true
[INFO] ----------------------------------------------------------
[INFO]
[INFO] Generation remarks
[INFO] ----------------------------------------------------------
[INFO]
[INFO] ----------------------------------------------------------
[INFO] Generating catalogs : Total: 1
[INFO]
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@ @@ @@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@ @@@@@@@@@@
@@@@@@@@@@@@@@@@ @@ @@ @@@@@@@@@@
@@@@@@@@@@ @@@@ @@ @@ @@@@@@@@@@
@@@@@@@@@@ @@ @@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@ @@ @@@@@@@@@@
@@@@@@@@@@ @@ @@ @@@@ @@@@@@@@@@
@@@@@@@@@@ @@ @@ @@@@ @@@@@@@@@@
@@@@@@@@@@ @@ @ @ @@@@@@@@@@
@@@@@@@@@@ @@ @@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@ @@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ Thank you for using jOOQ 3.13.2
[INFO] ARRAYs fetched : 0 (0 included, 0 excluded)
[INFO] Enums fetched : 0 (0 included, 0 excluded)
[INFO] Packages fetched : 0 (0 included, 0 excluded)
[INFO] Routines fetched : 0 (0 included, 0 excluded)
[INFO] Sequences fetched : 0 (0 included, 0 excluded)
[INFO] Tables fetched : 15 (15 included, 0 excluded)
[INFO] No schema version is applied for catalog . Regenerating.
[INFO]
[INFO] Generating catalog : DefaultCatalog.java
[INFO] ==========================================================
[INFO] Generating schemata : Total: 1
[INFO] No schema version is applied for schema tools. Regenerating.
[INFO] Generating schema : Tools.java
[INFO] ----------------------------------------------------------
[INFO] UDTs fetched : 0 (0 included, 0 excluded)
[INFO] Generating tables
[INFO] Synthetic primary keys : 0 (0 included, 0 excluded)
[INFO] Overriding primary keys : 19 (0 included, 19 excluded)
[INFO] Generating table : CmmItembank.java [input=cmm_itembank, output=cmm_itembank, pk=KEY_cmm_itembank_PRIMARY]
[INFO] Embeddables fetched : 0 (0 included, 0 excluded)
[INFO] Indexes fetched : 16 (16 included, 0 excluded)
...
[INFO] Generating table : SysUserBuried.java [input=sys_user_buried, output=sys_user_buried, pk=KEY_sys_user_buried_PRIMARY]
...
[INFO] Tables generated : Total: 10.023s
[INFO] Generating table POJOs
...
[INFO] Generating POJO : SysUserBuried.java
...
[INFO] Table POJOs generated : Total: 10.169s, +146.329ms
[INFO] Generating DAOs
...
[INFO] Generating DAO : SysUserBuriedDao.java
...
[INFO] Table DAOs generated : Total: 10.215s, +45.81ms
[INFO] Generating table references
[INFO] Table refs generated : Total: 10.22s, +4.468ms
[INFO] Generating Keys
[INFO] Keys generated : Total: 10.231s, +11.205ms
[INFO] Generating Indexes
[INFO] Indexes generated : Total: 10.235s, +4.645ms
[INFO] Generating table records
...
[INFO] Generating record : SysUserBuriedRecord.java
...
[INFO] Table records generated : Total: 10.356s, +120.296ms
[INFO] Domains fetched : 0 (0 included, 0 excluded)
[INFO] Generation finished: tools: Total: 10.36s, +4.609ms
[INFO]
[INFO] Removing excess files
...
After the Maven build, the auto-generated files are laid out like this:
.
|-- DefaultCatalog.java
|-- Indexes.java
|-- Keys.java
|-- Tables.java
|-- Tools.java
`-- tables
...
|-- SysUserBuried.java
|-- daos
...
| `-- SysUserBuriedDao.java
|-- pojos
...
| `-- SysUserBuried.java
`-- records
...
`-- SysUserBuriedRecord.java
Next we configure the HikariCP data source and plug it into the jOOQ configuration, as shown below:
public class JOOQUtil {
private static final String HOST = YamlUtil.getStringValue("datasource.host");
private static final String DATABASE = YamlUtil.getStringValue("datasource.database-name");
private static final String USE_UNICODE = YamlUtil.getStringValue("datasource.use-unicode");
private static final String CHARACTER_ENCODING = YamlUtil.getStringValue("datasource.character-encoding");
private static final String SERVER_TIMEZONE = YamlUtil.getStringValue("datasource.server-timezone");
private static final String URL_HEAD = YamlUtil.getStringValue("datasource.url-head");
private static final String DRIVER_CLASS_NAME = YamlUtil.getStringValue("datasource.driver-class-name");
private static final String PASSWORD = YamlUtil.getStringValue("datasource.password");
private static final String USERNAME = YamlUtil.getStringValue("datasource.username");
private static final String CONNECTION_TEST_QUERY =
YamlUtil.getStringValue("datasource.hikari.connection-test-query");
private static final String POOL_NAME = YamlUtil.getStringValue("datasource.hikari.pool-name");
private static final int MINIMUM_IDLE = YamlUtil.getIntegerValue("datasource.hikari.minimum-idle");
private static final int PORT = YamlUtil.getIntegerValue("datasource.port");
private static final int MAXIMUM_POOL_SIZE = YamlUtil.getIntegerValue("datasource.hikari.maximum-pool-size");
private static final int IDLE_TIMEOUT = YamlUtil.getIntegerValue("datasource.hikari.idle-timeout");
private static final int CONNECTION_TIMEOUT = YamlUtil.getIntegerValue("datasource.hikari.connection-timeout");
private static final int PREP_STMT_CACHE_SIZE = YamlUtil.getIntegerValue("datasource.hikari.prep-stmt-cache-size");
private static final int PREP_STMT_CACHE_SQL_LIMIT =
YamlUtil.getIntegerValue("datasource.hikari.prep-stmt-cache-sql-limit");
private static final boolean SSL = YamlUtil.getBooleanValue("datasource.ssl-enable");
private static final boolean IS_AUTO_COMMIT = YamlUtil.getBooleanValue("datasource.hikari.is-auto-commit");
private static final boolean ALLOW_POOL_SUSPENSION =
YamlUtil.getBooleanValue("datasource.hikari.allow-pool-suspension");
private static final boolean CACHE_PREP_STMTS = YamlUtil.getBooleanValue("datasource.hikari.cache-prep-stmts");
private static final boolean USE_SERVER_PREP_STMTS =
YamlUtil.getBooleanValue("datasource.hikari.use-server-prep-stmts");
private static final boolean USE_LOCAL_SESSION_STATE =
YamlUtil.getBooleanValue("datasource.hikari.use-local-session-state");
private static final boolean REWRITE_BATCHED_STATEMENTS =
YamlUtil.getBooleanValue("datasource.hikari.rewrite-batched-statements");
private static final boolean CACHE_RESULT_SET_METADATA =
YamlUtil.getBooleanValue("datasource.hikari.cache-result-set-metadata");
private static final boolean CACHE_SERVER_CONFIGURATION =
YamlUtil.getBooleanValue("datasource.hikari.cache-server-configuration");
private static final boolean ELIDE_SET_AUTO_COMMITS =
YamlUtil.getBooleanValue("datasource.hikari.elide-set-auto-commits");
private static final boolean MAINTAIN_TIME_STATS =
YamlUtil.getBooleanValue("datasource.hikari.maintain-time-stats");
private static final boolean ALLOW_PUBLIC_KEY_RETRIEVAL =
YamlUtil.getBooleanValue("datasource.hikari.allow-public-key-retrieval");
private static final String URL =
URL_HEAD + "//" + HOST + ":" + PORT + CommonConstants.HTTP_SLASH + DATABASE + "?useUnicode=" + USE_UNICODE
+ "&characterEncoding=" + CHARACTER_ENCODING + "&useSSL=" + SSL + "&serverTimezone=" + SERVER_TIMEZONE;
private JOOQUtil() {}
/**
 *
 * @MethodName: getConfiguration
 * @Description: Returns the configuration instance with the target database dialect set for jOOQ
 * @author yuanzhenhui
 * @return Configuration
 * @date 2023-04-13 04:31:36
 */
public Configuration getConfiguration() {
Configuration configuration = new DefaultConfiguration();
configuration.set(SQLDialect.MYSQL);
configuration.set(getHikariCPDataProvider());
return configuration;
}
private static class SingletonInstance {
private static final JOOQUtil INSTANCE = new JOOQUtil();
private static final ConnectionProvider provider = new DataSourceConnectionProvider(getHikariCPDataSource());
}
public static JOOQUtil getInstance() {
return SingletonInstance.INSTANCE;
}
public static final ConnectionProvider getHikariCPDataProvider() {
return SingletonInstance.provider;
}
/**
 *
 * @MethodName: getHikariCPDataSource
 * @Description: Connection pool configuration
 * @author yuanzhenhui
 * @return DataSource
 * @date 2023-04-13 04:31:48
 */
private static DataSource getHikariCPDataSource() {
HikariConfig hikariConfig = new HikariConfig();
hikariConfig.setJdbcUrl(URL);
hikariConfig.setDriverClassName(DRIVER_CLASS_NAME);
hikariConfig.setUsername(USERNAME);
hikariConfig.setPassword(PASSWORD);
hikariConfig.setAutoCommit(IS_AUTO_COMMIT);
hikariConfig.setAllowPoolSuspension(ALLOW_POOL_SUSPENSION);
hikariConfig.setConnectionTestQuery(CONNECTION_TEST_QUERY);
hikariConfig.setPoolName(POOL_NAME);
hikariConfig.setMinimumIdle(MINIMUM_IDLE);
hikariConfig.setMaximumPoolSize(MAXIMUM_POOL_SIZE);
hikariConfig.setIdleTimeout(IDLE_TIMEOUT);
hikariConfig.setConnectionTimeout(CONNECTION_TIMEOUT);
hikariConfig.addDataSourceProperty("cachePrepStmts", CACHE_PREP_STMTS);
hikariConfig.addDataSourceProperty("prepStmtCacheSize", PREP_STMT_CACHE_SIZE);
hikariConfig.addDataSourceProperty("prepStmtCacheSqlLimit", PREP_STMT_CACHE_SQL_LIMIT);
hikariConfig.addDataSourceProperty("useServerPrepStmts", USE_SERVER_PREP_STMTS);
hikariConfig.addDataSourceProperty("useLocalSessionState", USE_LOCAL_SESSION_STATE);
hikariConfig.addDataSourceProperty("useSsl", SSL);
hikariConfig.addDataSourceProperty("serverTimezone", SERVER_TIMEZONE);
hikariConfig.addDataSourceProperty("rewriteBatchedStatements", REWRITE_BATCHED_STATEMENTS);
hikariConfig.addDataSourceProperty("cacheResultSetMetadata", CACHE_RESULT_SET_METADATA);
hikariConfig.addDataSourceProperty("cacheServerConfiguration", CACHE_SERVER_CONFIGURATION);
hikariConfig.addDataSourceProperty("elideSetAutoCommits", ELIDE_SET_AUTO_COMMITS);
hikariConfig.addDataSourceProperty("maintainTimeStats", MAINTAIN_TIME_STATS);
hikariConfig.addDataSourceProperty("allowPublicKeyRetrieval", ALLOW_PUBLIC_KEY_RETRIEVAL);
return new HikariDataSource(hikariConfig);
}
}
At this point the jOOQ configuration is complete. How is it used? Much the same way as Redis, which the following example illustrates:
public class SysUserBuriedService extends AbstractVerticle {
private static final Logger LOGGER = LogManager.getLogger(SysUserBuriedService.class);
// Number of rows processed per batch
@PropLoader(key = "redis.batch-num")
private static int batchNum;
private static SysUserBuriedDao subDao;
private static RedisAPI redis;
private List<String> paramList;
private String blockNum = "0";
@Override
public void start() {
YamlUtil.propLoadSetter(this);
subDao = new SysUserBuriedDao(JOOQUtil.getInstance().getConfiguration(), vertx);
// --------------------------------
// Create the Redis client connection
// --------------------------------
Redis.createClient(vertx, RedisUtil.getInstance().getConfiguration()).connect(onConnect -> {
if (onConnect.succeeded()) {
redis = RedisAPI.api(onConnect.result());
}
});
}
/**
 *
 * @MethodName: cron2SaveRedisAccess
 * @Description: Periodically persists the Redis data
 * @author yuanzhenhui void
 * @date 2023-04-13 05:04:00
 */
public void cron2SaveRedisAccess() {
LOGGER.debug("func SysUserBuriedService.cron2SaveRedisAccess has begin!! ");
paramList = new ArrayList<>();
paramList.add(blockNum);
paramList.add("COUNT");
paramList.add(String.valueOf(batchNum));
redis.scan(paramList, this::redisScanToGet);
}
/**
 *
 * @MethodName: redisScanToGet
 * @Description: Scans Redis and fetches data
 * @author yuanzhenhui
 * @param scanResp
 * void
 * @date 2023-04-13 05:04:16
 */
private void redisScanToGet(AsyncResult<Response> scanResp) {
if (scanResp.succeeded()) {
// ------------------------
// Grab the returned cursor
// ------------------------
blockNum = scanResp.result().get(0).toString();
// ----------------
// Grab the key set
// ----------------
Response keyArrResp = scanResp.result().get(1);
if (null != keyArrResp) {
paramList = keyArrResp.stream().map(Response::toString).collect(Collectors.toList());
// -------------------------------------
// Fetch multiple keys at once with MGET
// -------------------------------------
redis.mget(paramList, this::databaseToInsert);
}
} else {
LOGGER.error("func[SysUserBuriedService.redisScanToGet] Exception [{} - {}]",
new Object[] {scanResp.cause(), scanResp.result()});
}
}
/**
 *
 * @MethodName: databaseToInsert
 * @Description: Inserts the data into the database
 * @author yuanzhenhui
 * @param mgetResp
 * void
 * @date 2022-08-10 06:21:09
 */
private void databaseToInsert(AsyncResult<Response> mgetResp) {
// ------------------------------
// After the async fetch succeeds
// ------------------------------
if (mgetResp.succeeded()) {
Response mgetArrResp = mgetResp.result();
if (null != mgetArrResp) {
List<SysUserBuried> puaList = mgetArrResp.stream()
.map(mget -> new SysUserBuried(new JsonObject(mget.toString()))).collect(Collectors.toList());
// -----------------------------------------------
// Batch-insert the fetched list into the database
// -----------------------------------------------
if (null != puaList && !puaList.isEmpty()) {
subDao.insert(puaList, true).whenCompleteAsync((opt, e) -> {
if (null == e) {
// --------------------------------------------------
// Use UNLINK to remove the processed data from Redis
// --------------------------------------------------
redis.unlink(paramList, this::redisUnlinkDelete);
} else {
LOGGER.error(
"func[SysUserBuriedService.databaseToInsert mysql inserted] Exception [{} - {}] stackTrace[{}] ",
new Object[] {e.getCause(), e.getMessage(), Arrays.deepToString(e.getStackTrace())});
}
});
}
}
}
}
/**
 *
 * @MethodName: redisUnlinkDelete
 * @Description: Deletes keys from Redis via UNLINK
 * @author yuanzhenhui
 * @param unlinkResp
 * void
 * @date 2022-08-10 06:19:07
 */
private void redisUnlinkDelete(AsyncResult<Response> unlinkResp) {
if (unlinkResp.failed()) {
LOGGER.error("func[SysUserBuriedService.redisUnlinkDelete redis unlink key] Exception [{} - {}]",
new Object[] {unlinkResp.cause(), unlinkResp.result()});
} else {
LOGGER.debug("func SysUserBuriedService.redisUnlinkDelete has end!! ");
}
}
}
As the code above shows, to use jOOQ we instantiate the DAO in the start method, which gives us access to the corresponding DAO methods, as below:
@Override
public void start() {
...
subDao = new SysUserBuriedDao(JOOQUtil.getInstance().getConfiguration(), vertx);
...
}
The details closely mirror the Redis setup, so I won't repeat them; all that remains is to invoke cron2SaveRedisAccess from a timer. Remember the timer introduced in the first chapter?
public static void main(String[] args) {
BootstrapConfig.setupAndDeploy(vtx -> {
LOGGER.info(" --------------- periodic timer started --------------- ");
SysUserBuriedService userAccess = new SysUserBuriedService();
vtx.setPeriodic(CommonConstants.CRON, id -> userAccess.cron2SaveRedisAccess());
});
}
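vtx.setPeriodic fires the handler repeatedly at the given interval on the event loop. For readers outside Vert.x, the same scheduling shape can be sketched with the JDK's ScheduledExecutorService (PeriodicSketch is an illustrative name, not project code):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Stdlib analogue of vtx.setPeriodic: run the task every intervalMs
// milliseconds until the returned executor is shut down.
class PeriodicSketch {
    static ScheduledExecutorService every(long intervalMs, Runnable task) {
        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
        ses.scheduleAtFixedRate(task, intervalMs, intervalMs, TimeUnit.MILLISECONDS);
        return ses;
    }
}
```

The main difference is threading: Vert.x runs the handler on the event loop, while the executor uses its own thread, so shared state needs care outside Vert.x.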
It simply calls the cron2SaveRedisAccess method. cron2SaveRedisAccess assembles the argument list and calls the Redis client to retrieve a page of scanned data, as shown below:
public void cron2SaveRedisAccess() {
LOGGER.debug("func SysUserBuriedService.cron2SaveRedisAccess has begin!! ");
paramList = new ArrayList<>();
paramList.add(blockNum);
paramList.add("COUNT");
paramList.add(String.valueOf(batchNum));
redis.scan(paramList, this::redisScanToGet);
}
The scan call comes with a callback; for readability it has been split out into a separate method, redisScanToGet, shown below:
private void redisScanToGet(AsyncResult<Response> scanResp) {
if (scanResp.succeeded()) {
// ------------------------
// Grab the returned cursor
// ------------------------
blockNum = scanResp.result().get(0).toString();
// ----------------
// Grab the key set
// ----------------
Response keyArrResp = scanResp.result().get(1);
if (null != keyArrResp) {
paramList = keyArrResp.stream().map(Response::toString).collect(Collectors.toList());
// -------------------------------------
// Fetch multiple keys at once with MGET
// -------------------------------------
redis.mget(paramList, this::databaseToInsert);
}
} else {
LOGGER.error("func[SysUserBuriedService.redisScanToGet] Exception [{} - {}]",
new Object[] {scanResp.cause(), scanResp.result()});
}
}
The method checks the result passed to the callback. If the callback reports success, scanResp.result() yields the new cursor first, then the Response holding the key set. If that Response is non-null, it is streamed into a List, which is passed to Redis's mget to fetch the corresponding values. mget likewise has a callback, which is split out into the databaseToInsert method, shown below:
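For context, SCAN is cursor-based: each call returns a follow-up cursor plus a batch of keys, and iteration is finished only when the server returns cursor "0", which is why the code keeps the cursor in the blockNum field between timer ticks. The contract can be modeled in plain Java (ScanLoop and its types are illustrative, not the project's code):

```java
import java.util.ArrayList;
import java.util.List;

// Model of the SCAN contract: keep asking for the next page, feeding the
// returned cursor back in, until the server replies with cursor "0".
class ScanLoop {
    // One page of a scan: the follow-up cursor plus the keys in this batch
    record Page(String cursor, List<String> keys) {}

    interface Scanner {
        Page scan(String cursor, int count);
    }

    static List<String> scanAll(Scanner scanner, int count) {
        List<String> all = new ArrayList<>();
        String cursor = "0";
        do {
            Page page = scanner.scan(cursor, count);
            all.addAll(page.keys());
            cursor = page.cursor();
        } while (!"0".equals(cursor));
        return all;
    }
}
```

Note that COUNT is only a hint to Redis, so a batch may contain more or fewer keys than requested.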
private void databaseToInsert(AsyncResult<Response> mgetResp) {
// ------------------------------
// After the async fetch succeeds
// ------------------------------
if (mgetResp.succeeded()) {
Response mgetArrResp = mgetResp.result();
if (null != mgetArrResp) {
List<SysUserBuried> puaList = mgetArrResp.stream()
.map(mget -> new SysUserBuried(new JsonObject(mget.toString()))).collect(Collectors.toList());
// -----------------------------------------------
// Batch-insert the fetched list into the database
// -----------------------------------------------
if (null != puaList && !puaList.isEmpty()) {
subDao.insert(puaList, true).whenCompleteAsync((opt, e) -> {
if (null == e) {
// --------------------------------------------------
// Use UNLINK to remove the processed data from Redis
// --------------------------------------------------
redis.unlink(paramList, this::redisUnlinkDelete);
} else {
LOGGER.error(
"func[SysUserBuriedService.databaseToInsert mysql inserted] Exception [{} - {}] stackTrace[{}] ",
new Object[] {e.getCause(), e.getMessage(), Arrays.deepToString(e.getStackTrace())});
}
});
}
}
}
}
Likewise, once the mget result arrives and is confirmed successful, the Response collection is streamed into a list of SysUserBuried objects ready to be saved to the database. Since the jOOQ DAO was already instantiated in the start method, we can call
subDao.insert(puaList, true).whenCompleteAsync((opt, e)
directly to batch-insert the data. jOOQ here supports asynchronous processing via CompletableFuture, as this dependency shows:
...
<dependency>
<groupId>io.github.jklingsporn</groupId>
<artifactId>vertx-jooq-completablefuture-jdbc</artifactId>
<version>${jooq.rx.version}</version>
</dependency>
...
so whenCompleteAsync serves as the callback. Once the insert succeeds, the Redis keys are removed with UNLINK and the cycle is complete. The redisUnlinkDelete method that follows merely logs the callback outcome and performs no substantive work.
With that, both the jOOQ database operations and the Redis operations have been demonstrated in full, and this simple demo now closes the data loop. I'll write up more Vert.x material when I find the time (even though it has already taken shape).
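The (result, throwable) pair handed to whenCompleteAsync carries exactly one non-null member, which is why the code above branches on e == null. A small stdlib illustration (WhenCompleteDemo is a made-up name; whenComplete is used instead of whenCompleteAsync so the outcome is observable synchronously):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicReference;

// The DAO callback above receives (result, throwable); exactly one of the two
// is null. whenComplete (the synchronous variant) is used here so the outcome
// can be read back immediately for an already-completed future.
class WhenCompleteDemo {
    static String describe(CompletableFuture<String> cf) {
        AtomicReference<String> out = new AtomicReference<>();
        cf.whenComplete((result, e) -> out.set(e == null ? "ok:" + result : "err:" + e.getMessage()));
        return out.get();
    }
}
```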