Enhanced Logical Backup and Restore: A New Feature in MySQL 8.0

Introduction

Several tools are commonly used for logical backup and restore of MySQL databases and tables:


1. mysqldump: MySQL's native logical backup and restore tool. It supports backing up and restoring a whole instance, a single database, or a single table, and works well for exports in the 1-10 GB range. Because it is single-threaded, however, backup and restore are relatively slow.


2. mysqlpump: a logical backup and restore tool introduced in MySQL 5.7 that can be considered an enhanced mysqldump. It supports multi-threaded export, but import is still single-threaded, so loading remains slow.


3. mydumper/myloader: an open-source logical backup and restore tool that supports multi-threaded export and import and is fairly fast. Because it is not an official tool, it is rarely used in production environments.


MySQL 8.0 introduced MySQL Shell Utilities, which include a new set of logical backup and restore tools supporting multi-threaded export and import. Compared with mysqldump and mysqlpump, they largely solve the problem of slow data exports and imports.


MySQL Shell Utilities

MySQL Shell Utilities is the official management tool set shipped with MySQL 8.0. It includes the Upgrade Checker Utility, JSON Import Utility, Table Export Utility, Parallel Table Import Utility, Instance Dump Utility, Schema Dump Utility, Table Dump Utility, Dump Loading Utility, and more, and supports logical backup and restore at the instance, schema, and table level.


First, download the MySQL Shell package from the official site:

https://downloads.mysql.com/archives/get/p/43/file/mysql-shell-8.0.33-1.el7.x86_64.rpm


Install the package on the Linux server:

yum localinstall mysql-shell-8.0.33-1.el7.x86_64.rpm -y  


After installation, log in to the local MySQL instance with mysqlsh:

[root@node223 mysql_install_pkg]# mysqlsh root@localhost:3306

After entering the password, you will see the following prompt:

MySQL  localhost:3306 ssl  JS >  


Create a test user:

mysql> create user shukuinfo@'%' identified with mysql_native_password by 'Shukuinfo123.';

Query OK, 0 rows affected (0.08 sec)


mysql> grant all privileges on *.* to shukuinfo@'%';

Query OK, 0 rows affected (0.04 sec)


Generate test data in the target database with the following command:

[root@node234 ~]#  sysbench /usr/share/sysbench/oltp_read_write.lua  --mysql-db=sbtest --mysql-user=shukuinfo --mysql-password=Shukuinfo123. --mysql-host=172.16.1.223 --mysql-port=3306  --table_size=100000  --tables=1 --threads=5 --events=50 --report-interval=1 --time=100 prepare  

This creates one test table and inserts 100,000 rows.


The following sections walk through the most commonly used utilities:

Table Export Utility

The Table Export Utility exports the data of a single table to a file; its counterpart, the Parallel Table Import Utility, can load that file back with multiple threads.

Note: the util object is only available in MySQL Shell's JS and Python modes; these commands cannot be run from SQL mode!


1. Export a single table:

Use util.exportTable() to export table data. By default the output is a tab-separated .txt file and does not include the table's DDL.

MySQL  localhost:3306  sbtest  JS > util.exportTable("sbtest.sbtest1","/root/dump/sbtest1.txt")

Initializing - done  

Gathering information - done  

Running data dump using 1 thread.

NOTE: Progress information uses estimated values and may not be accurate.

Starting data dump

101% (1000.00K rows / ~986.68K rows), 205.93K rows/s, 41.90 MB/s

Dump duration: 00:00:04s                                        

Total duration: 00:00:05s                                        

Data size: 193.89 MB                                            

Rows written: 1000000                                            

Bytes written: 193.89 MB                                        

Average throughput: 40.20 MB/s                                  


The dump can be loaded using:                                    

util.importTable("/root/dump/sbtest1.txt", {                    

   "characterSet": "utf8mb4",

   "schema": "sbtest",

   "table": "sbtest1"

})
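Besides the default tab-separated output, util.exportTable() also accepts output options such as dialect and compression; a sketch with illustrative path and values:

```js
// Export the same table as CSV, compressed on the fly with zstd
// (path and option values here are examples, not from the run above)
util.exportTable("sbtest.sbtest1", "/root/dump/sbtest1.csv.zst", {
    dialect: "csv",        // field/line format: default, csv, tsv, json, csv-unix
    compression: "zstd"    // none, gzip, or zstd
});
```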


Check the sessions on the database instance:

mysql> show full processlist;

171 root localhost:38614 NULL Query 5 executing SELECT SQL_NO_CACHE `id`,`k`,`c`,`pad` FROM `sbtest`.`sbtest1` ORDER BY `id` /* mysqlsh exportTable, dumping table `sbtest`.`sbtest1`, ID: whole table */


Inspect the exported data file:

[root@node223 dump]# tail sbtest1.txt  

999991 499926 48078400383-12437887379-05894846454-75431009730-43080345476-02156273531-11894487827-60773491204-81412444521-71178534764 80773871607-05666602654-92153175922-11639912217-68443483988

999992 574428 55798820788-64368227843-83930711931-75349649463-77481268670-92491399128-82755218472-10621103203-00337603974-88271784083 64445292191-55875703576-91916319139-35871876628-78905835320

999993 498523 31219540378-11117380521-63608278947-80878894462-62784565504-17171594760-59022510004-50276760630-69939737377-75984194755 90722580239-40139525614-74857329507-39255872805-87703425552

999994 500788 60088412733-66150161149-07679095523-93886097494-91085414423-37118101038-26687369933-39904039761-78397237090-88254975738 93579389646-16085583267-20689570549-96386736537-34098043860

999995 500811 61832498932-55639632152-75544792564-62962604113-51096999502-34375763358-16451904213-95024387564-36278547902-12168935797 87450703091-82521663665-74045694374-39248396157-89176450872

999996 501980 52749476711-51613131417-77229212961-74438037859-10531801216-00271136074-84566349858-87716479740-52528401366-75613716171 77741398916-31799130998-58627342379-28460887117-72371774996

999997 499975 23657049288-48467973021-44660863470-59852828073-49772289789-74992497687-86674747743-15984901956-97255179072-94276331108 90781100798-03422418782-38116505671-12299750678-29786053834

999998 509677 49579973118-89724303849-37917968823-05271741940-45125504150-56396698765-32949762304-97814218349-83232223974-76559385635 13054147720-91554662988-20646548050-88947240921-95366815262

999999 502178 17392917361-24051465506-10082568034-19440905307-91386383920-39500528342-19392204488-45200444455-38737392176-53065931451 35811861722-43646585166-92378684091-40505149902-00867953918

1000000 501619 41679588070-34331296496-01617870265-96191726321-02895424673-16024077576-86810569426-58365487466-59806723793-23424797814 65381170039-74019958543-26277800225-05018690950-63577292239



2. Parallel import of a single table:

The Parallel Table Import Utility is used together with the Table Export Utility and supports multi-threaded import of a single table.

Before running the command, the local_infile system variable must be set to ON:

mysql> show variables like "%local_infile%";

+---------------+-------+

| Variable_name | Value |

+---------------+-------+

| local_infile  | OFF   |

+---------------+-------+

1 row in set (0.06 sec)


mysql> set global local_infile=on;

Query OK, 0 rows affected (0.00 sec)


mysql> show variables like "%local_infile%";

+---------------+-------+

| Variable_name | Value |

+---------------+-------+

| local_infile  | ON    |

+---------------+-------+

1 row in set (0.01 sec)
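Note that SET GLOBAL does not survive a server restart; on MySQL 8.0 the setting can be persisted instead, for example from the mysqlsh JS prompt:

```js
// From mysqlsh JS mode: persist local_infile so it survives a restart
// (SET PERSIST writes the value to mysqld-auto.cnf)
session.runSql("SET PERSIST local_infile = ON;");
```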


To avoid duplicate data, truncate the table before importing:

MySQL  localhost:3306  JS > \sql

Switching to SQL mode... Commands end with ;

Fetching global names for auto-completion... Press ^C to stop.

MySQL  localhost:3306  SQL > use sbtest;

Default schema set to `sbtest`.

Fetching global names, object names from `sbtest` for auto-completion... Press ^C to stop.

MySQL  localhost:3306  sbtest  SQL > truncate table sbtest1;

Query OK, 0 rows affected (0.8715 sec)

MySQL  localhost:3306  sbtest  SQL > select * from sbtest1;

Empty set (0.0054 sec)


Load the exported data back into the database with util.importTable():

MySQL  localhost:3306  sbtest  JS > util.importTable("/root/dump/sbtest1.txt",{"schema":"sbtest","table":"sbtest1","threads":8})

Importing from file '/root/dump/sbtest1.txt' to table `sbtest`.`sbtest1` in MySQL Server at localhost:3306 using 4 threads

[Worker000] sbtest1.txt: Records: 226231  Deleted: 0  Skipped: 0  Warnings: 0

[Worker001] sbtest1.txt: Records: 257732  Deleted: 0  Skipped: 0  Warnings: 0

[Worker002] sbtest1.txt: Records: 257732  Deleted: 0  Skipped: 0  Warnings: 0

[Worker003] sbtest1.txt: Records: 258305  Deleted: 0  Skipped: 0  Warnings: 0

100% (193.89 MB / 193.89 MB), 0.00 B/s

File '/root/dump/sbtest1.txt' (193.89 MB) was imported in 35.4929 sec at 5.46 MB/s

Total rows affected in sbtest.sbtest1: Records: 1000000  Deleted: 0  Skipped: 0  Warnings: 0
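importTable() accepts further tuning options beyond schema, table, and threads; the values below are illustrative, not taken from the run above:

```js
// Additional importTable options (values are examples)
util.importTable("/root/dump/sbtest1.txt", {
    schema: "sbtest",
    table: "sbtest1",
    threads: 8,
    bytesPerChunk: "100M",    // size of each chunk handed to a worker thread
    replaceDuplicates: true   // REPLACE rows on duplicate keys instead of erroring
});
```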



Check the sessions on the database instance:

[root@node223 dump]# mysql -e "show full processlist"

| 20 | root            | localhost:45398 | sbtest | Query   |   13 | executing              | LOAD DATA LOCAL INFILE '/root/dump/sbtest1.txt' INTO TABLE `sbtest`.`sbtest1` FIELDS TERMINATED BY ' ' ESCAPED BY '\\' LINES STARTING BY '' TERMINATED BY '\n' |

| 21 | root            | localhost:45400 | sbtest | Query   |   13 | executing              | LOAD DATA LOCAL INFILE '/root/dump/sbtest1.txt' INTO TABLE `sbtest`.`sbtest1` FIELDS TERMINATED BY ' ' ESCAPED BY '\\' LINES STARTING BY '' TERMINATED BY '\n' |

| 22 | root            | localhost:45402 | sbtest | Query   |   13 | executing              | LOAD DATA LOCAL INFILE '/root/dump/sbtest1.txt' INTO TABLE `sbtest`.`sbtest1` FIELDS TERMINATED BY ' ' ESCAPED BY '\\' LINES STARTING BY '' TERMINATED BY '\n' |

| 23 | root            | localhost:45404 | sbtest | Query   |   13 | executing              | LOAD DATA LOCAL INFILE '/root/dump/sbtest1.txt' INTO TABLE `sbtest`.`sbtest1` FIELDS TERMINATED BY ' ' ESCAPED BY '\\' LINES STARTING BY '' TERMINATED BY '\n' |



Instance Dump Utility

The Instance Dump Utility supports multi-threaded export of an entire instance.

Use util.dumpInstance() to back up all data (the information_schema, mysql, performance_schema, and sys system schemas are excluded by default).


Create the backup directory:

[root@node223 ~]# mkdir -p /root/dumpinstance

[root@node223 dumpinstance]# cd /root/dumpinstance/

[root@node223 dumpinstance]# pwd

/root/dumpinstance

MySQL  localhost:3306  sbtest  JS > util.dumpInstance("/root/dumpinstance",{"threads":8})

Acquiring global read lock

Global read lock acquired

Initializing - done  

4 out of 8 schemas will be dumped and within them 20 tables, 0 views.

5 out of 8 users will be dumped.

Gathering information - done  

All transactions have been started

Locking instance for backup

Global read lock has been released

Writing global DDL files

Writing users DDL

Running data dump using 8 threads.

NOTE: Progress information uses estimated values and may not be accurate.

Writing schema metadata - done        

NOTE: Table statistics not available for `test`.`t`, chunking operation may be not optimal. Please consider running 'ANALYZE TABLE `test`.`t`;' first.

Writing DDL - done          

Writing table metadata - done          

Starting data dump

106% (4.92M rows / ~4.64M rows), 429.43K rows/s, 22.98 MB/s uncompressed, 7.84 MB/s compressed                      

Dump duration: 00:00:12s                                                                      

Total duration: 00:00:13s                                                                      

Schemas dumped: 4                                                                              

Tables dumped: 20                                                                              

Uncompressed data size: 342.39 MB                                                              

Compressed data size: 125.40 MB                                                                

Compression ratio: 2.7                                                                        

Rows written: 4919069                                                                          

Bytes written: 125.40 MB                                                                      

Average uncompressed throughput: 26.43 MB/s                                                    

Average compressed throughput: 9.68 MB/s  
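dumpInstance() also accepts filtering and preview options; for example, a dry run that excludes one schema (option values here are illustrative):

```js
// Preview what would be dumped without writing any files,
// excluding the `test` schema (values are examples)
util.dumpInstance("/root/dumpinstance", {
    threads: 8,
    dryRun: true,              // print the plan, write nothing
    excludeSchemas: ["test"]   // skip selected schemas
});
```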


Check the sessions on the database instance:

[root@node223 ~]# mysql -e "show processlist"

| 38 | root            | localhost:45426 | test      | Query   |    4 | executing              | SELECT SQL_NO_CACHE `id`,`k`,`c`,`pad` FROM `sbtest`.`sbtest1` WHERE (`id` BETWEEN 333334 AND 666666 |

| 40 | root            | localhost:45430 | employees | Query   |    4 | executing              | SELECT SQL_NO_CACHE `emp_no`,`title`,`from_date`,`to_date` FROM `employees`.`titles` WHERE (`emp_no` |

| 41 | root            | localhost:45432 | employees | Query   |    4 | executing              | SELECT SQL_NO_CACHE `emp_no`,`salary`,`from_date`,`to_date` FROM `employees`.`salaries` WHERE (`emp_ |

| 43 | root            | localhost:45436 | employees | Query   |    4 | Sending to client      | SELECT SQL_NO_CACHE `emp_no`,`birth_date`,`first_name`,`last_name`,`gender`,`hire_date` FROM `employ |

| 44 | root            | localhost:45438 | employees | Query   |    4 | executing              | SELECT SQL_NO_CACHE `id`,`k`,`c`,`pad` FROM `sbtest`.`sbtest1` WHERE (`id` BETWEEN 666667 AND 999999 |

| 45 | root            | localhost:45440 | employees | Query   |    4 | executing              | SELECT SQL_NO_CACHE `id`,`k`,`c`,`pad` FROM `sbtest`.`sbtest1` WHERE (`id` BETWEEN 1 AND 333333) ORD |

| 48 | root            | localhost       | NULL      | Query   |    0 | init                   | show processlist    



List the exported files:

[root@node223 dumpinstance]# pwd

/root/dumpinstance

[root@node223 dumpinstance]# ls

@.done.json                            sbtest@sbtest1@@3.tsv.zst.idx  test@shukuinfo@0.tsv.zst.idx

employees@departments@@0.tsv.zst       sbtest@sbtest1.json            test@shukuinfo@@1.tsv.zst

employees@departments@@0.tsv.zst.idx   sbtest@sbtest1.sql             test@shukuinfo@@1.tsv.zst.idx

employees@departments.json             sbtest.sql                     test@shukuinfo.json

employees@departments.sql              skinfo.json                    test@shukuinfo.sql

employees@dept_emp@@0.tsv.zst          skinfo.sql                     test.sql

employees@dept_emp@@0.tsv.zst.idx      skinfo@t@@0.tsv.zst            test@student@@0.tsv.zst

employees@dept_emp.json                skinfo@t@@0.tsv.zst.idx        test@student@@0.tsv.zst.idx

employees@dept_emp.sql                 skinfo@t.json                  test@student.json

employees@dept_manager@@0.tsv.zst      skinfo@t.sql                   test@student.sql




Schema Dump Utility

The Schema Dump Utility supports multi-threaded export at the schema (database) level.

Use util.dumpSchemas() to back up the specified databases:

 MySQL  localhost:3306  sbtest  JS > util.dumpSchemas(["sbtest"],"/root/dump_schema",{"threads":8})

Acquiring global read lock

Global read lock acquired

Initializing - done  

1 schemas will be dumped and within them 1 table, 0 views.

Gathering information - done  

All transactions have been started

Locking instance for backup

Global read lock has been released

Writing global DDL files

Running data dump using 8 threads.

NOTE: Progress information uses estimated values and may not be accurate.

Writing schema metadata - done        

Writing DDL - done        

Writing table metadata - done        

Starting data dump

101% (1000.00K rows / ~986.68K rows), 311.24K rows/s, 62.25 MB/s uncompressed, 28.29 MB/s compressed                

Dump duration: 00:00:03s                                                                            

Total duration: 00:00:03s                                                                            

Schemas dumped: 1                                                                                    

Tables dumped: 1                                                                                    

Uncompressed data size: 193.89 MB                                                                    

Compressed data size: 88.15 MB                                                                      

Compression ratio: 2.2                                                                              

Rows written: 1000000                                                                                

Bytes written: 88.15 MB                                                                              

Average uncompressed throughput: 57.58 MB/s                                                          

Average compressed throughput: 26.18 MB/s  


Check the sessions on the database instance:

[root@node223 dump_schema]# mysql -e "show full processlist"

| 67 | root            | localhost:45464 | NULL   | Query   |    3 | executing              | SELECT SQL_NO_CACHE `id`,`k`,`c`,`pad` FROM `sbtest`.`sbtest1` WHERE (`id` BETWEEN 666667 AND 999999) ORDER BY `id` /* mysqlsh dumpSchemas, dumping table `sbtest`.`sbtest1`, ID: chunk 2 */ |

| 70 | root            | localhost:45470 | NULL   | Query   |    3 | Sending to client      | SELECT SQL_NO_CACHE `id`,`k`,`c`,`pad` FROM `sbtest`.`sbtest1` WHERE (`id` BETWEEN 333334 AND 666666) ORDER BY `id` /* mysqlsh dumpSchemas, dumping table `sbtest`.`sbtest1`, ID: chunk 1 */ |



List the exported files:

[root@node223 dump_schema]# pwd

/root/dump_schema

[root@node223 dump_schema]# ls

@.done.json  sbtest@sbtest1@0.tsv.zst      sbtest@sbtest1@2.tsv.zst       sbtest@sbtest1.json

@.json       sbtest@sbtest1@0.tsv.zst.idx  sbtest@sbtest1@2.tsv.zst.idx   sbtest@sbtest1.sql

@.post.sql   sbtest@sbtest1@1.tsv.zst      sbtest@sbtest1@@3.tsv.zst      sbtest.sql

sbtest.json  sbtest@sbtest1@1.tsv.zst.idx  sbtest@sbtest1@@3.tsv.zst.idx  @.sql



Table Dump Utility

The Table Dump Utility supports multi-threaded export at the table level.

Use util.dumpTables() to back up specific tables.

The sbtest database now contains three tables; we back up only two of them:

mysql> use sbtest;

Database changed

mysql> show tables;

+------------------+

| Tables_in_sbtest |

+------------------+

| sbtest1          |

| sbtest2          |

| sbtest3          |

+------------------+

3 rows in set (0.01 sec)


Create the backup directory:

[root@node223 ~]# mkdir -p dumptable

[root@node223 ~]# cd dumptable/

[root@node223 dumptable]# pwd

/root/dumptable

Run the dump command:

MySQL  localhost:3306  sbtest  JS > util.dumpTables("sbtest",["sbtest1","sbtest2"],"/root/dumptable",{"threads":8})

Acquiring global read lock

Global read lock acquired

Initializing - done  

2 tables and 0 views will be dumped.

Gathering information - done  

All transactions have been started

Locking instance for backup

Global read lock has been released

Writing global DDL files

Running data dump using 8 threads.

NOTE: Progress information uses estimated values and may not be accurate.

Writing schema metadata - done        

Writing DDL - done        

Writing table metadata - done        

Starting data dump

101% (1000.00K rows / ~986.68K rows), 216.96K rows/s, 45.21 MB/s uncompressed, 20.56 MB/s compressed                

Dump duration: 00:00:04s                                                                            

Total duration: 00:00:05s                                                                            

Schemas dumped: 1                                                                                    

Tables dumped: 2                                                                                    

Uncompressed data size: 193.89 MB                                                                    

Compressed data size: 88.15 MB                                                                      

Compression ratio: 2.2                                                                              

Rows written: 1000000                                                                                

Bytes written: 88.15 MB                                                                              

Average uncompressed throughput: 40.41 MB/s                                                          

Average compressed throughput: 18.37 MB/s  


Check the sessions on the database instance:

[root@node223 dump_schema]# mysql -e "show full processlist"

| 78 | root            | localhost:45480 | NULL   | Query   |    4 | executing              | SELECT SQL_NO_CACHE `id`,`k`,`c`,`pad` FROM `sbtest`.`sbtest1` WHERE (`id` BETWEEN 333334 AND 666666) ORDER BY `id` /* mysqlsh dumpTables, dumping table `sbtest`.`sbtest1`, ID: chunk 1 */ |

| 79 | root            | localhost:45482 | NULL   | Query   |    4 | executing              | SELECT SQL_NO_CACHE `id`,`k`,`c`,`pad` FROM `sbtest`.`sbtest1` WHERE (`id` BETWEEN 666667 AND 999999) ORDER BY `id` /* mysqlsh dumpTables, dumping table `sbtest`.`sbtest1`, ID: chunk 2 */ |

| 82 | root            | localhost:45488 | NULL   | Query   |    4 | executing              | SELECT SQL_NO_CACHE `id`,`k`,`c`,`pad` FROM `sbtest`.`sbtest1` WHERE (`id` BETWEEN 1 AND 333333) ORDER BY `id` /* mysqlsh dumpTables, dumping table `sbtest`.`sbtest1`, ID: chunk 0 */      |


List the exported files:

[root@node223 dumptable]# ls

@.done.json               sbtest@sbtest1@0.tsv.zst.idx  sbtest@sbtest1@@3.tsv.zst      sbtest@sbtest2@@0.tsv.zst.idx

@.json                    sbtest@sbtest1@1.tsv.zst      sbtest@sbtest1@@3.tsv.zst.idx  sbtest@sbtest2.json

@.post.sql                sbtest@sbtest1@1.tsv.zst.idx  sbtest@sbtest1.json            sbtest@sbtest2.sql

sbtest.json               sbtest@sbtest1@2.tsv.zst      sbtest@sbtest1.sql             sbtest.sql

sbtest@sbtest1@0.tsv.zst  sbtest@sbtest1@2.tsv.zst.idx  sbtest@sbtest2@@0.tsv.zst      @.sql



Dump Loading Utility

The Dump Loading Utility is used together with the Instance Dump Utility, Schema Dump Utility, and Table Dump Utility, and supports multi-threaded import at the instance, schema, and table level.


Restore the entire instance

MySQL  localhost:3306  sbtest  JS > util.loadDump("/root/dumpinstance",{"threads":8})

Loading DDL and Data from '/root/dumpinstance' using 8 threads.

Opening dump...

Target is MySQL 8.0.33. Dump was produced from MySQL 8.0.33

Scanning metadata - done        

Checking for pre-existing objects...

ERROR: Schema `employees` already contains a table named departments

ERROR: Schema `employees` already contains a table named dept_emp

ERROR: Schema `employees` already contains a table named dept_manager

ERROR: Schema `employees` already contains a table named employees

ERROR: Schema `employees` already contains a table named salaries

ERROR: Schema `employees` already contains a table named titles

ERROR: Schema `skinfo` already contains a table named t

ERROR: Schema `test` already contains a table named course

ERROR: Schema `test` already contains a table named dept

ERROR: Schema `test` already contains a table named emp

ERROR: Schema `test` already contains a table named numbers

ERROR: Schema `test` already contains a table named score

ERROR: Schema `test` already contains a table named shukuinfo

ERROR: Schema `test` already contains a table named student

ERROR: Schema `test` already contains a table named t

ERROR: Schema `test` already contains a table named t1

ERROR: Schema `test` already contains a table named t2

ERROR: Schema `test` already contains a table named teacher

ERROR: Schema `test` already contains a table named test

ERROR: Schema `sbtest` already contains a table named sbtest1

ERROR: One or more objects in the dump already exist in the destination database. You must either DROP these objects or exclude them from the load.

Util.loadDump: While 'Scanning metadata': Duplicate objects found in destination database (MYSQLSH 53021)

Note:

To restore the full dump, all schemas other than information_schema, mysql, performance_schema, and sys must be dropped first; otherwise the load fails with the duplicate-object errors shown above.
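As an alternative to dropping the schemas first, loadDump() offers an ignoreExistingObjects option that skips the duplicate-object check; a sketch (use with care, since existing rows are not removed and matching rows are replaced):

```js
// Skip the pre-existing-object check instead of dropping schemas first
// (existing tables are kept; dumped rows are loaded into them)
util.loadDump("/root/dumpinstance", {
    threads: 8,
    ignoreExistingObjects: true
});
```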


Drop all non-system schemas:

mysql> show databases;

+--------------------+

| Database           |

+--------------------+

| employees          |

| information_schema |

| mysql              |

| performance_schema |

| sbtest             |

| skinfo             |

| sys                |

| test               |

+--------------------+

8 rows in set (0.03 sec)


mysql> drop database employees;

Query OK, 6 rows affected (0.49 sec)

mysql> drop database sbtest;

Query OK, 3 rows affected (0.31 sec)

mysql> drop database skinfo;

Query OK, 1 row affected (0.04 sec)

mysql> drop database test;

Query OK, 12 rows affected (0.51 sec)


mysql> show databases;

+--------------------+

| Database           |

+--------------------+

| information_schema |

| mysql              |

| performance_schema |

| sys                |

+--------------------+

4 rows in set (0.00 sec)


Run the full-instance import again:

MySQL  localhost:3306  sbtest  JS > util.loadDump("/root/dumpinstance",{"threads":8})

Loading DDL and Data from '/root/dumpinstance' using 8 threads.

Opening dump...

Target is MySQL 8.0.33. Dump was produced from MySQL 8.0.33

NOTE: Load progress file detected. Load will be resumed from where it was left, assuming no external updates were made.

You may enable the 'resetProgress' option to discard progress for this MySQL instance and force it to be completely reloaded.

Scanning metadata - done        

Executing common preamble SQL

Executing DDL - done          

Executing view DDL - done        

Starting data load

1 thds loading \ 100% (342.39 MB / 342.39 MB), 778.36 KB/s, 19 / 20 tables done

Executing common postamble SQL                                                  

Recreating indexes - done        

29 chunks (4.92M rows, 342.39 MB) for 20 tables in 4 schemas were loaded in 3 min 39 sec (avg throughput 1.69 MB/s)

0 warnings were reported during the load.
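The NOTE about the progress file in the output above corresponds to the resetProgress option; to discard the recorded progress and reload everything from scratch:

```js
// Force a complete reload, ignoring the previous load-progress file
util.loadDump("/root/dumpinstance", {
    threads: 8,
    resetProgress: true
});
```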


Check the sessions on the database instance:

[root@node223 dump_schema]# mysql -e "show full processlist"

| 106 | root            | localhost:45520 | employees | Query   |   37 | executing              | /* mysqlsh loadDump(), thread 7, table `employees`.`salaries`, chunk ID: 0 */ LOAD DATA LOCAL INFILE '/root/dumpinstance/employees@salaries@@0.tsv.zst' REPLACE INTO TABLE `employees`.`salaries` CHARACTER SET 'utf8mb4' FIELDS TERMINATED BY ' ' ESCAPED BY '\\' LINES STARTING BY '' TERMINATED BY '\n' (`emp_no`, `salary`, `from_date`, `to_date`) |




Restore a specific database:


First drop the sbtest database:

mysql> drop database sbtest;


Then run the single-schema import:

MySQL  localhost:3306  sbtest  JS > util.loadDump("/root/dump_schema",{includeSchemas: ["sbtest"]})

Loading DDL and Data from '/root/dump_schema' using 4 threads.

Opening dump...

Target is MySQL 8.0.33. Dump was produced from MySQL 8.0.33

NOTE: Load progress file detected. Load will be resumed from where it was left, assuming no external updates were made.

You may enable the 'resetProgress' option to discard progress for this MySQL instance and force it to be completely reloaded.

Scanning metadata - done        

Executing common preamble SQL

Executing DDL - done        

Executing view DDL - done        

Starting data load

3 thds loading / 100% (193.89 MB / 193.89 MB), 5.73 MB/s, 1 / 1 tables done

Recreating indexes - done        

Executing common postamble SQL                                              

4 chunks (1000.00K rows, 193.89 MB) for 1 tables in 1 schemas were loaded in 34 sec (avg throughput 7.92 MB/s)

0 warnings were reported during the load.  


Check the sessions on the database instance:

[root@node223 dump_schema]# mysql -e "show full processlist"

| 125 | root            | localhost:45554 | sbtest | Query   |   22 | executing              | /* mysqlsh loadDump(), thread 2, table `sbtest`.`sbtest1`, chunk ID: 1 */ LOAD DATA LOCAL INFILE '/root/dump_schema/sbtest@sbtest1@1.tsv.zst' REPLACE INTO TABLE `sbtest`.`sbtest1` CHARACTER SET 'utf8mb4' FIELDS TERMINATED BY ' ' ESCAPED BY '\\' LINES STARTING BY '' TERMINATED BY '\n' (`id`, `k`, `c`, `pad`) |

| 126 | root            | localhost:45556 | sbtest | Query   |   22 | executing              | /* mysqlsh loadDump(), thread 0, table `sbtest`.`sbtest1`, chunk ID: 2 */ LOAD DATA LOCAL INFILE '/root/dump_schema/sbtest@sbtest1@2.tsv.zst' REPLACE INTO TABLE `sbtest`.`sbtest1` CHARACTER SET 'utf8mb4' FIELDS TERMINATED BY ' ' ESCAPED BY '\\' LINES STARTING BY '' TERMINATED BY '\n' (`id`, `k`, `c`, `pad`) |

| 127 | root            | localhost:45558 | sbtest | Query   |   22 | executing              | /* mysqlsh loadDump(), thread 1, table `sbtest`.`sbtest1`, chunk ID: 0 */ LOAD DATA LOCAL INFILE '/root/dump_schema/sbtest@sbtest1@0.tsv.zst' REPLACE INTO TABLE `sbtest`.`sbtest1` CHARACTER SET 'utf8mb4' FIELDS TERMINATED BY ' ' ESCAPED BY '\\' LINES STARTING BY '' TERMINATED BY '\n' (`id`, `k`, `c`, `pad`) |



Restore specific tables

Before running the command, the old tables must be dropped, otherwise an error is reported, as shown here:

MySQL  localhost:3306  sbtest  JS > util.loadDump("/root/dumptable",{includeTables: ["sbtest.sbtest1","sbtest.sbtest2"]})

Loading DDL and Data from '/root/dumptable' using 4 threads.

Opening dump...

Target is MySQL 8.0.33. Dump was produced from MySQL 8.0.33

Scanning metadata - done        

Checking for pre-existing objects...

ERROR: Schema `sbtest` already contains a table named sbtest1

ERROR: One or more objects in the dump already exist in the destination database. You must either DROP these objects or exclude them from the load.

Util.loadDump: While 'Scanning metadata': Duplicate objects found in destination database (MYSQLSH 53021)


Drop the table:

mysql> use sbtest;

Database changed

mysql> show tables;

+------------------+

| Tables_in_sbtest |

+------------------+

| sbtest1          |

+------------------+

1 row in set (0.01 sec)


mysql> drop table sbtest1;

Query OK, 0 rows affected (0.49 sec)


Run the table-level restore:

MySQL  localhost:3306  sbtest  JS > util.loadDump("/root/dumptable",{includeTables: ["sbtest.sbtest1","sbtest.sbtest2"]})

Loading DDL and Data from '/root/dumptable' using 4 threads.

Opening dump...

Target is MySQL 8.0.33. Dump was produced from MySQL 8.0.33

NOTE: Load progress file detected. Load will be resumed from where it was left, assuming no external updates were made.

You may enable the 'resetProgress' option to discard progress for this MySQL instance and force it to be completely reloaded.

Scanning metadata - done        

Executing common preamble SQL

Executing DDL - done        

Executing view DDL - done        

Starting data load

3 thds loading \ 100% (193.89 MB / 193.89 MB), 6.07 MB/s, 2 / 2 tables done

Recreating indexes - done        

Executing common postamble SQL                                              

5 chunks (1000.00K rows, 193.89 MB) for 2 tables in 1 schemas were loaded in 30 sec (avg throughput 7.90 MB/s)

0 warnings were reported during the load.  


Check the sessions on the database instance:

[root@node223 dump_schema]# mysql -e "show full processlist"

| 138 | root            | localhost:45578 | sbtest | Query   |    5 | executing              | /* mysqlsh loadDump(), thread 3, table `sbtest`.`sbtest1`, chunk ID: 1 */ LOAD DATA LOCAL INFILE '/root/dumptable/sbtest@sbtest1@1.tsv.zst' REPLACE INTO TABLE `sbtest`.`sbtest1` CHARACTER SET 'utf8mb4' FIELDS TERMINATED BY ' ' ESCAPED BY '\\' LINES STARTING BY '' TERMINATED BY '\n' (`id`, `k`, `c`, `pad`) |

| 139 | root            | localhost:45580 | sbtest | Query   |    5 | executing              | /* mysqlsh loadDump(), thread 1, table `sbtest`.`sbtest1`, chunk ID: 2 */ LOAD DATA LOCAL INFILE '/root/dumptable/sbtest@sbtest1@2.tsv.zst' REPLACE INTO TABLE `sbtest`.`sbtest1` CHARACTER SET 'utf8mb4' FIELDS TERMINATED BY ' ' ESCAPED BY '\\' LINES STARTING BY '' TERMINATED BY '\n' (`id`, `k`, `c`, `pad`) |

| 140 | root            | localhost:45574 | sbtest | Query   |    5 | executing              | /* mysqlsh loadDump(), thread 0, table `sbtest`.`sbtest1`, chunk ID: 0 */ LOAD DATA LOCAL INFILE '/root/dumptable/sbtest@sbtest1@0.tsv.zst' REPLACE INTO TABLE `sbtest`.`sbtest1` CHARACTER SET 'utf8mb4' FIELDS TERMINATED BY ' ' ESCAPED BY '\\' LINES STARTING BY '' TERMINATED BY '\n' (`id`, `k`, `c`, `pad`) |


Summary

Compared with the single-threaded mysqldump and mysqlpump, the MySQL Shell Utilities logical backup and restore tools are a great help when exporting or importing large tables (single tables over 10 GB).

That's the end of the article. If you found it useful for your work, please share it with your friends so we can learn and improve together.


Follow my WeChat public account 数库信息技术 (Shuku Info); your support keeps me writing.


You can find me on all major platforms:

————————————————————————————

WeChat official account: 数库信息技术

Modb: https://www.modb.pro/u/427810

Baijiahao: https://author.baidu.com/home/1780697309880431

CSDN: https://blog.csdn.net/rscpass

51CTO: https://blog.51cto.com/u_16068254

Cnblogs: https://www.cnblogs.com/shukuinfo

Zhihu: https://www.zhihu.com/people/shukuinfo

————————————————————————————