MySQL 5.7 binlog

/*!*/;
# at 15937710
# at 15937814
#170526 13:00:15 server id 1  end_log_pos 15938129 CRC32 0x06901892 	Table_map: `service`.`apply` mapped to number 108
# at 15938129
#170526 13:00:15 server id 1  end_log_pos 15938780 CRC32 0x9cf36fca 	Update_rows: table id 108 flags: STMT_END_F
### UPDATE `service`.`apply`
### WHERE
###   @1='l4cnjqu7tenbe'
###   @2='l4b4jkdo1hku1'
### SET
###   @1='l4cnjqu7tenbe'
###   @2='l4b4jkdo1hku1'

# at 15938780
#170526 13:00:15 server id 1  end_log_pos 15938811 CRC32 0xadd8095d 	Xid = 162625692
COMMIT/*!*/;
# at 16786556
# at 16786658
#170526 13:01:02 server id 1  end_log_pos 16786973 CRC32 0xe3c07d87 	Table_map: `service`.`apply` mapped to number 108
# at 16786973
#170526 13:01:02 server id 1  end_log_pos 16787632 CRC32 0xcea6eab7 	Update_rows: table id 108 flags: STMT_END_F
### UPDATE `service`.`apply`
### WHERE



Note the text highlighted in red.

Locking the primary with FLUSH TABLES WITH READ LOCK is the single most important step.


Available approaches

1. Single-table recovery in MySQL

http://www.cnblogs.com/billyxp/p/3460682.html

Filter out the offending statement, then resume replication.

2. Notes on recovering data after a MySQL database was dropped

http://blog.csdn.net/gzlaiyonghao/article/details/53340475

Recovery from the ibdata1 file.

3. Data recovery after a mistaken MySQL operation (UPDATE/DELETE with a forgotten WHERE clause)

You need to be able to read the sed script.

Best reference: http://www.cnblogs.com/gomysql/p/3582058.html

4. Rolling back a mistaken UPDATE in MySQL

http://www.cnblogs.com/zhoujinyi/archive/2012/12/26/2834897.html

This is the best case study.

5. Case study: accidentally deleting a table with tens of millions of rows

http://dadaman.blog.51cto.com/11373912/1933137

The tool used in this case is good.


Still to be updated.


Recovery procedure (fairly tedious; results are imperfect in practice, and some data will be lost):

1. Lock the primary with FLUSH TABLES WITH READ LOCK.

If you do not lock the database, data keeps being updated afterwards; even if you restore to the failure point, you still have to track the subsequent changes, for example to a single column.

2. Copy the binlogs.

3. Restore to 12:00 (the point just before the failure).

4. Create a temp table in production.

5. Then update the status through application code.

6. Find the UPDATE statements executed after 12:00 and reapply them to the affected column, because the data kept changing in the meantime.


Concrete steps

1. Restore to the state just before the failure, e.g. 12:00: grep -B 15 'type'

2. Note that in ROW format the WHERE conditions are written as positional placeholders (@ followed by a column number rather than a column name), so search the output carefully.
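In ROW format the placeholders @1, @2, ... are column positions, not names. A small sketch of mapping them back to names (the column list here is hypothetical; in practice pull it from information_schema.columns ordered by ORDINAL_POSITION):

```python
# Map ROW-format placeholders (@1, @2, ...) back to column names.
# The column list below is a stand-in; in a real recovery you would query
# information_schema.columns for the table, ordered by ORDINAL_POSITION.
columns = ["id", "order_id", "type", "status", "updated_at"]  # hypothetical schema

placeholder_to_column = {f"@{i}": name for i, name in enumerate(columns, start=1)}

def rename(line):
    """Rewrite a decoded binlog line like '###   @3=...' with real column names."""
    # Replace longer placeholders first so @10 is not partially matched by @1.
    for ph, name in sorted(placeholder_to_column.items(),
                           key=lambda kv: -len(kv[0])):
        line = line.replace(ph + "=", name + "=")
    return line

print(rename("###   @3='xxx'"))  # -> ###   type='xxx'
```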


Commands

mysqlbinlog --base64-output=DECODE-ROWS -v --start-datetime="2017-05-25 23:30:00" --stop-datetime="2017-05-26 11:36:59" mysql-bin.xx|grep -B 15 'type'

or

mysqlbinlog --no-defaults -v -v --base64-output=DECODE-ROWS mysql-bin.xx|grep -B 15 'type'


Scripts used


First, here is what an UPDATE statement looks like in the binlog (MySQL 5.7):

/*!*/;
# at 15937710
# at 15937814
#170526 13:00:15 server id 1  end_log_pos 15938129 CRC32 0x06901892 	Table_map: `a`.`apply` mapped to number 108
# at 15938129
#170526 13:00:15 server id 1  end_log_pos 15938780 CRC32 0x9cf36fca 	Update_rows: table id 108 flags: STMT_END_F
### UPDATE `a`.`apply`
### WHERE
###   @1='l4cnjqu7tenbe'
###   @2='l4b4jkdo1hku1'
###   @3='xxx'
###   @4='151226'
###   @5='2017-05-26 11:29:42'
...... (more lines follow if the table has many columns)
### SET
###   @1='l4cnjqu7tenbe'
###   @2='l4b4jkdo1hku1'
###   @3='xxx'
###   @4='151226'
###   @5='2017-05-26 11:29:42'
.......
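In such an event, the WHERE block is the row's before-image and the SET block is the after-image. A sketch of parsing one decoded event into the two images (assumes the exact `###`-prefixed format shown above):

```python
import re

def parse_update_event(event_text):
    """Split a decoded Update_rows event into before (WHERE) and after (SET) images."""
    before, after = {}, {}
    current = None
    for line in event_text.splitlines():
        line = line.strip()
        if line == "### WHERE":
            current = before
        elif line == "### SET":
            current = after
        elif current is not None:
            m = re.match(r"###\s+(@\d+)=(.*)", line)
            if m:
                current[m.group(1)] = m.group(2)
    return before, after

event = """### UPDATE `service`.`apply`
### WHERE
###   @1='l4cnjqu7tenbe'
###   @2='l4b4jkdo1hku1'
### SET
###   @1='l4cnjqu7tenbe'
###   @2='changed'"""
before, after = parse_update_event(event)
print(before["@2"], "->", after["@2"])
```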



Python version

For a Python tool that analyzes MySQL binlogs, see http://wangwei007.blog.51cto.com/68019/1306940.


#!/usr/bin/env python
# coding: utf-8
# Extract UPDATE events that touch column @58 from a decoded binlog dump
# (output of mysqlbinlog --base64-output=DECODE-ROWS -v) and save them for review.
import re
import logging

logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s %(filename)s[line:%(lineno)d] %(levelname)s %(message)s',
                    datefmt='%a, %d %b %Y %H:%M:%S',
                    filename='myapp.log',
                    filemode='w')

listo = []
# Each decoded event is delimited by /*!*/ markers.
patterf = r"/\*!\*/[\w\W]*?/\*!\*/"
# The 58th column of the table appears as the placeholder @58.
patterdc = r"###   @58=[^\s]+"
fname = '/home/back/binlog/test1.sql'
dname = '/home/back/binlog/test2.sql'

with open(fname) as fp:
    content = fp.read()

db_key = re.findall(patterf, content)
for i in db_key:
    if 'UPDATE' in i and 'xx' in i and '@58' in i:
        # logging.info('%s' % i)
        dc = re.findall(patterdc, i)
        if dc:
            print(len(dc))
            listo.append(i)

print(len(listo))

with open(dname, 'w+') as f:
    for i in listo:
        f.write(i)
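The rollback idea behind case 4 above is to swap the two row images: the before-image (WHERE in the dump) becomes the new SET, and the after-image becomes the new WHERE. A minimal sketch, assuming the images were already parsed into dicts of column/value pairs (the table and values below are hypothetical):

```python
def build_rollback_update(table, before, after):
    """Build an UPDATE that restores the before-image, matching on the after-image.
    before/after map column names (or @N placeholders) to SQL literal strings."""
    set_clause = ", ".join(f"{col}={val}" for col, val in before.items())
    where_clause = " AND ".join(f"{col}={val}" for col, val in after.items())
    return f"UPDATE {table} SET {set_clause} WHERE {where_clause};"

before = {"status": "'pending'"}          # hypothetical parsed before-image
after = {"status": "'done'", "id": "42"}  # hypothetical parsed after-image
sql = build_rollback_update("`service`.`apply`", before, after)
print(sql)
```

Matching only on the after-image can touch more rows than intended if it is not unique, so in practice the WHERE should include the primary key.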



Shell script


# vim summarize_binlogs.sh
#!/bin/bash
BINLOG_FILE="mysqld-bin.000035"
START_TIME="2017-05-16 13:30:00"
STOP_TIME="2017-05-18 14:00:00"
# Note: the /#15.../ pattern below matches timestamps beginning with 15
# (i.e. year 2015); adjust it for the year of your binlogs.
mysqlbinlog --base64-output=decode-rows -vv --start-datetime="${START_TIME}" --stop-datetime="${STOP_TIME}" ${BINLOG_FILE} | awk \
'BEGIN {s_type=""; s_count=0; count=0; insert_count=0; update_count=0; delete_count=0; flag=0;} \
{if(match($0, /#15.*Table_map:.*mapped to number/)) {printf "Timestamp : " $1 " " $2 " Table : " $(NF-4); flag=1} \
else if (match($0, /(### INSERT INTO .*..*)/)) {count=count+1; insert_count=insert_count+1; s_type="INSERT"; s_count=s_count+1;} \
else if (match($0, /(### UPDATE .*..*)/)) {count=count+1; update_count=update_count+1; s_type="UPDATE"; s_count=s_count+1;} \
else if (match($0, /(### DELETE FROM .*..*)/)) {count=count+1; delete_count=delete_count+1; s_type="DELETE"; s_count=s_count+1;} \
else if (match($0, /^(# at) /) && flag==1 && s_count>0) {print " Query Type : " s_type " " s_count " row(s) affected"; s_type=""; s_count=0;} \
else if (match($0, /^(COMMIT)/)) {print "[Transaction total : " count " Insert(s) : " insert_count " Update(s) : " update_count " Delete(s) : " \
delete_count "] \n+----------------------+----------------------+----------------------+----------------------+"; \
count=0; insert_count=0; update_count=0; delete_count=0; s_type=""; s_count=0; flag=0} }'
:wq
# chmod u+x summarize_binlogs.sh
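The counting the awk script does can also be sketched in Python over decoded binlog text (simplified: it only counts the `###` statement headers, not per-transaction totals):

```python
from collections import Counter

def summarize(decoded_text):
    """Count INSERT/UPDATE/DELETE row events in mysqlbinlog --verbose output."""
    counts = Counter()
    for line in decoded_text.splitlines():
        if line.startswith("### INSERT INTO"):
            counts["INSERT"] += 1
        elif line.startswith("### UPDATE"):
            counts["UPDATE"] += 1
        elif line.startswith("### DELETE FROM"):
            counts["DELETE"] += 1
    return counts

sample = """### UPDATE `service`.`apply`
### WHERE
###   @1='a'
### SET
###   @1='b'
### DELETE FROM `service`.`old`
### WHERE
###   @1='c'"""
print(summarize(sample))
```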



Q1 : Which tables received highest number of insert/update/delete statements?

./summarize_binlogs.sh | grep Table |cut -d':' -f5| cut -d' ' -f2 | sort | uniq -c | sort -nr

      3 `sakila`.`payment_tmp`

      3 `sakila`.`country`

      2 `sakila`.`city`

      1 `sakila`.`address`


Q2 : Which table received the highest number of DELETE queries?

./summarize_binlogs.sh | grep -E 'DELETE' |cut -d':' -f5| cut -d' ' -f2 | sort | uniq -c | sort -nr

      2 `sakila`.`country`

      1 `sakila`.`payment_tmp`

      1 `sakila`.`city`

      1 `sakila`.`address`

Q3: How many insert/update/delete queries executed against sakila.country table?

./summarize_binlogs.sh | grep -i '`sakila`.`country`' | awk '{print $7 " " $11}' | sort -k1,2 | uniq -c

      2 `sakila`.`country` DELETE

      1 `sakila`.`country` INSERT


Q4: Give me the top 3 statements which affected maximum number of rows.

./summarize_binlogs.sh | grep Table | sort -nr -k 12 | head -n 3

Timestamp : #150116 13:42:13 Table : `sakila`.`payment_tmp` Query Type : INSERT 16049 row(s) affected

Timestamp : #150116 13:42:28 Table : `sakila`.`payment_tmp` Query Type : UPDATE 6890 row(s) affected

Timestamp : #150116 13:42:20 Table : `sakila`.`payment_tmp` Query Type : DELETE 5001 row(s) affected


Q5 : Find DELETE queries that affected more than 1000 rows.

./summarize_binlogs.sh | grep -E 'DELETE' | awk '{if($12>1000) print $0}'

Timestamp : #150116 13:42:20 Table : `sakila`.`payment_tmp` Query Type : DELETE 5001 row(s) affected

To get all queries that affected more than 1000 rows:


./summarize_binlogs.sh | grep -E 'Table' | awk '{if($12>1000) print $0}'

Timestamp : #150116 13:42:13 Table : `sakila`.`payment_tmp` Query Type : INSERT 16049 row(s) affected

Timestamp : #150116 13:42:20 Table : `sakila`.`payment_tmp` Query Type : DELETE 5001 row(s) affected

Timestamp : #150116 13:42:28 Table : `sakila`.`payment_tmp` Query Type : UPDATE 6890 row(s) affected




Reflections

1. Build a SQL review/audit platform.

2. Process: tie the status regression of the SVN hooks back to the scripts.