1. Project Background

As router-based internet access has become ubiquitous, more and more people go online through routers in all kinds of settings, especially public venues such as internet cafes, hotels, restaurants, inns, guesthouses, and bathhouses. The network security of these public venues has drawn growing attention from local cyber police departments, and a number of problems have become increasingly apparent.

How can the identity of people going online be verified?
How can routers help address the problem of crowd gathering?
How can the identity of people going online be traced?
How can public routers be used to obtain the movement trajectory of a designated person?
How can the content that people access online be captured?

Questions like these trouble cyber police departments everywhere. For this reason, our company offers customized routers for them: installing our routers at public locations designated by the police makes it possible to track every user's online activity. By tracking the MAC addresses captured by the router or sniffer devices, together with their latitude and longitude, the approximate position of each user can be located and the content they access can be observed, providing real-time monitoring of web content and geographic location as well as MAC-address tracking of devices. With sniffer devices, a device's route can be tracked in real time, supporting all kinds of customized tasks for local cyber police.

As long as a device is moving at less than 90 km/h with its Wi-Fi turned on, a sniffer device can capture its MAC address.

2. Project Data Processing Flow

[Flow diagram omitted. Overall pipeline: generated data files → Flume spooldir source → Kafka topic wifidata → Storm topology (KafkaSpout → WifiTypeBolt → WifiWarningBolt → WriteFileBolt) → local file merge → HDFS.]

3. Data Type Overview

1. Terminal MAC records:

YT1013 MAC address capture record table  "audit_mac_detail_" table  length 25
All captured MAC addresses are written to files of this type.



YT1013=iumac,idmac,area_code,policeid,sumac,sdmac,datasource,netsite_type,capture_time,netbar_wacode,brandid,cache_ssid,terminal_filed_strength,ssid_position,access_ap_mac,access_ap_channel,access_ap_encryption_type,collection_equipment_id,collection_equipment_longitude,collection_equipment_latitude,wxid,province_code,city_code,typename,security_software_orgcode

2. Virtual identity records

YT1020 virtual identity capture record table  virtual_detail table  length 22
All captured virtual identities are written to files of this type.

YT1020=mobile,iumac,idmac,area_code,policeid,netsite_type,sumac,account_type,soft_type,sdmac,netbar_wacode,sessionid,ip_address,account,capture_time,collection_equipment_id,wxid,province_code,city_code,datasource,typename,security_software_orgcode

3. Terminal online/offline records

YT1023 terminal online/offline record table  wifi_online table  length 51
All terminal online/offline records are written to files of this type.


YT1023=iumac,idmac,area_code,policeid,netsite_type,capture_time,sumac,sdmac,netbar_wacode,auth_type,auth_account,collection_equipment_id,datafrom,start_time,end_time,ip_address,src_ip,src_port_start,src_port_end,src_port_start_v6,src_port_end_v6,dst_ip,dst_ip_ipv6,dst_port,dst_port_v6,certificate_type,certificate_code,app_company_name,app_software_name,app_version,appid,src_ipv6,longitude,latitude,sessionid,terminal_fieldstregth,x_coordinate,y_coordinate,name,imsi,imei_esn_meid,os_name,brand,model,network_app,port,wxid,province_code,city_code,typename,security_software_orgcode

4. Search keyword records

YT1033 search keyword record table  searchkey_detail table  length 21
All captured search keywords are written to files of this type.



YT1033=iumac,idmac,area_code,policeid,netsite_type,capture_time,sumac,sdmac,netbar_wacode,src_ip,src_port,dst_ip,dst_port,search_groupid,http_domain,search_keyword,wxid,province_code,city_code,typename,security_software_orgcode

5. Web page access records

YT1034 web page access record table  webpage_detail table  length 24
All web page access records are written to files of this type.


YT1034=iumac,idmac,area_code,policeid,netsite_type,capture_time,sumac,sdmac,netbar_wacode,src_ip,src_port,dst_ip,dst_port,http_method,http_domain,http_action_match,web_url,http_categoryid,web_title,wxid,province_code,city_code,typename,security_software_orgcode
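
The five record types above are distinguished purely by how many fields a line contains once it is split on the "@zolen@" delimiter used in the raw records (see WifiTypeBolt in section 8). Below is a minimal sketch of that mapping in Java; the lengths and YT codes are taken from the tables above, while the class and method names are made up for illustration only.

// Sketch: map a raw line's field count to its YT record type.
// Field counts and YT codes come from the tables above; the class name is illustrative only.
public class RecordTypeResolver {

    /**
     * @param rawLine one record with its fields joined by "@zolen@"
     * @return the YT type code, or null for dirty data with an unknown field count
     */
    public static String resolve(String rawLine) {
        int length = rawLine.split("@zolen@").length;
        switch (length) {
        case 25: return "YT1013"; // terminal MAC capture record
        case 22: return "YT1020"; // virtual identity record
        case 51: return "YT1023"; // terminal online/offline record
        case 21: return "YT1033"; // search keyword record
        case 24: return "YT1034"; // web page access record
        default: return null;     // dirty data
        }
    }
}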

4. Generating Simulated Data

Create two directories:
mkdir -p /export/datas/destFile     # destination for copied files
mkdir -p /export/datas/sourceFile   # source files
See filegenerate.jar in the accompanying materials.
java -jar filegenerate.jar /export/datas/sourceFile/  /export/datas/destFile 1000

5. Creating the Kafka Topic

bin/kafka-topics.sh --create  --replication-factor 2 --topic wifidata --zookeeper node01:2181,node02:2181,node03:2181 --partitions 6

6. Creating the Maven Project and Adding Dependencies

<repositories>
        <repository>
            <id>cloudera</id>
            <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
        </repository>
    </repositories>
    <dependencies>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>2.6.0-mr1-cdh5.14.0</version>
            <exclusions>
                <exclusion>
                    <groupId>org.slf4j</groupId>
                    <artifactId>slf4j-log4j12</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.apache.storm</groupId>
            <artifactId>storm-core</artifactId>
            <version>1.1.1</version>
         <!--   <scope>provided</scope>-->
        </dependency>

        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
            <version>0.10.0.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.storm</groupId>
            <artifactId>storm-kafka-client</artifactId>
            <version>1.1.1</version>
        </dependency>
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>fastjson</artifactId>
            <version>1.2.41</version>
        </dependency>
        <dependency>
            <groupId>redis.clients</groupId>
            <artifactId>jedis</artifactId>
            <version>2.9.0</version>
        </dependency>
        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>1.2.17</version>
        </dependency>

        <dependency>
            <groupId>commons-io</groupId>
            <artifactId>commons-io</artifactId>
            <version>2.4</version>
        </dependency>
        <dependency>
            <groupId>org.apache.commons</groupId>
            <artifactId>commons-lang3</artifactId>
            <version>3.4</version>
        </dependency>

    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.1</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                    <encoding>utf-8</encoding>
                </configuration>
            </plugin>

            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-shade-plugin</artifactId>
                <version>1.4</version>
                <configuration>
                    <createDependencyReducedPom>true</createDependencyReducedPom>
                </configuration>
                <executions>
                    <execution>
                        <phase>package</phase>
                        <goals>
                            <goal>shade</goal>
                        </goals>
                        <configuration>
                            <transformers>
                                <transformer
                                        implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
                                <transformer
                                        implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                                    <mainClass>cn.itcast.storm.demo1.stormToHdfs.MainTopology</mainClass>
                                </transformer>
                            </transformers>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>

7. Defining the Flume Configuration File and Starting Flume

Create the Flume agent configuration file wifi.conf:

# Name the source, channel, and sink
a1.sources = r1
a1.channels = c1
a1.sinks = k1
# Bind the source to its channel
a1.sources.r1.channels = c1
# Source collection strategy: watch a spooling directory
a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /export/datas/destFile
a1.sources.r1.deletePolicy = immediate
a1.sources.r1.ignorePattern = ^(.)*\\.tmp$
# Use a memory channel: all events are buffered in memory
a1.channels.c1.type = memory
# Use a Kafka sink and bind it to the channel it reads from
a1.sinks.k1.channel = c1
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.topic = wifidata
a1.sinks.k1.kafka.bootstrap.servers = node01:9092,node02:9092,node03:9092
a1.sinks.k1.kafka.flumeBatchSize = 20
a1.sinks.k1.kafka.producer.acks = 1

Start the Flume agent:

bin/flume-ng agent -n a1 -c conf -f /export/servers/apache-flume-1.8.0-bin/conf/wifi.conf -Dflume.root.logger=INFO,console
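
Before wiring up the Storm topology, it can be worth confirming that Flume is actually delivering records to the wifidata topic. Below is a minimal sketch of such a check using the kafka-clients dependency already declared in the pom; the consumer group id "wifidata-check" is an arbitrary name chosen here, and the broker list matches the Flume sink above.

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// Sketch: print records from the wifidata topic to verify the Flume -> Kafka path.
public class WifiDataTopicCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "node01:9092,node02:9092,node03:9092");
        props.put("group.id", "wifidata-check"); // throwaway consumer group, name is arbitrary
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("auto.offset.reset", "earliest"); // read from the beginning of the topic

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("wifidata"));
        try {
            for (int i = 0; i < 10; i++) { // poll a few times, then exit
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.value());
                }
            }
        } finally {
            consumer.close();
        }
    }
}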

8. Developing the WifiTypeBolt

This bolt filters out dirty records whose field count does not match any of the known record types.

public class WifiTypeBolt extends BaseBasicBolt {

    private File file;

    @Override
    public void prepare(Map stormConf, TopologyContext context) {
        // local file used to collect dirty records that match no known type
        file = new File("F:\\except.txt");
        super.prepare(stormConf, context);
    }

    @Override
    public void execute(Tuple input, BasicOutputCollector collector) {
        // field 4 of the KafkaSpout tuple is the record value
        String wifiDataLine = input.getValues().get(4).toString();
        if (StringUtils.isNotEmpty(wifiDataLine)) {
            String[] split = wifiDataLine.split("@zolen@");
            if (null != split && split.length > 0) {
                // After splitting, the field count identifies the record type (blacklisted user MAC,
                // phone number, device MAC, device latitude/longitude, and so on are carried in the fields).
                // Known types are forwarded with "@zolen@" replaced by "\001" as the delimiter.
                switch (split.length) {
                case 25:
                    // terminal MAC capture record
                    collector.emit(new Values(wifiDataLine.replace("@zolen@", "\001")));
                    break;
                case 22:
                    // virtual identity record
                    collector.emit(new Values(wifiDataLine.replace("@zolen@", "\001")));
                    break;
                case 51:
                    // terminal online/offline record
                    collector.emit(new Values(wifiDataLine.replace("@zolen@", "\001")));
                    break;
                case 21:
                    // search keyword record
                    collector.emit(new Values(wifiDataLine.replace("@zolen@", "\001")));
                    break;
                case 24:
                    // web page access record
                    collector.emit(new Values(wifiDataLine.replace("@zolen@", "\001")));
                    break;
                default:
                    // dirty data: save it to a file so the volume of bad records per device
                    // can be counted and the device serviced in time
                    FileOperateUtils.writeLine(file, wifiDataLine);
                    break;
                }
            }
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("dataLine"));
    }
}
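
The FileOperateUtils helper used for dirty records is not shown in this material. A minimal sketch of writeLine, assuming it simply appends one line to the given file, might look like the following; it uses the commons-io dependency already in the pom, and the implementation details are an assumption.

import java.io.File;
import java.io.IOException;
import org.apache.commons.io.FileUtils;

// Sketch of the writeLine helper called by WifiTypeBolt for dirty records.
// Assumption: it only appends the raw line plus a newline to the target file.
public class FileOperateUtils {
    public static void writeLine(File file, String line) {
        try {
            FileUtils.writeStringToFile(file, line + System.lineSeparator(), true); // true = append
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}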

9. Developing the WifiWarningBolt

This bolt raises real-time alerts against blacklists of MAC addresses, mobile numbers, and so on.

/**
 * Alerting bolt: each record is matched against the blacklists kept in Redis
 * to trigger real-time alerts.
 * @author admin
 */
public class WifiWarningBolt extends BaseBasicBolt {

    @Override
    public void execute(Tuple input, BasicOutputCollector collector) {
        String dataLine = input.getStringByField("dataLine");
        if (StringUtils.isNotBlank(dataLine)) {
            String[] split = dataLine.split("\001");
            if (null != split && split.length > 0) {
                switch (split.length) {
                case 25:
                    // terminal MAC capture record: match iumac against the MAC blacklist
                    collector.emit(new Values(dataLine, "YT1013"));
                    break;
                case 22:
                    // virtual identity record: match the mobile number / account blacklists
                    String mobile = split[0];
                    collector.emit(new Values(dataLine, "YT1020"));
                    break;
                case 51:
                    // terminal online/offline record: match iumac against the MAC blacklist
                    collector.emit(new Values(dataLine, "YT1023"));
                    break;
                case 21:
                    // search keyword record: match iumac against the MAC blacklist
                    collector.emit(new Values(dataLine, "YT1033"));
                    break;
                case 24:
                    // web page access record: match iumac against the MAC blacklist
                    collector.emit(new Values(dataLine, "YT1034"));
                    break;
                default:
                    // dirty data: could be saved and counted per device, as in WifiTypeBolt
                    // FileOperateUtils.writeLine(file, dataLine);
                    break;
                }
            }
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("writeFileBolt", "dataType"));
    }
}
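
The bolt above only classifies each record and forwards it with its type code; the actual blacklist match against Redis is left out. Below is a minimal sketch of how such a lookup could be done with the jedis dependency from the pom. The Redis host, port, and the set key "blacklist:mac" are assumptions made for illustration, as is the alert handling.

import redis.clients.jedis.Jedis;

// Sketch: check a captured MAC address against a blacklist set kept in Redis.
// Host, port, and the key "blacklist:mac" are illustrative assumptions.
public class BlacklistChecker {

    private final Jedis jedis = new Jedis("node01", 6379);

    public boolean isBlacklistedMac(String iumac) {
        // SISMEMBER returns true if the MAC is a member of the blacklist set
        return jedis.sismember("blacklist:mac", iumac);
    }

    public static void main(String[] args) {
        BlacklistChecker checker = new BlacklistChecker();
        // In WifiWarningBolt, the first field of a YT1013 record is iumac
        if (checker.isBlacklistedMac("AA-BB-CC-DD-EE-FF")) {
            System.out.println("ALERT: blacklisted MAC seen");
        }
    }
}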

10. Developing the WriteFileBolt

Records are merged into local files; once the local data has accumulated, it is uploaded to HDFS.

public class WriteFileBolt extends BaseBasicBolt {

    @Override
    public Map<String, Object> getComponentConfiguration() {
        // Register a tick tuple every five seconds; on each tick the bolt checks the current
        // time against today's 23:59:59 and, when the day has rolled over, uploads
        // yesterday's data to HDFS.
        Config conf = new Config();
        conf.put(Config.TOPOLOGY_TICK_TUPLE_FREQ_SECS, 5);
        return conf;
    }

    @Override
    public void execute(Tuple input, BasicOutputCollector collector) {
        if (input.getSourceComponent().equals(Constants.SYSTEM_COMPONENT_ID)
                && input.getSourceStreamId().equals(Constants.SYSTEM_TICK_STREAM_ID)) {
            // tick tuple: upload all of yesterday's data to HDFS
            FileOperateUtils.uploadYestData();
        } else {
            String dataType = input.getStringByField("dataType");
            System.out.println("data type: " + dataType);
            String dataLine = input.getStringByField("writeFileBolt");
            try {
                FileOperateUtils.meargeToLargeFile(dataType, dataLine);
            } catch (Exception e1) {
                e1.printStackTrace();
            }
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // terminal bolt: nothing to emit
    }
}
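
The other two FileOperateUtils helpers called above, meargeToLargeFile and uploadYestData, are likewise not included in this material. The sketch below shows one plausible shape for them based on the description in this section: records are appended to a per-type local file for the current day, and on a tick tuple the previous day's files are pushed to HDFS with the hadoop-client dependency from the pom. The local directory, file naming scheme, and HDFS path are all assumptions; in a real project these methods would live on the same FileOperateUtils class as writeLine.

import java.io.File;
import java.net.URI;
import java.text.SimpleDateFormat;
import java.util.Date;
import org.apache.commons.io.FileUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch of the merge/upload helpers used by WriteFileBolt.
// All paths, file names and the HDFS location are illustrative assumptions.
public class FileOperateUtilsSketch {

    private static final String LOCAL_DIR = "/export/datas/merged/";    // hypothetical local merge directory
    private static final String HDFS_DIR = "hdfs://node01:8020/wifi/";  // hypothetical HDFS target directory

    // Append one record to the current day's file for its data type, e.g. YT1013-2019-01-01.txt
    public static void meargeToLargeFile(String dataType, String line) throws Exception {
        String day = new SimpleDateFormat("yyyy-MM-dd").format(new Date());
        File target = new File(LOCAL_DIR + dataType + "-" + day + ".txt");
        FileUtils.writeStringToFile(target, line + System.lineSeparator(), true); // append
    }

    // Upload yesterday's merged files to HDFS, deleting the local copies afterwards
    public static void uploadYestData() {
        String yesterday = new SimpleDateFormat("yyyy-MM-dd")
                .format(new Date(System.currentTimeMillis() - 24L * 60 * 60 * 1000));
        File[] files = new File(LOCAL_DIR).listFiles();
        if (files == null) {
            return;
        }
        try {
            FileSystem fs = FileSystem.get(new URI(HDFS_DIR), new Configuration());
            for (File f : files) {
                if (f.getName().contains(yesterday)) {
                    // true = delete the local source file after a successful copy
                    fs.copyFromLocalFile(true, new Path(f.getAbsolutePath()), new Path(HDFS_DIR + f.getName()));
                }
            }
            fs.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}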

11. Main Entry Point of the Topology

public class WifiTopo {

    public static void main(String[] args) throws  Exception{
        TopologyBuilder topologyBuilder = new TopologyBuilder();
        KafkaSpoutConfig.Builder<String, String> builder = KafkaSpoutConfig.builder("192.168.52.200:9092,192.168.52.201:9092,192.168.52.202:9092", "wifidata");
        builder.setGroupId("wifidataGroup");
        builder.setFirstPollOffsetStrategy(KafkaSpoutConfig.FirstPollOffsetStrategy.LATEST);
        KafkaSpoutConfig<String, String> config = builder.build();
        KafkaSpout spout = new KafkaSpout(config);
        topologyBuilder.setSpout("kafkaSpout",spout);
        topologyBuilder.setBolt("WifiTypeBolt", new WifiTypeBolt()).localOrShuffleGrouping("kafkaSpout");
        topologyBuilder.setBolt("wifiWarningBolt", new WifiWarningBolt()).localOrShuffleGrouping("WifiTypeBolt");
        topologyBuilder.setBolt("writeFileBolt", new WriteFileBolt()).localOrShuffleGrouping("wifiWarningBolt");
        Config submitConfig = new Config();
        if(args !=null && args.length > 0){
            submitConfig.setDebug(false);
            StormSubmitter.submitTopology(args[0],submitConfig,topologyBuilder.createTopology());
        }else{
            LocalCluster cluster = new LocalCluster();
            cluster.submitTopology("wifiDataMyPlatForm",submitConfig,topologyBuilder.createTopology());
        }
    }
}