Requirement: large websites generate huge volumes of log data every day, and each record carries the timestamp of the visit, the client IP address, the domain visited, browser information, and so on. The task is to count the number of users per province (in practice: look up each user's IP in the IP-rules table, then aggregate).
Implementation:
1. Load the city IP-range data and extract each range's start number, end number, longitude, and latitude.
2. Load the log data, extract the IP address, and convert it to a number so it can be compared against the ranges (a worked example follows the sample data below).
3. Use binary search over the ranges to find the matching longitude and latitude (the code below looks up the province field instead).
4. Do a word-count-style aggregation over the longitude/latitude (here: the province).
Sample access.log data:
20090121000132095572000|125.213.100.123|show.51.com|/shoplist.php?phpfile=shoplist2.php&style=1&sex=137|Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; Mozilla/4.0(Compatible Mozilla/4.0(Compatible-EmbeddedWB 14.59 http://bsalsa.com/ EmbeddedWB- 14.59 from: http://bsalsa.com/ )|http://show.51.com/main.php|
20090121000132124542000|117.101.215.133|www.jiayuan.com|/19245971|Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; TencentTraveler 4.0)|http://photo.jiayuan.com/index.php?uidhash=d1c3b69e9b8355a5204474c749fb76ef|__tkist=0; myloc=50%7C5008; myage=2009; PROFILE=14469674%3A%E8%8B%A6%E6%B6%A9%E5%92%96%E5%95%A1%3Am%3Aphotos2.love21cn.com%2F45%2F1b%2F388111afac8195cc5d91ea286cdd%3A1%3A%3Ahttp%3A%2F%2Fimages.love21cn.com%2Fw4%2Fglobal%2Fi%2Fhykj_m.jpg; last_login_time=1232454068; SESSION_HASH=8176b100a84c9a095315f916d7fcbcf10021e3af; RAW_HASH=008a1bc48ff9ebafa3d5b4815edd04e9e7978050; COMMON_HASH=45388111afac8195cc5d91ea286cdd1b; pop_1232093956=1232468896968; pop_time=1232466715734; pop_1232245908=1232469069390; pop_1219903726=1232477601937; LOVESESSID=98b54794575bf547ea4b55e07efa2e9e; main_search:14469674=%7C%7C%7C00; registeruid=14469674; REG_URL_COOKIE=http%3A%2F%2Fphoto.jiayuan.com%2Fshowphoto.php%3Fuid_hash%3D0319bc5e33ba35755c30a9d88aaf46dc%26total%3D6%26p%3D5; click_count=0%2C3363619
20090121000132406516000|117.101.222.68|gg.xiaonei.com|/view.jsp?p=389|Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; CIBA)|http://home.xiaonei.com/Home.do?id=229670724|_r01_=1; __utma=204579609.31669176.1231940225.1232462740.1232467011.145; __utmz=204579609.1231940225.1.1.utmccn=(direct)
Sample ip.txt data:
202.98.58.174|202.98.58.252|3395435182|3395435260|亚洲|中国|重庆|重庆||电信|500100|China|CN|106.504962|29.533155
202.98.58.253|202.98.58.253|3395435261|3395435261|亚洲|中国|重庆|重庆|江北|电信|500105|China|CN|106.57434|29.60658
202.98.58.254|202.98.63.0|3395435262|3395436288|亚洲|中国|重庆|重庆||电信|500100|China|CN|106.504962|29.533155
202.98.63.1|202.98.63.1|3395436289|3395436289|亚洲|中国|重庆|重庆|渝北|电信|500112|China|CN|106.6307|29.7182
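To make step 2 concrete: a dotted-quad IPv4 address maps to a Long by treating its four octets as one big-endian 32-bit number. Applied to the start address of the first sample rule above, this reproduces the rule's third field (the range start) exactly:

// 202.98.58.174 is the start address of the first rule line above
202L * 256 * 256 * 256 + 98L * 256 * 256 + 58L * 256 + 174L
// = 3388997632 + 6422528 + 14848 + 174
// = 3395435182, the rule's third field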
Local implementation (both files live on the local filesystem):
package ip

import org.apache.spark.broadcast.Broadcast
import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

/**
  * @author WangLeiKai
  *         2018/9/22 9:32
  */
object IPLocation1 {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("IPLocation1").setMaster("local[4]")
    val sc = new SparkContext(conf)
    // Load all the IP rules on the Driver side (the rules file lives on the
    // same machine as the Driver, so after this call the full rule set sits
    // in Driver memory)
    val rules = TestIP.readRules(args(0))
    // Publish the rules as a broadcast variable
    val broadcastRef: Broadcast[Array[(Long, Long, String)]] = sc.broadcast(rules)
    // Read the access log
    val accessLines: RDD[String] = sc.textFile(args(1))
    val func = (line: String) => {
      val fields = line.split("[|]")
      val ip = fields(1)
      val ipNum = TestIP.ip2Long(ip)
      // Fetch the broadcast data (this code runs inside an Executor)
      val rulesInExecutor = broadcastRef.value
      val index = TestIP.binarySearch(rulesInExecutor, ipNum)
      var province = "未知" // default: "unknown"
      if (index != -1) {
        province = rulesInExecutor(index)._3
      }
      (province, 1)
    }
    val provinceAndOne = accessLines.map(func)
    val reduced = provinceAndOne.reduceByKey(_ + _)
    val r = reduced.collect().toBuffer
    println(r)
    sc.stop()
  }
}
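One assumption worth making explicit: TestIP.binarySearch (shown further down) only works if the rules array is sorted by range start. The sample ip.txt is already in ascending order, but if that cannot be guaranteed, sorting before broadcasting is a cheap safeguard — a minimal sketch replacing the two corresponding lines above:

// Sort by range start so the binary-search invariant holds even for an unsorted rules file
val rules = TestIP.readRules(args(0)).sortBy(_._1)
val broadcastRef: Broadcast[Array[(Long, Long, String)]] = sc.broadcast(rules)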
Optimization:
Read the rules with the Spark cluster (from HDFS), collect them to the Driver, and then distribute them to the Executors via a broadcast variable.
package ip

import org.apache.spark.broadcast.Broadcast
import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

/**
  * @author WangLeiKai
  *         2018/9/22 9:32
  */
object IPLocation2 {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("IPLocation2").setMaster("local[4]")
    val sc = new SparkContext(conf)
    // Read the rules file from HDFS
    val rulesLine: RDD[String] = sc.textFile("hdfs://hadoop-master:9000/data/ip/ip.txt")
    val ipRulesRDD: RDD[(Long, Long, String)] = rulesLine.map(line => {
      val fields = line.split("[|]")
      val startNum = fields(2).toLong
      val endNum = fields(3).toLong
      val province = fields(6)
      (startNum, endNum, province)
    })
    // Collect all the rules to the Driver
    val rulesInDriver = ipRulesRDD.collect()
    // Publish the rules as a broadcast variable
    val broadcastRef: Broadcast[Array[(Long, Long, String)]] = sc.broadcast(rulesInDriver)
    // Read the access log
    val accessLines: RDD[String] = sc.textFile("hdfs://hadoop-master:9000/data/ip/access.log")
    val provinceAndOne = accessLines.map(log => {
      val fields = log.split("[|]")
      val ip = fields(1)
      val ipNum = TestIP.ip2Long(ip)
      // Fetch the broadcast data. This function runs inside an Executor;
      // dereferencing the broadcast handle yields the copy of the rules
      // already shipped to that Executor.
      // How does the Driver-side broadcast reference reach the Executor?
      // Tasks are created on the Driver, and the broadcast reference is
      // serialized along with each Task and sent to the Executors.
      val rulesInExecutor = broadcastRef.value
      // Binary search over the rule ranges
      val index = TestIP.binarySearch(rulesInExecutor, ipNum)
      var province = "未知" // default: "unknown"
      if (index != -1) {
        province = rulesInExecutor(index)._3
      }
      (province, 1)
    })
    val reduced = provinceAndOne.reduceByKey(_ + _)
    println(reduced.collect().toBuffer)
    sc.stop()
  }
}
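A small variant: the lambda above calls broadcastRef.value once per record. With mapPartitions the broadcast can be dereferenced once per partition instead — a functionally equivalent sketch that could replace the map above:

// Dereference the broadcast once per partition instead of once per record
val provinceAndOne = accessLines.mapPartitions(iter => {
  val rules = broadcastRef.value // one lookup per partition
  iter.map(log => {
    val fields = log.split("[|]")
    val ipNum = TestIP.ip2Long(fields(1))
    val index = TestIP.binarySearch(rules, ipNum)
    val province = if (index != -1) rules(index)._3 else "未知"
    (province, 1)
  })
})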
The TestIP code:
package ip

import scala.io.{BufferedSource, Source}

/**
  * @author WangLeiKai
  *         2018/9/21 9:37
  */
object TestIP {

  /**
    * Convert a dotted-quad IP string to its decimal Long value
    * @param ip the IP address, e.g. "114.114.114.114"
    * @return the address as a Long
    */
  def ip2Long(ip: String): Long = {
    val fragments: Array[String] = ip.split("[.]")
    var ipNum = 0L
    for (i <- 0 until fragments.length) {
      // Shift the accumulated value left by 8 bits and OR in the next octet
      ipNum = fragments(i).toLong | ipNum << 8L
    }
    ipNum
  }

  // Sample rule line:
  // 1.0.8.0|1.0.15.255|16779264|16781311|亚洲|中国|广东|广州||电信|440100|China|CN|113.280637|23.125178
  /**
    * Read the fields we need from the rules file: start number, end number, province
    * @param path path to the rules file
    * @return one (start, end, province) tuple per rule
    */
  def readRules(path: String): Array[(Long, Long, String)] = {
    val bf: BufferedSource = Source.fromFile(path)
    val lines: Iterator[String] = bf.getLines()
    val rules: Array[(Long, Long, String)] = lines.map(line => {
      val fields: Array[String] = line.split("[|]")
      val startNum = fields(2).toLong
      val endNum = fields(3).toLong
      val province: String = fields(6)
      (startNum, endNum, province)
    }).toArray
    rules
  }

  /**
    * Binary search over the (sorted) rule ranges; returns the index of the
    * range containing ip, or -1 if no range matches
    */
  def binarySearch(lines: Array[(Long, Long, String)], ip: Long): Int = {
    var low = 0
    var high = lines.length - 1
    while (low <= high) {
      val middle = (low + high) / 2
      if ((ip >= lines(middle)._1) && (ip <= lines(middle)._2))
        return middle
      if (ip < lines(middle)._1)
        high = middle - 1
      else
        low = middle + 1
    }
    -1
  }

  def main(args: Array[String]): Unit = {
    val l: Long = ip2Long("114.114.114.114")
    val rules: Array[(Long, Long, String)] = readRules("d:/data/ip.txt")
    val index: Int = binarySearch(rules, l)
    // Guard against -1: rules(index) would throw if no range matched
    if (index != -1) {
      val tp = rules(index)
      val province: String = tp._3
      println(province)
    }
  }
}
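A quick sanity check for ip2Long is to feed it the boundary addresses of the sample rule quoted above readRules and compare against that rule's numeric fields (runnable in the Scala REPL):

TestIP.ip2Long("1.0.8.0")    // 16779264, the rule's start number
TestIP.ip2Long("1.0.15.255") // 16781311, the rule's end number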
Writing the results to MySQL:
/**
reduced.foreach(tp => {
  // Write the record to MySQL
  // Question: on which side is the MySQL connection obtained?
  // Answer: by the Task running inside the Executor, one JDBC connection per record
  val conn: Connection = DriverManager.getConnection("jdbc:mysql://localhost:3306/bigdata?characterEncoding=UTF-8", "root", "123568")
  // Is this a problem when writing a lot of data? Yes: a new connection for every record
  val pstm = conn.prepareStatement("INSERT INTO access_log VALUES (?, ?)")
  pstm.setString(1, tp._1)
  pstm.setInt(2, tp._2)
  pstm.executeUpdate()
  pstm.close()
  conn.close()
})
*/
// Take one partition at a time (one connection per partition: write all of a
// partition's records before releasing the JDBC connection, which is far cheaper)
// reduced.foreachPartition(it => {
//   val conn: Connection = DriverManager.getConnection("jdbc:mysql://localhost:3306/bigdata?characterEncoding=UTF-8", "root", "123568")
//   // Write the data through this one Connection
//   val pstm: PreparedStatement = conn.prepareStatement("INSERT INTO access_log VALUES (?, ?)")
//   // Pull out each record in the partition
//   it.foreach(tp => {
//     pstm.setString(1, tp._1)
//     pstm.setInt(2, tp._2)
//     pstm.executeUpdate()
//   })
//   pstm.close()
//   conn.close()
// })
Both approaches work: the first opens a database connection for every single record, while the second opens one connection per partition.
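For larger result sets, the per-partition version can go one step further and batch the inserts instead of issuing one executeUpdate per record. A minimal sketch using standard JDBC batching (same table and credentials as above):

// import java.sql.{Connection, DriverManager, PreparedStatement}
reduced.foreachPartition(it => {
  val conn: Connection = DriverManager.getConnection("jdbc:mysql://localhost:3306/bigdata?characterEncoding=UTF-8", "root", "123568")
  val pstm: PreparedStatement = conn.prepareStatement("INSERT INTO access_log VALUES (?, ?)")
  it.foreach(tp => {
    pstm.setString(1, tp._1)
    pstm.setInt(2, tp._2)
    pstm.addBatch() // queue the row instead of executing it immediately
  })
  pstm.executeBatch() // send all queued rows in one round trip
  pstm.close()
  conn.close()
})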
Contents of pom.xml:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>wang</groupId>
<artifactId>HelloSpark</artifactId>
<version>1.0-SNAPSHOT</version>
<properties>
<maven.compiler.source>1.8</maven.compiler.source>
<maven.compiler.target>1.8</maven.compiler.target>
<scala.version>2.11.8</scala.version>
<spark.version>2.2.1</spark.version>
<hadoop.version>2.8.3</hadoop.version>
<encoding>UTF-8</encoding>
</properties>
<dependencies>
<!-- Scala dependency -->
<dependency>
<groupId>org.scala-lang</groupId>
<artifactId>scala-library</artifactId>
<version>${scala.version}</version>
</dependency>
<!-- Spark dependency -->
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.11</artifactId>
<version>${spark.version}</version>
</dependency>
<!-- Pin the hadoop-client API version -->
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-client</artifactId>
<version>${hadoop.version}</version>
</dependency>
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
<version>5.1.38</version>
</dependency>
</dependencies>
<build>
<pluginManagement>
<plugins>
<!-- Plugin for compiling Scala -->
<plugin>
<groupId>net.alchim31.maven</groupId>
<artifactId>scala-maven-plugin</artifactId>
<version>3.2.2</version>
</plugin>
<!-- Plugin for compiling Java -->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.5.1</version>
</plugin>
</plugins>
</pluginManagement>
<plugins>
<plugin>
<groupId>net.alchim31.maven</groupId>
<artifactId>scala-maven-plugin</artifactId>
<executions>
<execution>
<id>scala-compile-first</id>
<phase>process-resources</phase>
<goals>
<goal>add-source</goal>
<goal>compile</goal>
</goals>
</execution>
<execution>
<id>scala-test-compile</id>
<phase>process-test-resources</phase>
<goals>
<goal>testCompile</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<executions>
<execution>
<phase>compile</phase>
<goals>
<goal>compile</goal>
</goals>
</execution>
</executions>
</plugin>
<!-- Plugin for building the jar -->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<version>2.4.3</version>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>shade</goal>
</goals>
<configuration>
<filters>
<filter>
<artifact>*:*</artifact>
<excludes>
<exclude>META-INF/*.SF</exclude>
<exclude>META-INF/*.DSA</exclude>
<exclude>META-INF/*.RSA</exclude>
</excludes>
</filter>
</filters>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>
Result: the job prints the collected counts as an ArrayBuffer of (province, count) pairs; IPs that match no rule are counted under 未知 ("unknown").