1. RDD Dependencies



### --- RDD Dependencies

~~~     RDDs support only coarse-grained transformations, i.e. a single operation applied to a large batch of records.
~~~     The series of transformations used to build an RDD is recorded as its Lineage, so that lost partitions can be recovered.
~~~     An RDD's Lineage records the RDD's metadata and the transformations that produced it;
~~~     when some of the RDD's partitions are lost, this information is used to recompute and recover the lost data partitions.
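
~~~     # A minimal spark-shell sketch (illustrative example, assumes sc is available): each
~~~     # transformation is recorded in the lineage, which toDebugString prints as a chain of parent RDDs.

val base   = sc.parallelize(1 to 100, 4)
val mapped = base.map(_ * 2).filter(_ % 3 == 0)   // two narrow transformations
println(mapped.toDebugString)                     // MapPartitionsRDD -> ... -> ParallelCollectionRDD
// If a partition of `mapped` is lost, Spark re-runs map/filter on the matching
// partition of `base` instead of recomputing the whole dataset.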






### --- The relationship between an RDD and the parent RDD(s) it depends on comes in two types,

~~~     namely narrow dependency and wide dependency.
~~~     Dependencies serve two purposes: one is data fault tolerance, the other is dividing the job into stages.
~~~     Narrow dependency: 1:1 or n:1
~~~     Wide dependency: n:m, which implies a shuffle
~~~     You should be able to tell quickly and accurately which operators create wide dependencies; a quick check is sketched below.
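
~~~     # A quick check (illustrative sketch): a wide dependency shows up on the child RDD
~~~     # as a ShuffleDependency, so it can be detected programmatically.

import org.apache.spark.ShuffleDependency
import org.apache.spark.rdd.RDD

val pairs   = sc.parallelize(1 to 10).map(n => (n % 3, n))
val mapped  = pairs.mapValues(_ + 1)      // narrow: 1:1, no shuffle
val reduced = pairs.reduceByKey(_ + _)    // wide: n:m, shuffle

def isWide(rdd: RDD[_]): Boolean =
  rdd.dependencies.exists(_.isInstanceOf[ShuffleDependency[_, _, _]])

println(isWide(mapped))    // false
println(isWide(reduced))   // true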






### --- DAG (Directed Acyclic Graph)

~~~     A DAG is formed as the original RDDs go through a series of transformations.
~~~     The DAG is divided into Stages according to the dependencies between the RDDs:
~~~     For narrow dependencies, the partition-level transformations are computed inside a single Stage.
~~~     For wide dependencies, because of the shuffle, the downstream computation can only start after the parent RDD has been fully processed.
~~~     Wide dependencies are therefore the basis for dividing Stages (a counting sketch follows below).
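
~~~     # A rough sketch (not Spark's actual scheduler code): count stages by walking the
~~~     # dependency graph; each ShuffleDependency marks one stage boundary.

import org.apache.spark.ShuffleDependency
import org.apache.spark.rdd.RDD

def countShuffles(rdd: RDD[_]): Int =
  rdd.dependencies.map { dep =>
    val upstream = countShuffles(dep.rdd)
    if (dep.isInstanceOf[ShuffleDependency[_, _, _]]) upstream + 1 else upstream
  }.sum

val counts = sc.textFile("/wcinput/wc.txt")
  .flatMap(_.split("\\s+")).map((_, 1))
  .reduceByKey(_ + _).sortByKey()
println(countShuffles(counts) + 1)   // shuffle boundaries + 1 = number of stages (3 here)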




### --- RDD task breakdown: Driver program, Job, Stage (TaskSet), and Task

~~~     Driver program: initializing a SparkContext creates one Spark application
~~~     Job: each Action operator generates one Job
~~~     Stage: a Job is divided into Stages according to the dependencies between its RDDs; each wide dependency starts a new Stage
~~~     Task: a Stage is a TaskSet; each piece of the Stage's work sent to a different Executor for execution is a Task
~~~     A Task is the smallest unit of task scheduling in Spark.
~~~     Each Stage contains many Tasks; these Tasks run the same computation logic over different data.
~~~     Note: Driver program -> Job -> Stage -> Task is a 1-to-n relationship at every level (a small sketch follows below).
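
~~~     # A small illustrative sketch of the 1-to-n relationships (assumes a spark-shell):
~~~     # one SparkContext is one application; each action triggers one Job; wide
~~~     # dependencies split a Job into Stages; each Stage runs one Task per partition.

val data  = sc.parallelize(1 to 100, 3)    // 3 partitions
val pairs = data.map(n => (n % 10, n))     // narrow, stays in the same Stage

pairs.count()                    // Job 1: one Stage, 3 Tasks (one per partition)
pairs.reduceByKey(_ + _).count() // Job 2: two Stages; the first has 3 Tasks,
                                 // the second has one Task per post-shuffle partition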


2. Examples


### --- Examples

~~~     # Narrow dependency
scala> val rdd1 = sc.parallelize(1 to 10, 1)
rdd1: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at parallelize at <console>:24

scala> val rdd2 = sc.parallelize(11 to 20, 1)
rdd2: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[1] at parallelize at <console>:24

scala> val rdd3 = rdd1.union(rdd2)
rdd3: org.apache.spark.rdd.RDD[Int] = UnionRDD[2] at union at <console>:27

scala> rdd3.dependencies.size
res0: Int = 2

scala> rdd3.dependencies
res1: Seq[org.apache.spark.Dependency[_]] = ArrayBuffer(org.apache.spark.RangeDependency@3a5ecb62, org.apache.spark.RangeDependency@58c9bef9)

~~~     # Print rdd1's data
scala> rdd3.dependencies(0).rdd.collect
res2: Array[_] = Array(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)

~~~     # Print rdd2's data
scala> rdd3.dependencies(1).rdd.collect
res3: Array[_] = Array(11, 12, 13, 14, 15, 16, 17, 18, 19, 20)


~~~     # Wide dependency
scala> val random = new scala.util.Random
random: scala.util.Random = scala.util.Random@62167368

scala> val arr = (1 to 100).map(idx => random.nextInt(100))
arr: scala.collection.immutable.IndexedSeq[Int] = Vector(9, 5, 45, 62, 62, 12, 87, 75, 98, 25, 50, 49, 31, 27, 28, 70, 64, 84, 50, 78, 21, 66, 44, 52, 54, 51, 85, 35, 89, 2, 38, 25, 47, 37, 65, 95, 90, 40, 46, 20, 77, 44, 21, 92, 52, 53, 72, 98, 50, 74, 17, 17, 69, 38, 59, 0, 57, 64, 54, 65, 57, 16, 45, 11, 23, 77, 24, 61, 59, 0, 99, 15, 36, 95, 10, 57, 11, 92, 23, 75, 17, 85, 22, 47, 35, 64, 48, 7, 72, 71, 27, 62, 52, 29, 21, 74, 57, 17, 92, 84)

scala> val rdd1 = sc.makeRDD(arr).map((_, 1))
rdd1: org.apache.spark.rdd.RDD[(Int, Int)] = MapPartitionsRDD[4] at map at <console>:26

scala> val rdd2 = rdd1.reduceByKey(_+_)
rdd2: org.apache.spark.rdd.RDD[(Int, Int)] = ShuffledRDD[5] at reduceByKey at <console>:25

~~~     # Inspect the dependencies
scala> rdd2.dependencies
res4: Seq[org.apache.spark.Dependency[_]] = List(org.apache.spark.ShuffleDependency@525aeba9)

scala> rdd2.dependencies(0).rdd.collect
res5: Array[_] = Array((9,1), (5,1), (45,1), (62,1), (62,1), (12,1), (87,1), (75,1), (98,1), (25,1), (50,1), (49,1), (31,1), (27,1), (28,1), (70,1), (64,1), (84,1), (50,1), (78,1), (21,1), (66,1), (44,1), (52,1), (54,1), (51,1), (85,1), (35,1), (89,1), (2,1), (38,1), (25,1), (47,1), (37,1), (65,1), (95,1), (90,1), (40,1), (46,1), (20,1), (77,1), (44,1), (21,1), (92,1), (52,1), (53,1), (72,1), (98,1), (50,1), (74,1), (17,1), (17,1), (69,1), (38,1), (59,1), (0,1), (57,1), (64,1), (54,1), (65,1), (57,1), (16,1), (45,1), (11,1), (23,1), (77,1), (24,1), (61,1), (59,1), (0,1), (99,1), (15,1), (36,1), (95,1), (10,1), (57,1), (11,1), (92,1), (23,1), (75,1), (17,1), (85,1), (22,1), (47,1), (35,1), (64,1), (48,1), (7,1), (72,1), (71,1), (27,1), (62,1), (52,1), (29,1), (2...

scala> rdd2.dependencies(0).rdd.dependencies(0).rdd.collect
res6: Array[_] = Array(9, 5, 45, 62, 62, 12, 87, 75, 98, 25, 50, 49, 31, 27, 28, 70, 64, 84, 50, 78, 21, 66, 44, 52, 54, 51, 85, 35, 89, 2, 38, 25, 47, 37, 65, 95, 90, 40, 46, 20, 77, 44, 21, 92, 52, 53, 72, 98, 50, 74, 17, 17, 69, 38, 59, 0, 57, 64, 54, 65, 57, 16, 45, 11, 23, 77, 24, 61, 59, 0, 99, 15, 36, 95, 10, 57, 11, 92, 23, 75, 17, 85, 22, 47, 35, 64, 48, 7, 72, 71, 27, 62, 52, 29, 21, 74, 57, 17, 92, 84)


### --- WordCount revisited

scala> val rdd1 = sc.textFile("/wcinput/wc.txt")
rdd1: org.apache.spark.rdd.RDD[String] = /wcinput/wc.txt MapPartitionsRDD[7] at textFile at <console>:24

scala> val rdd2 = rdd1.flatMap(_.split("\\s+"))
rdd2: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[8] at flatMap at <console>:25

scala> val rdd3 = rdd2.map((_, 1))
rdd3: org.apache.spark.rdd.RDD[(String, Int)] = MapPartitionsRDD[9] at map at <console>:25

scala> val rdd4 = rdd3.reduceByKey(_+_)
rdd4: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[10] at reduceByKey at <console>:25

scala> val rdd5 = rdd4.sortByKey()
rdd5: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[13] at sortByKey at <console>:25

scala> rdd5.count
res7: Long = 5


~~~     # View the RDD lineage

scala> rdd1.toDebugString
res8: String =
(2) /wcinput/wc.txt MapPartitionsRDD[7] at textFile at <console>:24 []
 |  /wcinput/wc.txt HadoopRDD[6] at textFile at <console>:24 []

scala> rdd5.toDebugString
res9: String =
(2) ShuffledRDD[13] at sortByKey at <console>:25 []
 +-(2) ShuffledRDD[10] at reduceByKey at <console>:25 []
    +-(2) MapPartitionsRDD[9] at map at <console>:25 []
       |  MapPartitionsRDD[8] at flatMap at <console>:25 []
       |  /wcinput/wc.txt MapPartitionsRDD[7] at textFile at <console>:24 []
       |  /wcinput/wc.txt HadoopRDD[6] at textFile at <console>:24 []


~~~     # View the dependencies

scala> rdd1.dependencies
res10: Seq[org.apache.spark.Dependency[_]] = List(org.apache.spark.OneToOneDependency@34757a4d)

scala> rdd1.dependencies(0).rdd
res11: org.apache.spark.rdd.RDD[_] = /wcinput/wc.txt HadoopRDD[6] at textFile at <console>:24

scala> rdd5.dependencies
res12: Seq[org.apache.spark.Dependency[_]] = List(org.apache.spark.ShuffleDependency@6fd2a6c0)

scala> rdd5.dependencies(0).rdd
res13: org.apache.spark.rdd.RDD[_] = ShuffledRDD[10] at reduceByKey at <console>:25


~~~     # View the preferred locations

scala> val hadoopRDD = rdd1.dependencies(0).rdd
hadoopRDD: org.apache.spark.rdd.RDD[_] = /wcinput/wc.txt HadoopRDD[6] at textFile at <console>:24

scala> hadoopRDD.preferredLocations(hadoopRDD.partitions(0))
res14: Seq[String] = ArraySeq(hadoop02, hadoop01, hadoop03)


### --- Check the file with the hdfs command

~~~     # Check the file with the hdfs command
[root@hadoop02 ~]# hdfs fsck /wcinput/wc.txt -files -blocks -locations
Status: HEALTHY
 Total size:    77 B
 Total dirs:    0
 Total files:   1
 Total symlinks:        0
 Total blocks (validated):  1 (avg. block size 77 B)
 Minimally replicated blocks:   1 (100.0 %)
 Over-replicated blocks:    0 (0.0 %)
 Under-replicated blocks:   1 (100.0 %)
 Mis-replicated blocks:     0 (0.0 %)
 Default replication factor:    5
 Average block replication: 3.0
 Corrupt blocks:        0
 Missing replicas:      2 (40.0 %)
 Number of data-nodes:      3
 Number of racks:       1

The filesystem under path '/wcinput/wc.txt' is HEALTHY


3. Jobs, Stages, and Tasks


### --- Jobs, Stages, and Tasks

~~~     # Question: how many Jobs, Stages, and Tasks does the WordCount above produce in total?




4. Run WordCount and Inspect the Job Execution


### --- Code

val rdd1 = sc.textFile("/wcinput/wc.txt")    // HadoopRDD -> MapPartitionsRDD, narrow
val rdd2 = rdd1.flatMap(_.split("\\s+"))     // narrow
val rdd3 = rdd2.map((_, 1))                  // narrow
val rdd4 = rdd3.reduceByKey(_+_)             // wide: shuffle, new Stage
val rdd5 = rdd4.sortByKey()                  // wide: shuffle, new Stage
rdd5.count                                   // action: triggers the Job


~~~     # View the RDD lineage

rdd1.toDebugString
rdd5.toDebugString


~~~     # View the dependencies

rdd1.dependencies
rdd1.dependencies(0).rdd
rdd5.dependencies
rdd5.dependencies(0).rdd


~~~     # View the preferred locations

val hadoopRDD = rdd1.dependencies(0).rdd
hadoopRDD.preferredLocations(hadoopRDD.partitions(0))


~~~     # Check the file with the hdfs command

hdfs fsck /wcinput/wc.txt -files -blocks -locations


### --- Check in the Web UI (http://hadoop02:8080/): in this example the whole process breaks down into 1 Job, 3 Stages, and 6 Tasks.
### --- Why does the UI show 2 Jobs? sortByKey builds a RangePartitioner, which samples the data in an extra Job; see the section on RDD partitioners.

Running Applications: app-20211019204252-0000 -> Application Detail UI -> Completed Jobs: Description: count at <console>:26 -> Details for job 7 -> DAG Visualization
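
~~~     # A hedged way to verify the counts without the Web UI (illustrative sketch):
~~~     # register a SparkListener before running the action and tally what the scheduler reports.

import org.apache.spark.scheduler.{SparkListener, SparkListenerJobStart, SparkListenerStageCompleted}

var jobs = 0; var stages = 0; var tasks = 0
sc.addSparkListener(new SparkListener {
  override def onJobStart(js: SparkListenerJobStart): Unit = jobs += 1
  override def onStageCompleted(sc2: SparkListenerStageCompleted): Unit = {
    stages += 1
    tasks  += sc2.stageInfo.numTasks
  }
})

sc.textFile("/wcinput/wc.txt")
  .flatMap(_.split("\\s+")).map((_, 1))
  .reduceByKey(_ + _).sortByKey().count()

// Listener events are delivered asynchronously, so give the bus a moment to drain.
Thread.sleep(1000)
println(s"jobs=$jobs stages=$stages tasks=$tasks")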



