The main code is as follows:

import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.spark.rdd.RDD

// Read the HBase table as an RDD of (row key, Result) pairs
val rdd = sc.newAPIHadoopRDD(hBaseConf, classOf[TableInputFormat],
  classOf[ImmutableBytesWritable], classOf[Result])
import spark.implicits._
// Convert every Result to the case class, then build a DataFrame from it
val value: RDD[UserSchemaClass] = rdd.map(convertHive)
val tempDS = value.toDF()
tempDS.createTempView("test_table")
spark.sql("desc test_table").show(false)
spark.sql("select `name` from test_table limit 10").show(false)

A quick note on what the convertHive function does: it filters the rows read from HBase, returning a UserSchemaClass when a row meets the condition and null when it does not.
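For context, convertHive presumably looks something like the sketch below; the case class fields, the column family cf, and the filter condition are all assumptions for illustration:

import org.apache.hadoop.hbase.util.Bytes

// Hypothetical schema; the real UserSchemaClass fields are not shown in the post
case class UserSchemaClass(name: String, age: Int)

// Takes the (row key, Result) pair produced by newAPIHadoopRDD
def convertHive(row: (ImmutableBytesWritable, Result)): UserSchemaClass = {
  val result = row._2
  // "cf", "name" and "age" are assumed column family / qualifier names
  val name = Bytes.toString(result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("name")))
  val age  = Bytes.toString(result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("age")))
  // Rows that fail the condition come back as null -- this is what later breaks toDF()
  if (name != null && age != null) UserSchemaClass(name, age.toInt) else null
}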

Submitting and running the job produces the following error:

Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): java.lang.NullPointerException: Null value appeared in non-nullable field:
top level Product input object
If the schema is inferred from a Scala tuple/case class, or a Java bean, please try to use scala.Option[_] or other nullable types (e.g. java.lang.Integer instead of int/scala.Int).

The cause: when convertHive returns null, those nulls stay in the RDD, and converting it to a DataFrame or Dataset produces rows whose top-level Product object is null, which fails as soon as a query materializes them. The fix is to keep the nulls out of the conversion.
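The failure is easy to reproduce in isolation; a single null element in an RDD of a case class type is enough (minimal sketch, e.g. in spark-shell with a running SparkSession named spark):

case class P(name: String)
import spark.implicits._
// The null element triggers the same exception once toDF() materializes rows
val bad = spark.sparkContext.parallelize(Seq(P("a"), null, P("b")))
bad.toDF().show()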

The fix is the following one-line change:

// Drop the nulls produced by convertHive before building the DataFrame
val value: RDD[UserSchemaClass] = rdd.map(convertHive).filter(_ != null)
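Alternatively, following the hint in the error message itself, the function can return scala.Option so that no null ever exists; convertHiveOpt below is a hypothetical Option-returning variant of the same function:

// Hypothetical Option-returning variant of convertHive
def convertHiveOpt(row: (ImmutableBytesWritable, Result)): Option[UserSchemaClass] = {
  val result = row._2
  val name = Bytes.toString(result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("name")))
  val age  = Bytes.toString(result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("age")))
  if (name != null && age != null) Some(UserSchemaClass(name, age.toInt)) else None
}

// flatMap keeps the Somes and drops the Nones, so toDF() never sees a null
val value: RDD[UserSchemaClass] = rdd.flatMap(convertHiveOpt)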