Passing case class into function arguments

Submitted by 时光怂恿深爱的人放手 on 2019-12-11 16:16:05

Question


Sorry for asking a simple question. I want to pass a case class as a function argument and use it further inside the function. So far I have tried this with TypeTag and ClassTag, but for some reason I am unable to use them properly, or maybe I am not looking in the right place.

The use case is something like this:

case class infoData(colA:Int,colB:String)
case class someOtherData(col1:String,col2:String,col3:Int)

def readCsv[T:???](path:String,passedCaseClass:???): Dataset[???] = {
  sqlContext
    .read
    .option("header", "true")
    .csv(path)
    .as[passedCaseClass]
}

It will be called something like this:

val infoDf = readCsv("/src/main/info.csv",infoData)
val otherDf = readCsv("/src/main/someOtherData.csv",someOtherData)

Answer 1:


First change your function definition to:

import org.apache.spark.sql.{Dataset, Encoder, SparkSession}

object t0 {
  // The Encoder[T] that .as[T] needs is taken as an implicit parameter,
  // so the compiler supplies it at each call site.
  def readCsv[T](path: String)(implicit spark: SparkSession, encoder: Encoder[T]): Dataset[T] = {
    spark
      .read
      .option("header", "true")
      .csv(path)
      .as[T]
  }
}

You don't need to perform any kind of reflection to create a generic readCsv function. The key here is that Spark needs the encoder at compile time, so you can take it as an implicit parameter and the compiler will add it.
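
To see what "the compiler will add it" means, here is a minimal sketch of the same call with both implicit arguments spelled out by hand (assuming the infoData case class from the question; Encoders.product is Spark's factory for case-class encoders):

import org.apache.spark.sql.{Encoders, SparkSession}

val spark = SparkSession.builder().getOrCreate()

// Explicit form of what the compiler inserts at the call site:
t0.readCsv[infoData]("/tmp")(spark, Encoders.product[infoData])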

Because Spark SQL ships default encoders for product types (your case classes), it is easy to call your function like this:

case class infoData(colA: Int, colB: String)
case class someOtherData(col1: String, col2: String, col3: Int)

import org.apache.spark.sql.SparkSession

object test {
  import t0._

  // Marked implicit so readCsv can pick it up as its SparkSession parameter.
  implicit val spark: SparkSession = SparkSession.builder().getOrCreate()

  // Brings the default Encoder instances for case classes into scope.
  import spark.implicits._

  readCsv[infoData]("/tmp")
}

Hope it helps




Answer 2:


There are two things you should pay attention to:

  1. Class names should be in CamelCase, so InfoData.
  2. Once you have bound a type to a Dataset, it is no longer a DataFrame. DataFrame is just an alias for Dataset[Row], the general-purpose row type (see the sketch after this list).
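
A minimal sketch of point 2 (assuming the InfoData case class defined below and a SparkSession named spark):

import org.apache.spark.sql.{DataFrame, Dataset, Row}
import spark.implicits._

val df: DataFrame = spark.read.option("header", "true").csv("/src/main/info.csv")  // untyped
val ds: Dataset[InfoData] = df.as[InfoData]  // typed: each record is an InfoData
val rows: Dataset[Row] = ds.toDF()           // a DataFrame is just a Dataset[Row]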

What you need is to ensure that an implicit instance of the corresponding Encoder for your class is in scope.

case class InfoData(colA: Int, colB: String)

Encoder instances for primitive types (Int, String, etc.) and for case classes can be obtained by importing spark.implicits._
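
For example, once the import is in scope the compiler can materialize the encoder on demand (a quick sketch, assuming a SparkSession named spark):

import org.apache.spark.sql.Encoder
import spark.implicits._

val enc: Encoder[InfoData] = implicitly[Encoder[InfoData]]  // resolved from the import

With that in scope, the function itself only needs to demand the encoder: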

def readCsv[T](path: String)(implicit encoder: Encoder[T]): Dataset[T] = {
  spark
    .read
    .option("header", "true")
    .csv(path)
    .as[T]
}

Or, you can use a context bound, which is just syntactic sugar for the implicit parameter above:

def readCsv[T: Encoder](path: String): Dataset[T] = {
  spark
    .read
    .option("header", "true")
    .csv(path)
    .as[T]
}

Now, you can use it as follows:

val spark = ...

import spark.implicits._

def readCsv[T: Encoder](path: String): Dataset[T] = {
  spark
    .read
    .option("header", "true")
    .csv(path)
    .as[T]
}

val infoDS = readCsv[InfoData]("/src/main/info.csv")
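
Once the Dataset is typed, you get compile-time checked access to the fields; a small usage sketch:

// Fields are checked at compile time; a typo like .colX would not compile.
val colAPlusOne = infoDS.map(_.colA + 1)  // Dataset[Int]; the Int encoder comes from spark.implicits._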


Source: https://stackoverflow.com/questions/53591578/passing-case-class-into-function-arguments
