Looking at the `select()` function on the Spark `Dataset`, there are various generated function signatures:

```scala
(c1: TypedColumn[MyClass, U1], c2: TypedColumn[MyClass, U2], ...)
```
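As a minimal sketch of how these overloads are used (assuming a `SparkSession` in scope as `spark`, and a `Dataset[MyClass]` named `ds` whose `foo` and `bar` fields are an `Int` and a `String`, as in the concrete model further down):

```scala
import org.apache.spark.sql.Dataset
import spark.implicits._

// Each argument is a TypedColumn built with .as[U]; the two-column
// overload of select returns a Dataset[(U1, U2)] instead of a DataFrame.
val pairs: Dataset[(Int, String)] =
  ds.select($"foo".as[Int], $"bar".as[String])
```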
If you want the equivalent of `ds.select(_.member)`, just use `map`:
```scala
case class MyClass(member: MyMember, foo: A, bar: B)
val ds: Dataset[MyClass] = ???
val members: Dataset[MyMember] = ds.map(_.member)
```
Edit: The argument for not using `map` follows below.
A more performant way of doing the same would be through a projection, not using `map` at all. You lose the compile-time type checking, but in exchange you give the Catalyst query engine a chance to do something more optimized. As @Sim alludes to in his comment below, the primary optimization is that the whole contents of `MyClass` do not need to be deserialized from Tungsten memory space into JVM heap memory just to call the accessor, and the result of `_.member` does not need to be serialized back into Tungsten.
To make the example more concrete, let's redefine our data model like this:
```scala
// Make sure these are not nested classes
// (i.e. they live in a top-level compilation unit).
case class MyMember(something: Double)
case class MyClass(member: MyMember, foo: Int, bar: String)
```
These need to be `case` classes so that `SQLImplicits.newProductEncoder[T <: Product]` can provide us with an implicit `Encoder[MyClass]`, required by the `Dataset[T]` API.
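As a minimal sketch of where that implicit comes from (assuming a local `SparkSession` named `spark`), the same encoder can also be materialized explicitly:

```scala
import org.apache.spark.sql.{Encoder, Encoders, SparkSession}

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._  // brings SQLImplicits.newProductEncoder into scope

// The implicit Encoder[MyClass] that toDS() and map rely on; the same
// encoder can also be obtained explicitly from the Encoders factory.
val fromImplicits: Encoder[MyClass] = implicitly[Encoder[MyClass]]
val fromFactory: Encoder[MyClass]   = Encoders.product[MyClass]
```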
Now we can make the example above more concrete:
```scala
val ds: Dataset[MyClass] = Seq(MyClass(MyMember(1.0), 2, "three")).toDS()
val membersMapped: Dataset[Double] = ds.map(_.member.something)
```
To see what's going on behind the scenes, we use the `explain()` method:

```scala
membersMapped.explain()
```
```
== Physical Plan ==
*(1) SerializeFromObject [input[0, double, false] AS value#19]
+- *(1) MapElements <function1>, obj#18: double
   +- *(1) DeserializeToObject newInstance(class MyClass), obj#17: MyClass
      +- LocalTableScan [member#12, foo#13, bar#14]
```
This makes the serialization to and from Tungsten explicit.
Let's get to the same value using a projection[^1]:
```scala
val ds2: Dataset[Double] = ds.select($"member.something".as[Double])
ds2.explain()
```
```
== Physical Plan ==
LocalTableScan [something#25]
```
That's it! A single step[^2]. No serialization other than the encoding of `MyClass` into the original `Dataset`.
[^1]: The reason the projection is defined as `$"member.something"` rather than `$"value.member.something"` has to do with Catalyst automatically projecting the members of a single-column DataFrame.
[^2]: To be fair, the `*` next to the steps in the first physical plan indicates that they will be implemented by a `WholeStageCodegenExec`, whereby those steps become a single, on-the-fly compiled JVM function that has its own set of runtime optimizations applied to it. So in practice you would have to test the performance empirically to really assess the benefit of each approach.
In the Scala DSL for `select`, there are many ways to identify a `Column`:

- `'name`
- `$"name"` or `col("name")`
- `expr("nvl(name, 'unknown') as renamed")`
To get a `TypedColumn` from a `Column`, you simply use `myCol.as[T]`.

For example: `ds.select(col("name").as[String])`
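As a fuller sketch (using a hypothetical `Person` case class, introduced here only for illustration, and again assuming a `SparkSession` named `spark`), note the difference in return types between the untyped and typed forms of `select`:

```scala
import org.apache.spark.sql.{DataFrame, Dataset}
import org.apache.spark.sql.functions.col
import spark.implicits._

// Hypothetical model, defined at the top level so an Encoder can be derived.
case class Person(name: String, age: Int)

val people: Dataset[Person] = Seq(Person("Ann", 35), Person("Bob", 28)).toDS()

// Selecting a plain Column yields an untyped DataFrame (Dataset[Row]) ...
val untyped: DataFrame = people.select(col("name"))

// ... while selecting a TypedColumn keeps a typed Dataset.
val typed: Dataset[String] = people.select(col("name").as[String])
```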