Question
I have a Java Map variable, say Map<String, String> singleColMap. I want to add this Map variable to a dataset as a new column value in Spark 2.2 (Java 1.8).
I tried the code below, but it is not working:
ds.withColumn("cMap", lit(singleColMap).cast(MapType(StringType, StringType)))
Can someone help with this?
Answer 1:
You can use typedLit, which was introduced in Spark 2.2.0. From the documentation:
The difference between this function and lit is that this function can handle parameterized scala types e.g.: List, Seq and Map.
So in this case, the following should be enough:
ds.withColumn("cMap", typedLit(singleColMap))
Answer 2:
This can easily be solved in Scala with typedLit, but I couldn't find a way to make that method work in Java, because it requires a TypeTag, which I don't think is even possible to create in Java. However, I managed to mostly emulate in Java what typedLit does, bar the type inference part, so I need to set the Spark type explicitly:
// Requires org.apache.spark.sql.catalyst.expressions.Literal, scala.collection.JavaConverters,
// and static imports of DataTypes.StringType and DataTypes.createMapType.
public static Column typedMap(Map<String, String> map) {
    return new Column(Literal.create(JavaConverters.mapAsScalaMapConverter(map).asScala(),
            createMapType(StringType, StringType)));
}
Then it can be used like this:
ds.withColumn("cMap", typedMap(singleColMap))
Source: https://stackoverflow.com/questions/52417532/how-to-add-a-map-column-to-spark-dataset