I want to find duplicate records in a Spark Scala DataFrame. For example, I want to find duplicates based on 3 columns like "id", "name", "age". The condition is that the columns (col1, col2, ...) should be of string type.
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, count}

// Partition by the columns that define a duplicate, then keep the rows
// whose group appears more than once.
val window = Window.partitionBy(col("id"), col("name"), col("age"))

findDuplicateRecordsDF
  .withColumn("count", count("*").over(window))
  .where(col("count") > 1)
  .show()
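
If you only need each duplicate key once (rather than every duplicated row), a groupBy aggregation works as well. A minimal sketch, assuming the same findDuplicateRecordsDF and the three key columns from the question:

import org.apache.spark.sql.functions.{col, count}

// Group by the key columns and keep the groups that occur more than once.
val duplicateKeys = findDuplicateRecordsDF
  .groupBy("id", "name", "age")
  .agg(count("*").as("count"))
  .where(col("count") > 1)

duplicateKeys.show()

You can join duplicateKeys back to the original DataFrame on the three columns if you need the full duplicated rows; the window approach above gives you those rows directly.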