Spark 1.4.1
I ran into a situation where calling groupBy on a DataFrame, then count, then filtering on the resulting 'count' column raises an exception.
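The original snippet was truncated, but a minimal reproduction looks roughly like this, sketched in Scala (the sqlContext setup and the sample data are my assumptions, not the original code):

import org.apache.spark.sql.SQLContext

val sqlContext = new SQLContext(sc)  // assumes an existing SparkContext `sc`
import sqlContext.implicits._

// Illustrative data; any DataFrame exhibits the same behavior.
val df = Seq(("a", 1), ("a", 2), ("b", 3)).toDF("key", "value")

// groupBy().count() appends a column literally named "count".
val counted = df.groupBy("key").count()

// This fails: the SQL expression parser reads "count" as the
// aggregate function rather than the column name.
counted.filter("count >= 2")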
I think a workaround is to wrap count in backticks:
.filter("`count` >= 2")
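The backticks force the parser to treat count as a column identifier. Another option that sidesteps the string parser entirely is the Column API, shown here against the counted DataFrame from the sketch above:

counted.filter(counted("count") >= 2)
// or, with the implicits imported above:
counted.filter($"count" >= 2)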
Related thread on the user mailing list:
http://mail-archives.us.apache.org/mod_mbox/spark-user/201507.mbox/%3C8E43A71610EAA94A9171F8AFCC44E351B48EDF@fmsmsx124.amr.corp.intel.com%3E