How to mask columns using Spark 2?

Question


I have some tables in which I need to mask some of the columns. The columns to be masked vary from table to table, and I am reading them from an application.conf file.
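For reference, a minimal sketch of how such a list might be read with Typesafe Config (the key name mask.columns is hypothetical, not from the original question):

import com.typesafe.config.ConfigFactory
import scala.collection.JavaConverters._

// application.conf (hypothetical) might contain: mask.columns = ["name", "age"]
val config = ConfigFactory.load()
val mask: Seq[String] = config.getStringList("mask.columns").asScala.toSeq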

For example, for the employee table shown below

+----+------+-----+---------+
| id | name | age | address |
+----+------+-----+---------+
| 1  | abcd | 21  | India   |
+----+------+-----+---------+
| 2  | qazx | 42  | Germany |
+----+------+-----+---------+

if we want to mask the name and age columns, I get those column names in a sequence:

val mask = Seq("name", "age")

Expected values after masking are:

+----+----------------+----------------+---------+
| id | name           | age            | address |
+----+----------------+----------------+---------+
| 1  | *** Masked *** | *** Masked *** | India   |
+----+----------------+----------------+---------+
| 2  | *** Masked *** | *** Masked *** | Germany |
+----+----------------+----------------+---------+

If I have the employee table as a data frame, what is the way to mask these columns?

Similarly, if I have the payment table shown below and want to mask the name and salary columns, I get the mask columns in a sequence:

+----+------+--------+----------+
| id | name | salary | tax_code |
+----+------+--------+----------+
| 1  | abcd | 12345  | KT10     |
+----+------+--------+----------+
| 2  | qazx | 98765  | AD12d    |
+----+------+--------+----------+
val mask = Seq("name", "salary")

I tried something like this:

mask.foreach(c => base.withColumn(c, regexp_replace(col(c), "^.*?$", "*** Masked ***")))

but it did not return anything.


Thanks to @philantrovert, I found the solution. Here is what I used:

import org.apache.spark.sql.DataFrame

def maskData(base: DataFrame, maskColumns: Seq[String]) = {
    // masked columns become a literal, the rest pass through unchanged
    val maskExpr = base.columns.map { col => if(maskColumns.contains(col)) s"'*** Masked ***' as ${col}" else col }
    base.selectExpr(maskExpr: _*)
}
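
A quick usage sketch of that helper (assuming base holds the employee DataFrame from above):

maskData(base, Seq("name", "age")).show()

Because everything goes through a single selectExpr projection, all masked columns are replaced in one pass instead of chaining a withColumn call per column.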

Answer 1:


Your statement

mask.foreach(c => base.withColumn(c, regexp_replace(col(c), "^.*?$", "*** Masked ***" ) ) )

calls foreach, which returns Unit and discards every DataFrame that withColumn creates, so base is never changed. (Using map instead would give you a List[org.apache.spark.sql.DataFrame], which is not what you want either.)

You can use selectExpr and generate the regexp_replace expressions with:

base.show
+---+----+---+-------+
| id|name|age|address|
+---+----+---+-------+
|  1|abcd| 21|  India|
|  2|qazx| 42|Germany|
+---+----+---+-------+

val mask = Seq("name", "age")
val expr = base.columns.map { col =>
   if (mask.contains(col)) s"""regexp_replace(${col}, "^.*", "*** Masked ***") as ${col}"""
   else col
 }

This generates a regexp_replace expression for each column that is present in the sequence mask:

Array[String] = Array(id, regexp_replace(name, "^.*", "*** Masked ***") as name, regexp_replace(age, "^.*", "*** Masked ***") as age, address)

Now you can use selectExpr on the generated sequence:

base.selectExpr(expr: _*).show

+---+--------------+--------------+-------+
| id|          name|           age|address|
+---+--------------+--------------+-------+
|  1|*** Masked ***|*** Masked ***|  India|
|  2|*** Masked ***|*** Masked ***|Germany|
+---+--------------+--------------+-------+
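
One caveat, as an aside: if a column name contains spaces or reserved words, selectExpr needs it backtick-quoted. A sketch of the same expression builder with quoting added (my addition, not part of the original answer):

val expr = base.columns.map { col =>
   // backticks protect names like `first name` inside the SQL expression
   if (mask.contains(col)) s"""regexp_replace(`${col}`, "^.*", "*** Masked ***") as `${col}`"""
   else s"`${col}`"
 }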



Answer 2:


The simplest and fastest way would be to use withColumn and simply overwrite the values in the columns with "*** Masked ***". Using your small example dataframe:

import spark.implicits._

val df = spark.sparkContext.parallelize(Seq(
  (1, "abcd", 12345, "KT10"),
  (2, "qazx", 98765, "AD12d")
)).toDF("id", "name", "salary", "tax_code")

If you have a small number of columns to be masked, with known names, then you can simply do:

val mask = Seq("name", "salary")

df.withColumn("name", lit("*** Masked ***"))
  .withColumn("salary", lit("*** Masked ***"))

Otherwise, you need to create a loop:

var df2 = df
for (col <- mask){
  df2 = df2.withColumn(col, lit("*** Masked ***"))
}
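
If you prefer to avoid the var, the same loop can be written as a foldLeft (a minimal sketch using the same df and mask):

val df2 = mask.foldLeft(df) { (acc, c) =>
  // overwrite each masked column on the accumulated DataFrame
  acc.withColumn(c, lit("*** Masked ***"))
}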

Both these approaches will give you a result like this:

+---+--------------+--------------+--------+
| id|          name|        salary|tax_code|
+---+--------------+--------------+--------+
|  1|*** Masked ***|*** Masked ***|    KT10|
|  2|*** Masked ***|*** Masked ***|   AD12d|
+---+--------------+--------------+--------+
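
Equivalently, the whole replacement can be done in a single select with the DataFrame API, which avoids stacking one projection per withColumn (a sketch using the same df and mask):

import org.apache.spark.sql.functions.{col, lit}

val masked = df.select(df.columns.map { c =>
  if (mask.contains(c)) lit("*** Masked ***").as(c) else col(c)
}: _*)
masked.show()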



Answer 3:


Please check the code below. The key is the udf function.

import org.apache.spark.sql.functions.udf
import ss.implicits._

val df = ss.sparkContext.parallelize(Seq(
  ("c1", "JAN-2017", 49),
  ("c1", "MAR-2017", 83)
)).toDF("city", "month", "sales")
df.show()

// always returns the mask string, regardless of the input value
val mask = udf((s: String) => "*** Masked ***")

df.withColumn("city", mask($"city")).show()


Source: https://stackoverflow.com/questions/46331734/how-to-mask-columns-using-spark-2
