Data Governance solution for Databricks, Synapse and ADLS gen2

再見小時候 2021-02-10 19:25

I'm new to data governance, so forgive me if the question lacks some information.

Objective

We're building a data lake & enterprise data warehouse from scratch for

3 Answers
  •  再見小時候
    2021-02-10 20:01

    I am currently exploring Immuta and Privacera, so I can't yet comment in detail on the differences between the two. So far, Immuta has given me the better impression with its elegant policy-based setup.

    Still, there are ways to solve some of the issues you mentioned above without buying an external component:

    1. Security

    • For row-level security (RLS), consider using Table ACLs and granting access only to certain Hive views (a minimal sketch follows this list).

    • For getting access to data inside ADLS, look at enabling credential passthrough on clusters. Unfortunately, that disables Scala on those clusters.

    • You still need to set up permissions on Azure Data Lake Gen 2 itself, which is an awful experience when granting permissions on existing child items.

    • Please avoid creating dataset copies with column/row subsets, as data duplication is never a good idea.
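
    As a rough illustration of the Table ACL / Hive view approach above, here is a minimal sketch. The table, view, and group names (curated.sales_raw, curated.sales_emea, emea_analysts) are made up, the exact GRANT/DENY syntax can differ slightly between Databricks runtime versions, and it assumes table access control is enabled on the cluster:

      # Minimal sketch: expose only a row subset through a view and grant the
      # group access to the view, not the underlying table. All object and
      # group names below are illustrative.
      from pyspark.sql import SparkSession

      spark = SparkSession.builder.getOrCreate()

      # View that implements the row-level filter.
      spark.sql("""
          CREATE VIEW IF NOT EXISTS curated.sales_emea AS
          SELECT * FROM curated.sales_raw WHERE region = 'EMEA'
      """)

      # Table ACLs: the view is granted like a table, the base table is denied.
      spark.sql("GRANT SELECT ON TABLE curated.sales_emea TO `emea_analysts`")
      spark.sql("DENY SELECT ON TABLE curated.sales_raw TO `emea_analysts`")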

    2. Lineage

    • One option would be to look into Apache Atlas & Spline. Here is one example of how to set this up: https://medium.com/@reenugrewal/data-lineage-tracking-using-spline-on-atlas-via-event-hub-6816be0fd5c7
    • Unfortunately, Spline is still under development; even reproducing the setup mentioned in the article is not straightforward. The good news is that Apache Atlas 3.0 has many type definitions available for Azure Data Lake Gen 2 and other sources.
    • In a few projects, I ended up creating custom logging of reads/writes (it seems you went down this path as well). Based on these logs, I created a Power BI report to visualize the lineage (a rough sketch of such logging follows this list).
    • Consider using Azure Data Factory for orchestration. With a proper ADF pipeline structure, you get high-level lineage that helps you see dependencies and rerun failed activities. You can read a bit more here: https://mrpaulandrew.com/2020/07/01/adf-procfwk-v1-8-complete-pipeline-dependency-chains-for-failure-handling/
    • Take a look at Marquez https://marquezproject.github.io/marquez/. It's a small open-source project with some nice features, including data lineage.
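
    Since the custom read/write logging came up above, here is a rough sketch of what it could look like. The governance.lineage_log table, its columns, and the log_lineage helper are assumptions for illustration, not an existing API; the idea is simply to append one record per read/write pair and build the Power BI lineage report on top of that table:

      # Illustrative sketch of custom lineage logging: each ETL notebook calls
      # log_lineage() around its reads/writes; a Power BI report visualizes the
      # resulting Delta table. Names and paths below are placeholders.
      import datetime

      from pyspark.sql import Row, SparkSession

      spark = SparkSession.builder.getOrCreate()

      def log_lineage(job_name: str, source_path: str, target_path: str) -> None:
          """Append one lineage record for a single read/write pair."""
          record = Row(
              job_name=job_name,
              source_path=source_path,
              target_path=target_path,
              logged_at=datetime.datetime.utcnow().isoformat(),
          )
          (spark.createDataFrame([record])
                .write.format("delta")
                .mode("append")
                .saveAsTable("governance.lineage_log"))

      # Example usage inside an ETL notebook:
      df = spark.read.format("delta").load("/mnt/bronze/sales")
      df.write.format("delta").mode("overwrite").save("/mnt/silver/sales")
      log_lineage("sales_bronze_to_silver", "/mnt/bronze/sales", "/mnt/silver/sales")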

    3. Data quality

    • Investigate Amazon Deequ - Scala only so far, but it has some nice predefined data quality checks.
    • In many projects, we ended up writing integration tests that check data quality when moving from bronze (raw) to silver (standardized). Nothing fancy, pure PySpark (see the sketch after this list).
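
    A minimal sketch of the kind of "nothing fancy, pure PySpark" bronze-to-silver assertions we run as integration tests; the paths, table layout, and column names are placeholders:

      # Simple bronze -> silver data quality checks, run as an integration test.
      # All paths and column names are illustrative.
      from pyspark.sql import SparkSession
      from pyspark.sql import functions as F

      spark = SparkSession.builder.getOrCreate()

      bronze = spark.read.format("delta").load("/mnt/bronze/customers")
      silver = spark.read.format("delta").load("/mnt/silver/customers")

      # 1. No rows lost between deduplicated bronze and silver.
      assert silver.count() >= bronze.dropDuplicates(["customer_id"]).count(), \
          "Silver has fewer rows than deduplicated bronze"

      # 2. Business key is never null and is unique in silver.
      assert silver.filter(F.col("customer_id").isNull()).count() == 0, \
          "Null customer_id in silver"
      assert silver.count() == silver.select("customer_id").distinct().count(), \
          "Duplicate customer_id in silver"

      # 3. Simple domain check on a standardized column.
      assert silver.filter(~F.col("country_code").rlike("^[A-Z]{2}$")).count() == 0, \
          "Unexpected country_code format in silver"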

    4. Data life cycle management

    • One option is to use the data lake storage's native lifecycle management. That's not a viable option for data stored in Delta/Parquet formats, though.

    • If you use the Delta format, you can more easily apply retention policies or pseudonymize data.

    • Second option: imagine that you have a table with information about all datasets (dataset_friendly_name, path, retention time, zone, sensitive_columns, owner, etc.). Your Databricks users then use a small wrapper to read/write:

      DataWrapper.Read("dataset_friendly_name")

      DataWrapper.Write("destination_dataset_friendly_name")

    It's up to you then to implement the logging and data loading behind the scenes. In addition, you can skip sensitive_columns or act based on retention time (both available in the dataset info table). It requires quite some effort (a rough sketch follows this list).

    • You can always expand this table to a more advanced schema, adding extra information about pipelines, dependencies, etc. (see 2.4).
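
    To make the wrapper idea more concrete, here is a very rough sketch. The governance.datasets metadata table, its columns (friendly_name, path, sensitive_columns, ...), and the DataWrapper class are assumptions, not an existing library; the Read/Write names simply mirror the calls above:

      # Rough sketch of the DataWrapper idea: dataset metadata is looked up in a
      # governance table, sensitive columns are dropped on read, and all I/O goes
      # through one place where logging and retention handling can live.
      from pyspark.sql import DataFrame, SparkSession
      from pyspark.sql import functions as F

      spark = SparkSession.builder.getOrCreate()

      class DataWrapper:

          @staticmethod
          def _dataset_info(friendly_name: str):
              # One row per dataset: path, sensitive_columns, retention, zone, owner, ...
              return (spark.table("governance.datasets")
                           .filter(F.col("friendly_name") == friendly_name)
                           .first())

          @staticmethod
          def Read(friendly_name: str) -> DataFrame:
              info = DataWrapper._dataset_info(friendly_name)
              df = spark.read.format("delta").load(info.path)
              # Drop sensitive columns for ordinary consumers; the read could also
              # be logged here for lineage (see section 2).
              sensitive = [c for c in (info.sensitive_columns or "").split(",") if c]
              return df.drop(*sensitive)

          @staticmethod
          def Write(df: DataFrame, friendly_name: str) -> None:
              info = DataWrapper._dataset_info(friendly_name)
              # Central place to log the write and enforce format/retention conventions.
              df.write.format("delta").mode("overwrite").save(info.path)

      # Usage mirrors the calls above (the DataFrame is passed explicitly on write):
      df = DataWrapper.Read("dataset_friendly_name")
      DataWrapper.Write(df, "destination_dataset_friendly_name")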

    Hopefully you find something useful in my answer. It would be interesting to know which path you took.
