Question
Hi, I am new to Azure Data Factory and not at all familiar with the back-end processing that runs behind the scenes. I am wondering whether there is a performance impact to running a couple of data flows in parallel compared with having all the transformations in one data flow.
I am trying to stage some data with a not-exists transformation, and I have to do it for multiple tables. When I test-ran two data flows in parallel, the clusters were brought up simultaneously for both data flows. But I am not sure whether it is better to distribute the loading of the tables across a couple of data flows or to have all the transformations in one data flow.
Answer 1:
1: If you execute data flows in parallel in a pipeline, ADF will spin up a separate Spark cluster for each one, based on the settings in the Azure Integration Runtime attached to each activity (see the pipeline sketch after this list).
2: If you put all of your logic inside a single data flow, then it will all execute in the same job execution context on a single Spark cluster instance.
3: Another option is to execute the activities serially in the pipeline. If you have set a TTL on the Azure IR configuration, ADF will reuse the compute resources (VMs), but you will still get a brand-new Spark context for each execution (a sample IR configuration with a TTL is sketched at the end of this answer).
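To make option 1 concrete, here is a minimal sketch of a pipeline definition with two Execute Data Flow activities and no dependencies between them, so they run in parallel. The pipeline and data flow names (StagePipeline, StageTableA, StageTableB) are placeholders for this example, not anything from the original question:

```json
{
    "name": "StagePipeline",
    "properties": {
        "activities": [
            {
                "name": "Stage Table A",
                "type": "ExecuteDataFlow",
                "dependsOn": [],
                "typeProperties": {
                    "dataFlow": {
                        "referenceName": "StageTableA",
                        "type": "DataFlowReference"
                    }
                }
            },
            {
                "name": "Stage Table B",
                "type": "ExecuteDataFlow",
                "dependsOn": [],
                "typeProperties": {
                    "dataFlow": {
                        "referenceName": "StageTableB",
                        "type": "DataFlowReference"
                    }
                }
            }
        ]
    }
}
```

To run them serially instead (option 3), add a dependency to the second activity so it only starts after the first succeeds:

```json
"dependsOn": [
    {
        "activity": "Stage Table A",
        "dependencyConditions": [ "Succeeded" ]
    }
]
```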
All are valid practices, and the one you choose should be driven by the requirements of your ETL process.
No. 3 will likely take the longest to execute end-to-end, but it gives you a clean separation of operations in each data flow step.
No. 2 could be more difficult to follow logically and doesn't give you much reusability.
No. 1 is really similar to No. 3, except that you run them all in parallel. Of course, not every end-to-end process can run in parallel; you may require a data flow to finish before starting the next, in which case you're back to the serial mode of No. 3.
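For reference, the TTL mentioned in option 3 is set on the data flow compute properties of the Azure IR. A minimal sketch of such an IR definition follows; the name AzureIR-DataFlows-TTL and the sizing values (computeType, coreCount, timeToLive in minutes) are illustrative placeholders, not recommendations:

```json
{
    "name": "AzureIR-DataFlows-TTL",
    "properties": {
        "type": "Managed",
        "typeProperties": {
            "computeProperties": {
                "location": "AutoResolve",
                "dataFlowProperties": {
                    "computeType": "General",
                    "coreCount": 8,
                    "timeToLive": 10
                }
            }
        }
    }
}
```

With a TTL like this, the VMs stay warm for that many minutes after a data flow finishes, so serially executed data flows skip the cluster cold start even though each one still gets its own fresh Spark context.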
Source: https://stackoverflow.com/questions/58453457/multiple-data-flows-vs-all-transformations-in-one