Parallelism in Spark Job Server
Question: We are working on Qubole with Spark version 2.0.2. We have a multi-step process in which every intermediate step writes its output to HDFS, and this output is later consumed by the reporting layer. For our use case we want to avoid writing to HDFS, keep all the intermediate output as temporary tables in Spark, and write out only the final reporting-layer output. For this implementation we wanted to use the Job Server provided by Qubole, but when we try to trigger multiple queries on the Job