Is there a way to load multiple text files into a single dataframe using Databricks?
Question: I am trying to test a few ideas to recursively loop through all the files in a folder and its sub-folders, and load everything into a single dataframe. I have 12 different kinds of files, and the differences are based on the file naming conventions. So, I have file names that start with 'ABC', file names that start with 'CN', file names that start with 'CZ', and so on. I tried the following 3 ideas.

import pyspark
import os.path
from pyspark.sql import SQLContext
from pyspark.sql.functions import