Question

I'm using Spark SQL in a Java application to do some processing on CSV files, with the Databricks CSV library for parsing. The data I'm processing comes from different sources (a remote URL, a local file, Google Cloud Storage), and I'm in the habit of turning everything into an `InputStream` so that I can parse and process data without knowing where it came from. All the documentation I've seen on Spark reads files from a path, e.g.

```java
SparkConf conf = new SparkConf().setAppName("spark-sandbox").setMaster("local");
```
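To illustrate the setup, a source-agnostic helper along these lines (class and method names are my own, not from any library) is how each source ends up as an `InputStream`:

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;

public class StreamSources {

    // Open a remote URL as a stream.
    static InputStream fromUrl(String url) throws IOException {
        return new URL(url).openStream();
    }

    // Open a local file as a stream.
    static InputStream fromFile(String path) throws IOException {
        return Files.newInputStream(Paths.get(path));
    }

    // A Google Cloud Storage object would be exposed the same way,
    // via the GCS client library's read channel (omitted here).
}
```

Downstream code only ever sees an `InputStream`, which is exactly what I can't hand to Spark's path-based readers.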