I am running a PySpark job in Databricks Cloud. I need to write some CSV files to the Databricks File System (DBFS) as part of this job, and I also need to use some of the dbutils functionality while doing so.
Try this helper, which resolves a dbutils handle both on a cluster job and inside a notebook:
def get_dbutils(spark):
    try:
        # On a Databricks cluster (e.g. a job run), DBUtils can be
        # constructed directly from the SparkSession.
        from pyspark.dbutils import DBUtils
        dbutils = DBUtils(spark)
    except ImportError:
        # Inside a notebook, fall back to the dbutils instance that
        # Databricks injects into the IPython user namespace.
        import IPython
        dbutils = IPython.get_ipython().user_ns["dbutils"]
    return dbutils

dbutils = get_dbutils(spark)
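Once dbutils is resolved this way, it can be used alongside the regular DataFrame writer to put CSV output on DBFS and then inspect it. A minimal sketch, assuming an existing SparkSession named spark; the DataFrame contents and the output path /tmp/my_output are only placeholders:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
dbutils = get_dbutils(spark)

# Write a small example DataFrame as CSV to DBFS.
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])
df.write.mode("overwrite").csv("dbfs:/tmp/my_output", header=True)

# Use dbutils to list the files that were written.
for f in dbutils.fs.ls("/tmp/my_output"):
    print(f.path, f.size)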