I'm trying to read a file from my HDFS. Here's a listing of my Hadoop file structure.
hduser@GVM:/usr/local/spark/bin$ hadoop fs -ls -R /
drwxr-xr-x - hduser s
First, you need to point PySpark at your Python interpreter:
export PYSPARK_PYTHON=python3.4  # or whatever your Python version is
Then run the following code:
from pyspark.sql import SparkSession

# Create (or reuse) a SparkSession; its SparkContext is available
# as spark.sparkContext, so there is no need to build a second one.
spark = SparkSession.builder.appName("HDFS").getOrCreate()

# setLogLevel returns None, so just call it to reduce log noise
spark.sparkContext.setLogLevel("ERROR")

data = [('First', 1), ('Second', 2), ('Third', 3), ('Fourth', 4), ('Fifth', 5)]
df = spark.createDataFrame(data, ["name", "value"])

# Write the DataFrame to HDFS as CSV
df.write.csv("hdfs:///mnt/data/")
print("Data Written")
To execute the code, pass your script to spark-submit (replace your_script.py with your file name):
spark-submit --master yarn --deploy-mode client your_script.py