I'm trying to read a file in my HDFS. Here is how my Hadoop file structure looks.
hduser@GVM:/usr/local/spark/bin$ hadoop fs -ls -R /
drwxr-xr-x - hduser s
You can access HDFS files via the full path if no configuration is provided (namenodehost is your localhost if HDFS runs in your local environment).
hdfs://namenodehost/inputFiles/CountOfMonteCristo/BookText.txt
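For example, a minimal sketch with PySpark, assuming an existing SparkSession named spark:
# Read the book text via its full HDFS URI (namenodehost as above)
df = spark.read.text("hdfs://namenodehost/inputFiles/CountOfMonteCristo/BookText.txt")
df.show(5, truncate=False)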
There are two general ways to read files in Spark: one for huge distributed files, to process them in parallel, and one for reading small files such as lookup tables and configuration files on HDFS. For the latter, you might want to read the file on the driver node or on the workers as a single read (not a distributed read). In that case, you should use the SparkFiles module as below.
# spark is a SparkSession instance
import json
from pyspark import SparkFiles

# Ship the file to every node, then open the local copy
spark.sparkContext.addFile('hdfs:///user/bekce/myfile.json')
with open(SparkFiles.get('myfile.json'), 'rb') as handle:
    j = json.load(handle)
    or_do_whatever_with(j)
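For the former case (huge files read in parallel), you would use the ordinary distributed readers instead; a minimal sketch, where the file path is just a stand-in:
# Distributed read: partitions are processed in parallel across executors
rdd = spark.sparkContext.textFile('hdfs:///user/bekce/bigfile.txt')  # hypothetical path
df = spark.read.text('hdfs:///user/bekce/bigfile.txt')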
Since you don't provide an authority, the URI should look like this:
hdfs:///inputFiles/CountOfMonteCristo/BookText.txt
otherwise inputFiles
is interpreted as a hostname. With the correct configuration you shouldn't need a scheme at all and can use:
/inputFiles/CountOfMonteCristo/BookText.txt
instead.
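With that configuration in place, either form works; for example (a sketch, assuming a SparkSession named spark):
df = spark.read.text("/inputFiles/CountOfMonteCristo/BookText.txt")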
First, you need to run:
export PYSPARK_PYTHON=python3.4  # or whatever your Python version is
Code:
from pyspark.sql import SparkSession

# Create (or reuse) a SparkSession; its SparkContext is reused,
# so there is no need to build a separate SparkContext
spark = SparkSession.builder.appName("HDFS").getOrCreate()
spark.sparkContext.setLogLevel("ERROR")

data = [('First', 1), ('Second', 2), ('Third', 3), ('Fourth', 4), ('Fifth', 5)]
df = spark.createDataFrame(data)
df.write.csv("hdfs:///mnt/data/")
print("Data Written")
To execute the code:
spark-submit --master yarn --deploy-mode client <py file>
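To verify the write afterwards, you could read the directory back in the same session (a minimal sketch):
# Read back the CSV files written above
df2 = spark.read.csv("hdfs:///mnt/data/")
df2.show()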