Reading a file in HDFS from PySpark

太阳男子 2021-02-02 01:40

I'm trying to read a file in my HDFS. Here's a listing of my Hadoop file structure:

hduser@GVM:/usr/local/spark/bin$ hadoop fs -ls -R /
drwxr-xr-x   - hduser s

4 Answers
  • 2021-02-02 02:04

    You can access HDFS files via the full path if no configuration is provided (namenodehost is your localhost if HDFS runs in your local environment):

    hdfs://namenodehost/inputFiles/CountOfMonteCristo/BookText.txt
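
    For example, to read it with PySpark (a sketch assuming spark is an existing SparkSession and namenodehost resolves to your namenode):

    # Read the file through the fully qualified HDFS URI
    df = spark.read.text('hdfs://namenodehost/inputFiles/CountOfMonteCristo/BookText.txt')
    df.show(5, truncate=False)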
    
  • 2021-02-02 02:06

    There are two general ways to read files in Spark: one for huge, distributed files that are processed in parallel, and one for small files such as lookup tables and configuration files on HDFS. For the latter, you may want to read the file on the driver node or on each worker as a single read (not a distributed read). In that case, use the SparkFiles module as below; a sketch of the distributed case follows the example.

    import json

    from pyspark import SparkFiles

    # spark is a SparkSession instance
    spark.sparkContext.addFile('hdfs:///user/bekce/myfile.json')
    with open(SparkFiles.get('myfile.json'), 'rb') as handle:
        j = json.load(handle)
        # ... work with the parsed object j here
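
    For the former (a distributed, parallel read of a large file), a minimal sketch could look like this, reusing the file from the question and again assuming spark is an existing SparkSession:

    # Distributed read: partitions are processed in parallel across the cluster
    df = spark.read.text('hdfs:///inputFiles/CountOfMonteCristo/BookText.txt')
    print(df.count())  # e.g. the number of lines in the file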
    
  • 2021-02-02 02:13

    Since you don't provide an authority, the URI should look like this:

    hdfs:///inputFiles/CountOfMonteCristo/BookText.txt
    

    Otherwise inputFiles is interpreted as a hostname. With the correct configuration you shouldn't need the scheme at all and can use:

    /inputFiles/CountOfMonteCristo/BookText.txt
    

    instead.
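
    For example, a minimal PySpark sketch, assuming spark is an existing SparkSession and fs.defaultFS points at your namenode:

    # Scheme-less path; resolved against fs.defaultFS from the cluster configuration
    rdd = spark.sparkContext.textFile('/inputFiles/CountOfMonteCristo/BookText.txt')
    print(rdd.take(5))  # first five lines of the book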

  • 2021-02-02 02:15

    First, you need to run:

    export PYSPARK_PYTHON=python3.4  # or whatever your Python version is
    

    Code:

    from pyspark.sql import SparkSession

    # One SparkSession is enough; its SparkContext is available as spark.sparkContext
    spark = SparkSession.builder.appName("HDFS").getOrCreate()
    spark.sparkContext.setLogLevel("ERROR")

    data = [('First', 1), ('Second', 2), ('Third', 3), ('Fourth', 4), ('Fifth', 5)]
    df = spark.createDataFrame(data)

    # Write the DataFrame to HDFS as CSV
    df.write.csv("hdfs:///mnt/data/")
    print("Data Written")
    

    To execute the code:

    spark-submit --master yarn --deploy-mode client <py file>
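
    Since the question is about reading from HDFS, the written data can be read back the same way (a sketch reusing the session and path from above):

    # Read the CSV back from HDFS into a DataFrame
    df2 = spark.read.csv("hdfs:///mnt/data/")
    df2.show()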
    