Spark SQL “No input paths specified in job” when creating a DataFrame from a JSON file

Backend · Unresolved · 4 answers · 580 views
自闭症患者 2021-01-23 13:39

I am a beginner in Spark, and I am trying to create a DataFrame from the contents of a JSON file using PySpark, following the guide: http://spark.apache.org/docs/1.6.1/sql-pro

4 Answers
  • 2021-01-23 14:29

    Try adding file:// at the beginning of your absolute path: df = sqlContext.read.json("file:///user/ABC/examples/src/main/resources/people.json")
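    As an illustration of that prefixing rule, here is a minimal pure-Python sketch (the helper name `to_local_uri` is hypothetical, not a Spark API; without the `file://` scheme, Spark resolves the path against its default filesystem, which is often HDFS):

```python
# Hypothetical helper: prefix an absolute local path with the file:// scheme
# so Spark reads it from the local filesystem instead of the default one.
def to_local_uri(path):
    if path.startswith("/"):
        # "file://" + an absolute path yields the file:///... form
        return "file://" + path
    return path

print(to_local_uri("/user/ABC/examples/src/main/resources/people.json"))
# file:///user/ABC/examples/src/main/resources/people.json
```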

  • 2021-01-23 14:30

    If you are running your code in local mode, then provide the complete path to your file.
    Suppose your file location is "/user/ABC/examples/src/main/resources/people.json". Then your code should look like this.

    df = sqlContext.read.json("/user/ABC/examples/src/main/resources/people.json")
    

    If you are running your code in yarn mode, then check that your file exists in HDFS and provide the complete location

    df = sqlContext.read.json("/user/ABC/examples/src/main/resources/people.json")
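    That local-vs-yarn decision can be sketched in plain Python (the helper `qualify_path` is made up for illustration; the bare `hdfs://` prefix assumes the default namenode from the cluster configuration):

```python
# Hypothetical sketch: pick a filesystem scheme from the Spark master setting.
def qualify_path(path, master):
    if master.startswith("local"):
        return "file://" + path   # local mode: read from the driver's filesystem
    return "hdfs://" + path       # yarn/cluster mode: expect the file in HDFS

print(qualify_path("/user/ABC/examples/src/main/resources/people.json", "local[*]"))
# file:///user/ABC/examples/src/main/resources/people.json
```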
    
  • 2021-01-23 14:36

    You must specify the file system protocol:

    • hdfs: Hadoop Distributed File System (used by default)
    • file: local file system
    • s3a / s3n: AWS S3
    • swift: OpenStack Swift

    The path must also exist on the machines where the Spark driver and worker(s) run.
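    To see how the scheme prefix is separated from the rest of the path, here is an illustrative snippet using Python's standard `urllib.parse` (the example URIs, host, and bucket names are assumptions, not from the question):

```python
from urllib.parse import urlparse

# Each URI names a different filesystem via its scheme; the scheme tells
# Spark/Hadoop which connector to use to resolve the rest of the path.
for uri in ("file:///tmp/people.json",
            "hdfs://namenode:8020/user/ABC/people.json",
            "s3a://my-bucket/people.json"):
    parsed = urlparse(uri)
    print(parsed.scheme, parsed.netloc, parsed.path)
```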

  • 2021-01-23 14:40

    I ran into this problem too; adding "file://" or "hdfs://" worked for me. Thanks for Jessika's answer!

    In conclusion, if your JSON file is on your local file system, use

    df = sqlContext.read.json("file:///user/ABC/examples/src/main/resources/people.json")
    

    Otherwise, if your JSON file is in HDFS, use

    df = sqlContext.read.json("hdfs://ip:port/user/ABC/examples/src/main/resources/people.json")
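    The conclusion above can be sketched as a small helper that prefers a local copy when one exists on the driver's machine (the function `json_uri` is hypothetical, and `ip:port` is left symbolic as in the answer):

```python
import os

# Hypothetical convenience: use the local copy when it exists on the
# driver's filesystem, otherwise fall back to HDFS (namenode left symbolic).
def json_uri(path, namenode="hdfs://ip:port"):
    if os.path.exists(path):
        return "file://" + path
    return namenode + path

print(json_uri("/user/ABC/examples/src/main/resources/people.json"))
```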
    