reading data from URL using spark databricks platform

野趣味 2021-02-09 17:20

Trying to read data from a URL using Spark on the Databricks Community Edition platform. I tried to use spark.read.csv and SparkFiles, but I am still missing some simple point.
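
A minimal sketch of the kind of attempt that typically fails here (my assumption of what was tried, not code from the post): pointing spark.read.csv directly at an http(s) URL, which Spark/Hadoop does not recognise as a filesystem scheme.

    // Assumed failing attempt: Spark cannot read an http(s) URL as a path
    val url = "https://raw.githubusercontent.com/lrjoshi/webpage/master/public/post/c159s.csv"
    val df = spark.read
      .option("header", true)
      .csv(url)   // typically fails: "https" is not a supported filesystem scheme
    df.show()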

2 answers
  •  醉话见心
    2021-02-09 17:56

    The answer above works, but it can be error-prone; sometimes SparkFiles.get will return null.

    Option 1 below is the more robust way of getting a file from any URL or public S3 location.
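
    As a rough illustration of that failure mode (a sketch under my assumption that the symptom is a missing local copy of the file rather than a literal null), one can check whether the path returned by SparkFiles.get actually exists before reading it:

    import java.nio.file.{Files, Paths}
    import org.apache.spark.SparkFiles

    // Defensive check (assumed pattern): verify the downloaded copy exists before spark.read
    // Assumes spark.sparkContext.addFile(url) was already called, as in Option 2 below
    val localPath = SparkFiles.get("c159s.csv")
    if (Files.exists(Paths.get(localPath))) {
      val df = spark.read.option("header", true).csv("file://" + localPath)
      df.show()
    } else {
      println(s"File was not found at $localPath -- fall back to Option 1 (IOUtils)")
    }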


    Option 1 :

    IOUtils.toString will do the trick; see the Apache Commons IO docs. The commons-io jar is already present in any Spark cluster, whether it is Databricks or any other Spark installation.

    Below is the Scala way of doing this. I have used a raw GitHub CSV file for this example; change it based on your requirements.

    import org.apache.commons.io.IOUtils // commons-io ships with every Spark distribution, including Databricks
    import java.net.URL
    import spark.implicits._             // needed for .toDS()

    // Fetch the raw CSV text from the URL on the driver
    val urlfile = new URL("https://raw.githubusercontent.com/lrjoshi/webpage/master/public/post/c159s.csv")

    // Split the text into lines and wrap it in a Dataset[String]
    val testcsvgit = IOUtils.toString(urlfile, "UTF-8").split("\n").toList.toDS()

    // spark.read.csv accepts a Dataset[String] (Spark 2.2+)
    val testcsv = spark.read
      .option("header", true)
      .option("inferSchema", true)
      .csv(testcsvgit)

    testcsv.show()


    Result :

    +-----------+------+----+----+---+-----+
    |Experiment |Virus |Cell| MOI|hpi|Titer|
    +-----------+------+----+----+---+-----+
    |      EXP I| C159S|OFTu| 0.1|  0| 4.75|
    |      EXP I| C159S|OFTu| 0.1|  6| 2.75|
    |      EXP I| C159S|OFTu| 0.1| 12| 2.75|
    |      EXP I| C159S|OFTu| 0.1| 24|  5.0|
    |      EXP I| C159S|OFTu| 0.1| 48|  5.5|
    |      EXP I| C159S|OFTu| 0.1| 72|  7.0|
    |      EXP I| C159S| STU| 0.1|  0| 4.75|
    |      EXP I| C159S| STU| 0.1|  6| 3.75|
    |      EXP I| C159S| STU| 0.1| 12|  4.0|
    |      EXP I| C159S| STU| 0.1| 24| 3.75|
    |      EXP I| C159S| STU| 0.1| 48| 3.25|
    |      EXP I| C159S| STU| 0.1| 72| 3.25|
    |      EXP I| C159S|OFTu|10.0|  0|  6.5|
    |      EXP I| C159S|OFTu|10.0|  6| 4.75|
    |      EXP I| C159S|OFTu|10.0| 12| 4.75|
    |      EXP I| C159S|OFTu|10.0| 24| 6.25|
    |      EXP I| C159S|OFTu|10.0| 48|  6.5|
    |      EXP I| C159S|OFTu|10.0| 72|  7.0|
    |      EXP I| C159S| STU|10.0|  0|  7.0|
    |      EXP I| C159S| STU|10.0|  6| 4.75|
    +-----------+------+----+----+---+-----+
    only showing top 20 rows
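
    The same IOUtils pattern works for any public HTTP(S) endpoint, for example an object exposed from a public S3 bucket. A sketch with a hypothetical bucket and key (placeholders, not from the original answer):

    import org.apache.commons.io.IOUtils
    import java.net.URL
    import spark.implicits._

    // Hypothetical public S3 object served over HTTPS (bucket and key are placeholders)
    val s3url = new URL("https://my-public-bucket.s3.amazonaws.com/data/sample.csv")
    val s3lines = IOUtils.toString(s3url, "UTF-8").split("\n").toList.toDS()
    val s3df = spark.read.option("header", true).option("inferSchema", true).csv(s3lines)
    s3df.show()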
    

    Option 2 : in Scala

    import org.apache.spark.SparkFiles

    // Download the file to the cluster via SparkContext.addFile
    val urlfile = "https://raw.githubusercontent.com/lrjoshi/webpage/master/public/post/c159s.csv"
    spark.sparkContext.addFile(urlfile)

    // SparkFiles.get returns the local path the file was downloaded to
    val df = spark.read
      .option("inferSchema", true)
      .option("header", true)
      .csv("file://" + SparkFiles.get("c159s.csv"))

    df.show()


    Result : the same as Option 1.
