How to use Hadoop InputFormats in Apache Spark?

Backend · Unresolved · 2 answers · 1613 views
难免孤独 · 2021-02-20 06:13

I have a class ImageInputFormat in Hadoop that reads images from HDFS. How can I use my InputFormat in Spark?

Here is my ImageInputFormat:

2 answers
  • 2021-02-20 07:07

    Will all the images be stored in a HadoopRDD?

    Yes, everything loaded into Spark is represented as RDDs.

    Can I set the RDD capacity so that, when the RDD is full, the remaining data is stored on disk?

    The default storage level in Spark is StorageLevel.MEMORY_ONLY. Use MEMORY_AND_DISK if you want partitions that do not fit in memory to spill to disk, or MEMORY_ONLY_SER, which is more space-efficient. Please refer to the Spark documentation > Scala programming guide > RDD persistence.

    Will the performance be affected if the data is too big?

    Yes, as the data size increases, performance will be affected as well.
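    A minimal sketch of the persistence options above, assuming `images` is the RDD of images loaded earlier (the variable name is hypothetical):

    ```scala
    import org.apache.spark.storage.StorageLevel

    // MEMORY_AND_DISK keeps as many partitions in memory as fit
    // and spills the rest to disk instead of recomputing them.
    images.persist(StorageLevel.MEMORY_AND_DISK)

    // Alternatively, store the RDD serialized in memory, which is more
    // compact but costs CPU to deserialize on each access:
    // images.persist(StorageLevel.MEMORY_ONLY_SER)
    ```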

  • 2021-02-20 07:10

    The SparkContext has a method called hadoopFile. It accepts classes implementing the interface org.apache.hadoop.mapred.InputFormat.

    Its description says "Get an RDD for a Hadoop file with an arbitrary InputFormat".

    Also have a look at the Spark Documentation.
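    A sketch of calling hadoopFile with a custom InputFormat. The key/value types Text and BytesWritable here are assumptions; use whatever types your ImageInputFormat's RecordReader actually emits:

    ```scala
    import org.apache.hadoop.io.{BytesWritable, Text}
    import org.apache.spark.{SparkConf, SparkContext}

    val sc = new SparkContext(new SparkConf().setAppName("ImageReader"))

    // Assumes ImageInputFormat implements org.apache.hadoop.mapred.InputFormat
    // with key type Text (e.g. the file name) and value type BytesWritable
    // (the raw image bytes) -- adjust these to your actual RecordReader types.
    val images = sc.hadoopFile(
      "hdfs:///path/to/images",
      classOf[ImageInputFormat],
      classOf[Text],
      classOf[BytesWritable])
    ```

    If your ImageInputFormat is written against the new Hadoop API (org.apache.hadoop.mapreduce.InputFormat), use sc.newAPIHadoopFile instead.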
