using pyspark, read/write 2D images on hadoop file system

Asked 2020-12-30 12:31

I want to be able to read/write images on an HDFS file system and take advantage of HDFS data locality.

I have a collection of images where each image is composed …

1 Answer
  • 2020-12-30 12:57

    I have found a solution that works: using PySpark 1.2.0's binaryFiles does the job. It is flagged as experimental, but I was able to read TIFF images by combining it with OpenCV.

    import cv2
    import numpy as np

    # `sc` is the SparkContext provided by the pyspark shell.
    # Build an RDD of (path, bytes) pairs and take one element for testing.
    L = sc.binaryFiles('hdfs://localhost:9000/*.tif').take(1)

    # Convert the raw bytes of the first file to a flat uint8 NumPy array.
    file_bytes = np.asarray(bytearray(L[0][1]), dtype=np.uint8)

    # Use OpenCV to decode the byte buffer into a BGR image array.
    R = cv2.imdecode(file_bytes, cv2.IMREAD_COLOR)

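    Note that take(1) pulls the raw bytes back to the driver, so it is only useful for testing. To actually benefit from HDFS locality, the decoding should run on the executors, for example with mapValues. A minimal sketch under the same assumptions (a pyspark shell providing sc, the same hypothetical HDFS path, and cv2/numpy installed on every worker node):

    # Decode every image on the executors, where the HDFS blocks live,
    # instead of collecting raw bytes to the driver.
    images = (sc.binaryFiles('hdfs://localhost:9000/*.tif')
                .mapValues(lambda data: cv2.imdecode(
                    np.asarray(bytearray(data), dtype=np.uint8),
                    cv2.IMREAD_COLOR)))

    # Example action: report the shape of each decoded image.
    print(images.mapValues(lambda img: img.shape).collect())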
    Note the help text for PySpark's binaryFiles:

    binaryFiles(path, minPartitions=None)
    
        :: Experimental
    
        Read a directory of binary files from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI as a byte array. Each file is read as a single record and returned in a key-value pair, where the key is the path of each file, the value is the content of each file.
    
        Note: Small files are preferred, large file is also allowable, but may cause bad performance.
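    For the write direction the question also asks about, PySpark 1.x has no direct counterpart to binaryFiles. One workaround is to re-encode each decoded array with OpenCV and persist the (path, bytes) pairs as a Hadoop SequenceFile, where bytearray values map to BytesWritable. A hedged sketch, reusing the images RDD from above (the output path is hypothetical):

    # Re-encode each image as PNG bytes; imencode returns a
    # (success_flag, 1-D uint8 array) tuple, so index [1] is the buffer.
    encoded = images.mapValues(
        lambda img: bytearray(cv2.imencode('.png', img)[1].tobytes()))

    # Persist (path, bytes) pairs back to HDFS as a SequenceFile.
    encoded.saveAsSequenceFile('hdfs://localhost:9000/encoded_images')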
    