Get HOG image features from OpenCV + Python?

小鲜肉 2020-11-30 18:58

I've read this post about how to use OpenCV's HOG-based pedestrian detector: How can I detect and track people using OpenCV?

I want to use HOG for detecting other kinds of objects in images (not just pedestrians). Is there a way to get the actual HOG features for an image from OpenCV's Python bindings?

7 Answers
  • 2020-11-30 19:36

    Although there is a built-in method, as mentioned in the other answers:

    hog = cv2.HOGDescriptor()

    I would like to post a Python implementation you can find in OpenCV's examples directory, hoping it can be useful for understanding HOG functionality:

    import cv2
    import numpy as np
    from numpy.linalg import norm

    def hog(img):
        # x and y gradients
        gx = cv2.Sobel(img, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(img, cv2.CV_32F, 0, 1)
        # per-pixel gradient magnitude and angle (in radians)
        mag, ang = cv2.cartToPolar(gx, gy)
        bin_n = 16  # number of orientation bins
        # quantize angles into integer bin indices 0..bin_n-1
        bin = np.int32(bin_n * ang / (2 * np.pi))

        bin_cells = []
        mag_cells = []

        cellx = celly = 8  # cell size in pixels

        # split the bin and magnitude images into non-overlapping cells
        # (integer division so this also runs on Python 3)
        for i in range(0, img.shape[0] // celly):
            for j in range(0, img.shape[1] // cellx):
                bin_cells.append(bin[i*celly : i*celly+celly, j*cellx : j*cellx+cellx])
                mag_cells.append(mag[i*celly : i*celly+celly, j*cellx : j*cellx+cellx])

        # one magnitude-weighted orientation histogram per cell, concatenated
        hists = [np.bincount(b.ravel(), m.ravel(), bin_n) for b, m in zip(bin_cells, mag_cells)]
        hist = np.hstack(hists)

        # transform to Hellinger kernel: L1-normalize, take sqrt, then L2-normalize
        eps = 1e-7
        hist /= hist.sum() + eps
        hist = np.sqrt(hist)
        hist /= norm(hist) + eps

        return hist
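
    As a quick usage sketch (assuming the imports and the hog() function above, and a grayscale image whose sides are multiples of the 8-pixel cell size; the file name is just a placeholder):

    img = cv2.imread("digits_cell.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
    img = cv2.resize(img, (64, 64))   # 8 x 8 cells of 8 x 8 pixels each
    features = hog(img)
    print(features.shape)             # (1024,) = 64 cells * 16 bins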
    

    Regards.

  • 2020-11-30 19:39

    If you want fast Python code for HOG features, I've ported the code to Cython: https://github.com/cvondrick/pyvision/blob/master/vision/features.pyx

  • 2020-11-30 19:42

    1. Get built-in documentation: The following command in your Python console will show you the structure of the HOGDescriptor class:

     import cv2
     help(cv2.HOGDescriptor())
    

    2. Example code: Here is a snippet that initializes a cv2.HOGDescriptor with non-default parameters (the terms used here are standard and are well defined in the OpenCV documentation):

    import cv2
    image = cv2.imread("test.jpg",0)
    winSize = (64,64)
    blockSize = (16,16)
    blockStride = (8,8)
    cellSize = (8,8)
    nbins = 9
    derivAperture = 1
    winSigma = 4.
    histogramNormType = 0
    L2HysThreshold = 0.2
    gammaCorrection = 0
    nlevels = 64
    hog = cv2.HOGDescriptor(winSize,blockSize,blockStride,cellSize,nbins,derivAperture,winSigma,
                            histogramNormType,L2HysThreshold,gammaCorrection,nlevels)
    #compute(img[, winStride[, padding[, locations]]]) -> descriptors
    winStride = (8,8)
    padding = (8,8)
    locations = ((10,20),)
    hist = hog.compute(image,winStride,padding,locations)
    

    3. Reasoning: With this geometry the 64×64 window contains 8×8 cells and 7×7 overlapping blocks of 2×2 cells, so the resulting HOG descriptor has dimension 9 orientations × (4 corner cells counted in 1 block + 6×4 edge cells counted in 2 blocks + 6×6 inner cells counted in 4 blocks) = 9 × 196 = 1764, since only one location was passed to hog.compute().
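
    As a quick sanity check (a sketch reusing the variables from the snippet above; getDescriptorSize() reports the per-window length directly):

    cells_per_block = (blockSize[0] // cellSize[0]) * (blockSize[1] // cellSize[1])  # 2 * 2 = 4
    blocks_per_window = (((winSize[0] - blockSize[0]) // blockStride[0]) + 1) * \
                        (((winSize[1] - blockSize[1]) // blockStride[1]) + 1)        # 7 * 7 = 49
    print(nbins * cells_per_block * blocks_per_window)  # 1764
    print(hog.getDescriptorSize())                      # 1764, as reported by OpenCV
    print(hist.size)                                    # 1764 values for the single location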

    4. One more way to initialize is from an XML file that contains all the parameter values:

    hog = cv2.HOGDescriptor("hog.xml")
    

    To get such an XML file, one can do the following:

    hog = cv2.HOGDescriptor()
    hog.save("hog.xml")
    

    and then edit the respective parameter values in the XML file; a minimal round-trip sketch follows.
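
    A save/edit/load round trip might look like this (a sketch; hog.xml is just an example file name, and load() returns False if the file cannot be parsed):

    hog = cv2.HOGDescriptor()
    hog.save("hog.xml")                       # write all current parameters to disk

    # ... edit hog.xml by hand ...

    hog_from_file = cv2.HOGDescriptor()
    ok = hog_from_file.load("hog.xml")        # read the edited parameters back
    print(ok, hog_from_file.getDescriptorSize())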

  • 2020-11-30 19:46

    In Python OpenCV you can compute HOG like this:

     import cv2
     hog = cv2.HOGDescriptor()
     im = cv2.imread(sample)  # sample: path to your image file
     h = hog.compute(im)
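
    The default constructor uses the pedestrian-detection geometry (winSize 64×128, blockSize 16×16, blockStride 8×8, cellSize 8×8, 9 bins), so each detection window yields 105 blocks × 4 cells × 9 bins = 3780 values, and for an image larger than the window h concatenates one such vector per window position. A quick check (a sketch; the image path is a placeholder):

     import cv2

     hog = cv2.HOGDescriptor()                 # default 64x128 pedestrian window
     print(hog.getDescriptorSize())            # 3780 = 105 blocks * 4 cells * 9 bins

     im = cv2.imread("cat.jpg")                # placeholder path
     h = hog.compute(im)
     print(h.size // hog.getDescriptorSize())  # number of windows evaluated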
    
  • 2020-11-30 19:48

    I would not recommend using HOG features for detecting objects other than pedestrians. In the original HOG paper, Dalal and Triggs specifically mention that their detector is built around pedestrian detection: it allows significant degrees of freedom in the limbs while relying on strong structural cues from the overall human body shape.

    Instead, try looking at OpenCV's Haar cascade detection (HaarDetectObjects in the legacy API). You can train your own cascades with OpenCV's cascade training tools; a minimal detection sketch follows.
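
    As an illustration of the cascade API (a sketch using the modern cv2.CascadeClassifier interface, with the pretrained frontal-face cascade bundled in the opencv-python package as an example):

    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    img = cv2.imread("test.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder path
    objects = cascade.detectMultiScale(img, scaleFactor=1.1, minNeighbors=5)

    for (x, y, w, h) in objects:
        print("object at", (x, y), "size", (w, h))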

  • 2020-11-30 19:54

    Here is a solution that uses only OpenCV:

    import numpy as np
    import cv2
    import matplotlib.pyplot as plt
    
    img = cv2.cvtColor(cv2.imread("/home/me/Downloads/cat.jpg"),
                       cv2.COLOR_BGR2GRAY)
    
    cell_size = (8, 8)  # h x w in pixels
    block_size = (2, 2)  # h x w in cells
    nbins = 9  # number of orientation bins
    
    # winSize is the size of the image cropped to a multiple of the cell size
    hog = cv2.HOGDescriptor(_winSize=(img.shape[1] // cell_size[1] * cell_size[1],
                                      img.shape[0] // cell_size[0] * cell_size[0]),
                            _blockSize=(block_size[1] * cell_size[1],
                                        block_size[0] * cell_size[0]),
                            _blockStride=(cell_size[1], cell_size[0]),
                            _cellSize=(cell_size[1], cell_size[0]),
                            _nbins=nbins)
    
    n_cells = (img.shape[0] // cell_size[0], img.shape[1] // cell_size[1])
    hog_feats = hog.compute(img)\
                   .reshape(n_cells[1] - block_size[1] + 1,
                            n_cells[0] - block_size[0] + 1,
                            block_size[0], block_size[1], nbins) \
                   .transpose((1, 0, 2, 3, 4))  # index blocks by rows first
    # hog_feats now contains the gradient amplitudes for each direction,
    # for each cell of each block. Indexing is by rows then columns.
    
    gradients = np.zeros((n_cells[0], n_cells[1], nbins))
    
    # count cells (border cells appear less often across overlapping groups)
    cell_count = np.full((n_cells[0], n_cells[1], 1), 0, dtype=int)
    
    for off_y in range(block_size[0]):
        for off_x in range(block_size[1]):
            gradients[off_y:n_cells[0] - block_size[0] + off_y + 1,
                      off_x:n_cells[1] - block_size[1] + off_x + 1] += \
                hog_feats[:, :, off_y, off_x, :]
            cell_count[off_y:n_cells[0] - block_size[0] + off_y + 1,
                       off_x:n_cells[1] - block_size[1] + off_x + 1] += 1
    
    # Average gradients
    gradients /= cell_count
    
    # Preview
    plt.figure()
    plt.imshow(img, cmap='gray')
    plt.show()
    
    bin = 5  # bin index to plot; with unsigned gradients (OpenCV default) each bin spans 180 / nbins degrees
    plt.pcolor(gradients[:, :, bin])
    plt.gca().invert_yaxis()
    plt.gca().set_aspect('equal', adjustable='box')
    plt.colorbar()
    plt.show()
    

    I used the HOG descriptor computation and visualization write-up to understand the data layout, and vectorized the loops over the blocks.
