This blog is for learning purposes only. If anything here is wrong, corrections are sincerely welcome. Please credit the source when reposting.
I've been studying machine learning for a while now, and over the past couple of days I finally took the first real step: implementing HOG+SVM image classification. The full code can be downloaded from GitHub: https://github.com/subicWang/HOG-SVM-classifer. Everyone says HOG+SVM is a great match for pedestrian detection; I can't explain exactly why, but I figured such a well-matched pair should also do reasonably well on image classification, and it turns out it does: it's fast, and the accuracy is decent. The dataset I used is CIFAR: http://www.cs.toronto.edu/~kriz/cifar.html. I won't walk through the HOG feature extraction process itself here, since plenty of other blog posts explain it better than I could. So let me just paste my code for anyone who needs a reference.
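Before the feature-extraction code, here is a minimal sketch of the SVM classification stage, assuming scikit-learn is available. The HOG feature vectors are stood in for by a tiny synthetic dataset (the 288-dimension size, class counts, and `C` value are illustrative assumptions, not the repo's actual settings; see the GitHub link above for the real pipeline):

```python
import numpy as np
from sklearn.svm import LinearSVC

# Fake "HOG features": two well-separated Gaussian clusters standing in
# for the positive and negative classes (synthetic, for illustration only).
rng = np.random.RandomState(0)
X_pos = rng.normal(loc=1.0, size=(20, 288))   # pretend class-1 HOG vectors
X_neg = rng.normal(loc=-1.0, size=(20, 288))  # pretend class-0 HOG vectors
X = np.vstack([X_pos, X_neg])
y = np.array([1] * 20 + [0] * 20)

# A linear SVM is the usual partner for HOG features.
clf = LinearSVC(C=1.0)
clf.fit(X, y)
print(clf.score(X, y))
```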
import math
import numpy as np

def getHOGfeat(image, stride=8, orientations=8, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2)):
    cx, cy = pixels_per_cell
    bx, by = cells_per_block
    sx, sy = image.shape
    n_cellsx = sx // cx  # number of cells in x
    n_cellsy = sy // cy  # number of cells in y
    n_blocksx = n_cellsx - bx + 1
    n_blocksy = n_cellsy - by + 1
    gx = np.zeros((sx, sy), dtype=np.double)
    gy = np.zeros((sx, sy), dtype=np.double)
    eps = 1e-5
    # grad[..., 0] holds the orientation in degrees (0-360),
    # grad[..., 1] holds the gradient magnitude.
    grad = np.zeros((sx, sy, 2), dtype=np.double)
    for i in range(1, sx - 1):
        for j in range(1, sy - 1):
            # Simple central differences: only the direct horizontal and
            # vertical neighbours contribute to each pixel's gradient.
            gx[i, j] = image[i, j - 1] - image[i, j + 1]
            gy[i, j] = image[i + 1, j] - image[i - 1, j]
            grad[i, j, 0] = np.arctan(gy[i, j] / (gx[i, j] + eps)) * 180 / math.pi
            if gx[i, j] < 0:
                grad[i, j, 0] += 180
            grad[i, j, 0] = (grad[i, j, 0] + 360) % 360
            grad[i, j, 1] = np.sqrt(gy[i, j] ** 2 + gx[i, j] ** 2)
    normalised_blocks = np.zeros((n_blocksy, n_blocksx, by * bx * orientations))
    for y in range(n_blocksy):
        for x in range(n_blocksx):
            # One block spans (by * cy) x (bx * cx) pixels.
            block = grad[y * stride:y * stride + by * cy,
                         x * stride:x * stride + bx * cx]
            hist_block = np.zeros(by * bx * orientations, dtype=np.double)
            for k in range(by):
                for m in range(bx):
                    cell = block[k * cy:(k + 1) * cy, m * cx:(m + 1) * cx]
                    hist_cell = np.zeros(orientations, dtype=np.double)
                    for i in range(cy):
                        for j in range(cx):
                            # Vote into one of `orientations` bins over 360 degrees.
                            n = int(cell[i, j, 0] / (360 / orientations))
                            hist_cell[n] += cell[i, j, 1]
                    hist_block[(k * bx + m) * orientations:
                               (k * bx + m + 1) * orientations] = hist_cell
            # Normalise the block histogram.
            normalised_blocks[y, x, :] = hist_block / np.sqrt(hist_block.sum() ** 2 + eps)
    return normalised_blocks.ravel()
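For the 32x32 CIFAR images used here, with the default parameters (8x8 cells, 2x2 blocks, 8 orientations), the feature vector the function returns should have length 288. A quick check of that arithmetic:

```python
# HOG feature length for a 32x32 image with the default parameters above.
cx, cy = 8, 8        # pixels per cell
bx, by = 2, 2        # cells per block
orientations = 8
sx = sy = 32         # CIFAR image size

n_blocksx = sx // cx - bx + 1   # 4 cells -> 3 block positions per axis
n_blocksy = sy // cy - by + 1
feat_len = n_blocksx * n_blocksy * bx * by * orientations
print(feat_len)  # 288
```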
Anyone familiar with the HOG extraction process should be able to follow the code, so I won't annotate it line by line. Of course, a simple implementation like this doesn't fully satisfy me. I've never understood why, in these feature-extraction algorithms, a pixel's gradient is determined only by its horizontal and vertical neighbours. Don't the other surrounding pixels affect it at all? I ran some experiments on this question, which I'll share in the next post.
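For context on that question: the code above uses the plain central-difference kernel [1, 0, -1], while one standard kernel that does incorporate the diagonal neighbours is the 3x3 Sobel operator. A minimal illustration of the difference (the 4x4 patch and its values are made up for demonstration):

```python
import numpy as np

# Toy image patch (arbitrary values, for illustration only).
img = np.array([[0, 0, 0, 0],
                [0, 1, 2, 0],
                [0, 3, 4, 0],
                [0, 0, 0, 0]], dtype=np.double)

def grad_x_central(im, i, j):
    # Central difference: only the two horizontal neighbours matter.
    return im[i, j + 1] - im[i, j - 1]

def grad_x_sobel(im, i, j):
    # Sobel: diagonal neighbours contribute with weight 1,
    # the direct row neighbours with weight 2.
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=np.double)
    patch = im[i - 1:i + 2, j - 1:j + 2]
    return float((patch * kx).sum())

print(grad_x_central(img, 1, 1))  # 2.0 (ignores the diagonal 4)
print(grad_x_sobel(img, 1, 1))    # 8.0 (the diagonals contribute too)
```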
Source: oschina
Link: https://my.oschina.net/u/4264342/blog/3220694