2D convolution with padding=same via Toeplitz matrix multiplication


Question


I'm trying to construct the block Toeplitz matrix for a 2D convolution with padding="same" (as in Keras). I have read and searched a lot of material, but I haven't found an implementation of it.

Some references I have consulted (I'm also reading papers, but none of them discuss conv2d with "same" padding, only "full" or "valid"):

McLawrence's answer. He says literally: "This is for padding = 0 but can easily be adjusted by changing h_blocks and w_blocks and W_conv[i+j, :, j, :]." But I don't know how to implement these changes.

Warren Weckesser's answer: explains what a block matrix is.

Salvador Dali's answer: explains how to build the block Toeplitz matrix for padding="valid", and in the same thread Ali Salehi explains the method for padding="full". The 1D building block behind all of these is sketched below.
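For context, the 1D version of the idea is simple: "same"-padded cross-correlation (which is what Keras calls convolution) is just a banded Toeplitz matrix. A minimal sketch with a length-3 kernel, checked against np.correlate:

import numpy as np
from scipy import linalg

x = np.random.randn(6)
k = np.random.randn(3)   # length-3 kernel

# banded Toeplitz matrix T so that T @ x is the "same"-padded cross-correlation of x with k
first_col = np.r_[k[1], k[0], np.zeros(len(x) - 2)]   # center tap on the diagonal, k[0] below it
first_row = np.r_[k[1], k[2], np.zeros(len(x) - 2)]   # k[2] above the diagonal
T = linalg.toeplitz(c=first_col, r=first_row)

print(np.allclose(T @ x, np.correlate(x, k, mode='same')))  # expected: True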

Modifying the code from McLawrence's answer, I achieved the same result as Keras conv2d with padding="same", but only for a 2x2 kernel and a square input matrix. The code is:

import numpy as np
from scipy import linalg

def conv2d_same_toeplitz_2x2(kernel, input):
    k_h, k_w = kernel.shape
    i_h, i_w = input.shape
    # for padding="same" the output has the same shape as the input
    o_h, o_w = input.shape

    # construct a 1d "same"-padded conv toeplitz matrix for each row of the kernel
    toeplitz = []
    for r in range(k_h):
        toeplitz.append(linalg.toeplitz(c=(kernel[r, 0], *np.zeros(i_w - 1)),
                                        r=(*kernel[r], *np.zeros(i_w - k_w))))

    # construct a toeplitz matrix of toeplitz matrices
    h_blocks, w_blocks = input.shape
    h_block, w_block = toeplitz[0].shape

    W_conv = np.zeros((h_blocks, h_block, w_blocks, w_block))

    for i, B in enumerate(toeplitz):
        for j in range(o_h):
            # skip the block that would fall outside the (zero-padded) input
            if i == len(toeplitz) - 1 and j == o_h - 1:
                continue
            W_conv[j, :, i + j, :] = B

    W_conv.shape = (h_blocks * h_block, w_blocks * w_block)
    return W_conv
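A quick way to check this (a minimal sketch; conv2d_same_toeplitz_2x2 is just the wrapper name used for the code above) is to compare against SciPy, padding one row/column of zeros on the bottom/right, which is what Keras does for an even kernel with padding="same":

import numpy as np
from scipy import signal

x = np.random.randn(5, 5)   # square input, as required by the code above
k = np.random.randn(2, 2)

W = conv2d_same_toeplitz_2x2(k, x)
out = W.dot(x.ravel()).reshape(x.shape)

# Keras pads even kernels asymmetrically (extra zeros on the bottom/right) and cross-correlates
ref = signal.correlate2d(np.pad(x, ((0, 1), (0, 1))), k, mode='valid')
print(np.allclose(out, ref))  # expected: True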

Is there any paper or reference that might be helpful?


Answer 1:


Hope this helps. This works for "same" padding (note that the construction below is written for a 3x3 kernel and a square input):

import numpy as np
from scipy import linalg

def toeplitz_1_ch(kernel, input_size):
    # shapes
    k_h, k_w = kernel.shape
    i_h, i_w = input_size
    #o_h, o_w = i_h-k_h+1, i_w-k_w+1
    o_h, o_w = i_h, i_w
    # construct 1d conv toeplitz matrices for the kernel, with "same" padding
    # (the input is assumed to be square, n x n)
    n = i_h

    # 1D "same"-padded Toeplitz matrix for the middle kernel row:
    # K[m, m-1] = kernel[1,0], K[m, m] = kernel[1,1], K[m, m+1] = kernel[1,2]
    K1 = np.zeros((n,))
    K1[:2] = (kernel[1, 1], kernel[1, 2])
    K2 = np.zeros((n,))
    K2[:2] = (kernel[1, 1], kernel[1, 0])

    K = linalg.toeplitz(c=K2, r=K1)
    KK = np.identity(n)  # the middle kernel row acts on the same input row as the output row

    # same construction for the bottom kernel row ...
    L1 = np.zeros((n,))
    L1[:2] = (kernel[2, 1], kernel[2, 2])
    L2 = np.zeros((n,))
    L2[:2] = (kernel[2, 1], kernel[2, 0])

    t = np.zeros(n)
    s = np.zeros(n)
    s[1] = 1
    L = linalg.toeplitz(c=L2, r=L1)
    LL = linalg.toeplitz(r=s, c=t)  # superdiagonal shift: the bottom kernel row acts on the input row below

    A = np.kron(LL, L) + np.kron(KK, K)

    # ... and for the top kernel row
    L1 = np.zeros((n,))
    L1[:2] = (kernel[0, 1], kernel[0, 2])
    L2 = np.zeros((n,))
    L2[:2] = (kernel[0, 1], kernel[0, 0])

    L = linalg.toeplitz(c=L2, r=L1)
    LL = linalg.toeplitz(c=s, r=t)  # subdiagonal shift: the top kernel row acts on the input row above
    A = A + np.kron(LL, L)
    return A

def toeplitz_mult_ch(kernel, input_size):
    """Compute toeplitz matrix for 2d conv with multiple in and out channels.
    Args:
        kernel: shape=(n_out, n_in, H_k, W_k)
        input_size: (n_in, H_i, W_i)"""

    kernel_size = kernel.shape
    output_size = (kernel_size[0], input_size[1], input_size[2])
    T = np.zeros((output_size[0], int(np.prod(output_size[1:])), input_size[0], int(np.prod(input_size[1:]))))

    for i,ks in enumerate(kernel):  # loop over output channel
        for j,k in enumerate(ks):  # loop over input channel
            T_k = toeplitz_1_ch(k, input_size[1:])
            T[i, :, j, :] = T_k
    T.shape = (np.prod(output_size), np.prod(input_size))

    return T

import torch
import torch.nn.functional as F
k = np.random.randn(4*3*3*3).reshape((4, 3, 3, 3))  # 4 output channels, 3 input channels, 3x3 kernels
i = np.random.randn(3, 9, 9)                        # 3-channel 9x9 input
T = toeplitz_mult_ch(k, i.shape)
out = T.dot(i.flatten()).reshape((1, 4, 9, 9))

# check correctness of convolution via toeplitz matrix
print(np.sum((out - F.conv2d(torch.tensor(i).view(1,3,9,9), torch.tensor(k), padding = 1).numpy())**2))
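For the single-channel building block alone, you can also check against SciPy without PyTorch. A minimal sketch (correlate2d is used because conv2d in Keras/PyTorch is cross-correlation, and its default boundary is zero padding):

from scipy import signal

x = np.random.randn(7, 7)                     # square single-channel input
k3 = np.random.randn(3, 3)
A = toeplitz_1_ch(k3, x.shape)
out1 = A.dot(x.ravel()).reshape(x.shape)
ref = signal.correlate2d(x, k3, mode='same')  # zero-padded "same" cross-correlation
print(np.allclose(out1, ref))                 # expected: True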


Source: https://stackoverflow.com/questions/60643786/2d-convolution-with-padding-same-via-toeplitz-matrix-multiplication
