Question
I'm attempting to train a Unet to provide each pixel of a 256x256 image with a label, similar to the tutorial given here. In the example, the predictions of the Unet are a (128x128x3) output where the 3 denotes one of the classifications assigned to each pixel. In my case, I need a (256x256x10) output with 10 different classifications (essentially a one-hot encoded array for each pixel in the image).
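For illustration, if the mask were already available as a (256, 256) array of integer class ids 0-9 (a hypothetical intermediate, not how my masks are stored), the target format is exactly what tf.one_hot produces:

import tensorflow as tf

# Hypothetical integer label mask: one class id (0-9) per pixel.
labels = tf.random.uniform((256, 256), maxval=10, dtype=tf.int32)
one_hot_mask = tf.one_hot(labels, depth=10)  # shape (256, 256, 10)
print(one_hot_mask.shape)                    # (256, 256, 10)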
I can load the images, but I'm struggling to convert each image's corresponding segmentation mask to the correct format. I have created Datasets by defining a map function called process_path, which takes a saved numpy representation of the mask and creates a tensor of dimension (256, 256, 10), but I get a ValueError when I call model.fit, telling me that it cannot call as_list because the shape of the Tensor cannot be found:
# --------------------------------------------------------------------------------------
# DECODE A NUMPY .NPY FILE INTO THE REQUIRED FORMAT FOR TRAINING
# --------------------------------------------------------------------------------------
def decode_npy(npy):
    filename = npy.numpy()
    data = np.load(filename)
    data = kerasUtils.to_categorical(data, 10)
    return data
# --------------------------------------------------------------------------------------
# DECODE AN IMAGE (PNG) FILE INTO THE REQUIRED FORMAT FOR TRAINING
# --------------------------------------------------------------------------------------
def decode_img(img):
    img = tf.image.decode_png(img, channels=3)
    return tf.image.convert_image_dtype(img, tf.float32)
# --------------------------------------------------------------------------------------
# PROCESS A FILE PATH FOR THE DATASET
# input - path to an image file
# output - an input image and output mask
# --------------------------------------------------------------------------------------
def process_path(filePath):
    parts = tf.strings.split(filePath, '/')
    fileName = parts[-1]
    parts = tf.strings.split(fileName, '.')
    prefix = tf.convert_to_tensor(maskDir, dtype=tf.string)
    suffix = tf.convert_to_tensor("-mask.png", dtype=tf.string)
    maskFileName = tf.strings.join((parts[-2], suffix))
    maskPath = tf.strings.join((prefix, maskFileName), separator='/')
    # load the raw data from the file as a string
    img = tf.io.read_file(filePath)
    img = decode_img(img)
    mask = tf.py_function(decode_npy, [maskPath], tf.float32)
    return img, mask
trainDataSet = allDataSet.take(trainSize)
trainDataSet = trainDataSet.map(process_path).batch(4)
validDataSet = allDataSet.skip(trainSize)
validDataSet = validDataSet.map(process_path).batch(4)
How can I take each image's corresponding (256, 256, 3) segmentation mask (stored as a png) and convert it to a (256, 256, 10) tensor, where the i-th channel represents the pixel's value as in the tutorial? Can anyone explain how this is achieved, either in the process_path function or wherever it would be most efficient to perform the conversion?
Update:
Here is an example of a segmentation mask. Every mask contains the same 10 colours shown.
Answer 1:
import numpy as np
from cv2 import imread

im = imread('hfoa7.png', 0)  # read as grayscale to get 10 unique values
n_classes = 10
one_hot = np.zeros((im.shape[0], im.shape[1], n_classes))
for i, unique_value in enumerate(np.unique(im)):
    one_hot[:, :, i][im == unique_value] = 1
hfoa7 is the name of the image you posted. This code snippet creates a one-hot matrix from the image.
You will want to insert this code into decode_npy(). However, since you posted a png, the code above won't work on a npy file; you could pass in the names of the pngs instead of the npys. Don't worry about using kerasUtils.to_categorical; the snippet above already produces categorical (one-hot) labels.
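A minimal sketch of how that could be wired into the existing pipeline, assuming maskPath is changed to point at the -mask.png files rather than .npy files (the function name decode_mask_png, the dtype choices, and the set_shape call are illustrative additions, not part of the snippet above):

import numpy as np
from cv2 import imread

N_CLASSES = 10

def decode_mask_png(path_tensor):
    # path_tensor is the tf.string maskPath handed over by tf.py_function
    filename = path_tensor.numpy().decode('utf-8')
    im = imread(filename, 0)  # read the mask as grayscale to get 10 unique values
    one_hot = np.zeros((im.shape[0], im.shape[1], N_CLASSES), dtype=np.float32)
    for i, unique_value in enumerate(np.unique(im)):
        one_hot[:, :, i][im == unique_value] = 1  # channel i flags pixels of class i
    return one_hot

# inside process_path:
# mask = tf.py_function(decode_mask_png, [maskPath], tf.float32)
# mask.set_shape((256, 256, 10))  # py_function does not propagate a static shape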
Answer 2:
You can do this in pure TensorFlow; see my blog post: https://www.spacefish.biz/2020/11/rgb-segmentation-masks-to-classes-in-tensorflow/
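For reference, here is one way such a pure-TensorFlow conversion can look (a sketch in the spirit of the post, not its exact code; the RGB values in PALETTE are placeholders for the 10 colours actually used in the masks):

import tensorflow as tf

# Placeholder palette: one RGB colour per class, in class order (assumed values).
PALETTE = tf.constant([
    [0, 0, 0], [255, 0, 0], [0, 255, 0], [0, 0, 255], [255, 255, 0],
    [255, 0, 255], [0, 255, 255], [255, 255, 255], [128, 128, 128], [128, 0, 0]
], dtype=tf.uint8)

def rgb_mask_to_one_hot(mask_path):
    # Decode the RGB mask png: shape (256, 256, 3), dtype uint8.
    mask = tf.image.decode_png(tf.io.read_file(mask_path), channels=3)
    # Compare every pixel against every palette colour:
    # (256, 256, 1, 3) vs (10, 3) broadcasts to (256, 256, 10, 3).
    matches = tf.equal(mask[:, :, tf.newaxis, :], PALETTE)
    # A pixel belongs to class i only if all three channels match colour i.
    return tf.cast(tf.reduce_all(matches, axis=-1), tf.float32)  # (256, 256, 10)

Because this uses only TensorFlow ops, it can run inside Dataset.map without tf.py_function, and the output keeps a static (256, 256, 10) shape.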
Source: https://stackoverflow.com/questions/58609730/how-to-create-a-one-hot-encoded-matrix-from-a-png-for-per-pixel-classification-i