Semantic Image Segmentation with colored masks
Posted: 2020-01-31 02:26:04

【Question】:

So I have a set of pictures with colored masks, e.g. blue means chair, red means lamp, and so on.
Since I'm new to all of this, I tried to do it with a U-Net model, and I've preprocessed the images with Keras like this:
import os
import random

import cv2
import numpy as np

def data_generator(img_path, mask_path, batch_size):
    c = 0
    # Pair image and mask filenames and shuffle them together, so the
    # image/mask pairing survives the shuffle (shuffling only one list
    # would silently mismatch images and masks).
    pairs = list(zip(sorted(os.listdir(img_path)), sorted(os.listdir(mask_path))))
    random.shuffle(pairs)
    while True:
        img = np.zeros((batch_size, 256, 256, 3)).astype("float")
        mask = np.zeros((batch_size, 256, 256, 1)).astype("float")
        for i in range(c, c + batch_size):
            img_name, mask_name = pairs[i]
            train_img = cv2.imread(img_path + "/" + img_name) / 255.
            train_img = cv2.resize(train_img, (256, 256))
            img[i - c] = train_img
            train_mask = cv2.imread(mask_path + "/" + mask_name, cv2.IMREAD_GRAYSCALE) / 255.
            train_mask = cv2.resize(train_mask, (256, 256))
            train_mask = train_mask.reshape(256, 256, 1)
            mask[i - c] = train_mask
        c += batch_size
        # Restart and reshuffle once the next batch would run past the end.
        if c + batch_size >= len(pairs):
            c = 0
            random.shuffle(pairs)
        yield img, mask
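For reference, a generator like this would typically be wired into training roughly as follows (a minimal sketch, assuming the unet() model defined below; the directory paths, batch size, and epoch count are placeholders, not from the original post):

model = unet()
train_gen = data_generator("data/train/images", "data/train/masks", batch_size=8)
steps = len(os.listdir("data/train/images")) // 8  # batches per epoch
model.fit_generator(train_gen, steps_per_epoch=steps, epochs=20)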
Now, looking at it more closely, I don't think this approach works for my masks. I tried processing the masks as RGB color images, but my model wouldn't train that way.
The model:
from keras.models import Model
from keras.layers import Input, Conv2D, MaxPooling2D, Dropout, UpSampling2D, concatenate
from keras.optimizers import Adam

def unet(pretrained_weights=None, input_size=(256, 256, 3)):
    inputs = Input(input_size)
    # Contracting path: repeated (conv, conv, max-pool) blocks.
    conv1 = Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(inputs)
    conv1 = Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv1)
    pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
    conv2 = Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(pool1)
    conv2 = Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv2)
    pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
    conv3 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(pool2)
    conv3 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv3)
    pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)
    conv4 = Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal')(pool3)
    conv4 = Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv4)
    drop4 = Dropout(0.5)(conv4)
    pool4 = MaxPooling2D(pool_size=(2, 2))(drop4)
    # Bottleneck.
    conv5 = Conv2D(1024, 3, activation='relu', padding='same', kernel_initializer='he_normal')(pool4)
    conv5 = Conv2D(1024, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv5)
    drop5 = Dropout(0.5)(conv5)
    # Expanding path: upsample, concatenate the skip connection, then convolve.
    up6 = Conv2D(512, 2, activation='relu', padding='same', kernel_initializer='he_normal')(UpSampling2D(size=(2, 2))(drop5))
    merge6 = concatenate([drop4, up6], axis=3)
    conv6 = Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal')(merge6)
    conv6 = Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv6)
    up7 = Conv2D(256, 2, activation='relu', padding='same', kernel_initializer='he_normal')(UpSampling2D(size=(2, 2))(conv6))
    merge7 = concatenate([conv3, up7], axis=3)
    conv7 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(merge7)
    conv7 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv7)
    up8 = Conv2D(128, 2, activation='relu', padding='same', kernel_initializer='he_normal')(UpSampling2D(size=(2, 2))(conv7))
    merge8 = concatenate([conv2, up8], axis=3)
    conv8 = Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(merge8)
    conv8 = Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv8)
    up9 = Conv2D(64, 2, activation='relu', padding='same', kernel_initializer='he_normal')(UpSampling2D(size=(2, 2))(conv8))
    merge9 = concatenate([conv1, up9], axis=3)
    conv9 = Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(merge9)
    conv9 = Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv9)
    conv9 = Conv2D(2, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv9)
    # Single-channel sigmoid output, i.e. binary segmentation.
    conv10 = Conv2D(1, 1, activation='sigmoid')(conv9)
    model = Model(inputs=inputs, outputs=conv10)
    model.compile(optimizer=Adam(lr=1e-4), loss='binary_crossentropy', metrics=['accuracy'])
    # model.summary()
    if pretrained_weights:
        model.load_weights(pretrained_weights)
    return model
So my question is: how do I train the model with colored image masks?
Edit: here is an example of the data I have.

A given image to train the model:

Its mask:

And the percentage of each such mask:
"water": 4.2, "building": 33.5, "road": 0.0
【Comments】:

- Could you clarify your question? What are you trying to achieve? Do you want your neural network to find objects in an image and label them?
- Not to label them, actually, but to tell the percentage of each item in the picture.
- Percentage of the image area, or the number of objects?
- The percentage of the image area, because I have to build a JSON file from that information.
- OK, now it's clear what you want to do. Thanks for updating the question! This topic looks too deep for me to recommend a specific model. Try looking for use cases on Google Colab and on the Kaggle site. In general, search for the topic "satellite imagery feature detection".
【Answer 1】:
In a semantic segmentation problem, every pixel belongs to one of the target output classes/labels. So your output layer conv10 should use the total number of classes (n_classes) as its number of kernels, with softmax as the activation function, like this:
conv10 = Conv2D(n_classes, 1, activation = 'softmax')(conv9)
In that case, the loss should also be changed to categorical_crossentropy when compiling the U-Net model:
model.compile(optimizer = Adam(lr = 1e-4), loss = 'categorical_crossentropy', metrics = ['accuracy'])
Also, you should not normalize your ground-truth label/mask images; instead, you can encode them as follows:
# One-hot encode the mask: channel c is 1 wherever the pixel belongs to
# class c. Here img is assumed to hold an integer class ID at every pixel.
train_mask = np.zeros((height, width, n_classes))
for c in range(n_classes):
    train_mask[:, :, c] = (img == c).astype(int)
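Note that this encoding assumes the mask already stores integer class IDs (0 .. n_classes-1) per pixel, while the masks in the question are color-coded, so a color-to-ID mapping step is needed first. A minimal sketch, assuming a hypothetical COLOR_TO_CLASS table; the BGR values below are placeholders that would have to match the actual mask colors:

import numpy as np

# Hypothetical color table -- replace with the real colors in your masks.
# cv2.imread returns channels in BGR order, so tuples are (B, G, R).
COLOR_TO_CLASS = {
    (255, 0, 0): 0,  # blue  -> e.g. "water"
    (0, 0, 255): 1,  # red   -> e.g. "building"
    (0, 255, 0): 2,  # green -> e.g. "road"
}

def mask_to_class_ids(mask_bgr):
    # Convert a color-coded mask of shape (H, W, 3) into an integer
    # class-ID map of shape (H, W).
    class_ids = np.zeros(mask_bgr.shape[:2], dtype=np.uint8)
    for color, cls in COLOR_TO_CLASS.items():
        class_ids[np.all(mask_bgr == color, axis=-1)] = cls
    return class_ids

If such an ID mask needs resizing, use nearest-neighbor interpolation (cv2.resize(..., interpolation=cv2.INTER_NEAREST)) so that interpolation does not invent in-between class IDs.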
[I'm assuming you have more than two ground-truth output classes/labels, since you mentioned that your masks contain different colors for water, roads, buildings, etc.; if you have only two classes, then your model configuration is fine, except for the train_mask handling.]
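Finally, since the comments above establish that the real goal is the per-class area percentage for a JSON file, here is a sketch of how those percentages could be computed from a predicted softmax map; the class-name list is a placeholder and must follow the same ordering as the class IDs:

import json
import numpy as np

CLASS_NAMES = ["water", "building", "road"]  # placeholder ordering

def class_percentages(pred):
    # pred: softmax output of shape (H, W, n_classes).
    labels = np.argmax(pred, axis=-1)  # per-pixel class IDs
    return {name: round(100.0 * float(np.mean(labels == i)), 1)
            for i, name in enumerate(CLASS_NAMES)}

# e.g.:
# pred = model.predict(img[np.newaxis, ...])[0]
# print(json.dumps(class_percentages(pred)))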