Problem with training U-Net while using ImageDataGenerator on multiclass segmentation
Posted: 2021-09-27 01:14:01

The task I'm working on is multiclass segmentation (0-3 classes per image). I have a working U-Net model that trains well on a small dataset. I then augmented the dataset and now have almost 15k 512x512 grayscale images. Naturally, I ran into the problem of not having enough hardware resources (RAM, GPU), so I decided to switch to Google Colab and use ImageDataGenerator. I ran into the following error and so far can't get past it:
InvalidArgumentError: Conv2DSlowBackpropInput: Size of out_backprop doesn't match computed: actual = 16, computed = 32 spatial_dim: 2 input: 64 filter: 2 output: 16 stride: 2 dilation: 1
[[node model/conv2d_transpose_1/conv2d_transpose (defined at /usr/local/lib/python3.7/dist-packages/keras/backend.py:5360) ]] [Op:__inference_train_function_3151]
The only explanation I can come up with is that I'm not using the generators correctly. I structured the data as follows:
path_to_dataset
│
└───images_dir
│ │
│ └─── images_subdir
│ │ img1.png
│ │ img2.png
│ │ ...
│
└───masks_dir
│ │
│ └─── masks_subdir
│ │ img1.png
│ │ img2.png
│ │ ...
The subdirectories exist only so that ImageDataGenerator works, since flow_from_directory expects class subfolders.
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping, TensorBoard

data_gen_args = dict(rescale=1./255)
image_datagen = ImageDataGenerator(**data_gen_args)
mask_datagen = ImageDataGenerator(**data_gen_args)
# image_datagen.fit(images)
# mask_datagen.fit(masks)
# Provide the same seed and keyword arguments to the fit and flow methods
seed = 1
image_generator = image_datagen.flow_from_directory(
'/content/drive/MyDrive/DP/preprocess_images/images/final_ds/orig_folder/',
batch_size=16,
class_mode=None,
# color_mode='grayscale',
seed=seed)
mask_generator = mask_datagen.flow_from_directory(
'/content/drive/MyDrive/DP/preprocess_images/images/final_ds/seg_greyscale_folder/',
batch_size=16,
class_mode=None,
# color_mode='grayscale',
seed=seed)
# combine generators into one which yields image and masks
train_generator = zip(image_generator, mask_generator)
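One detail that may matter here (an observation about the defaults, not something stated in the post): flow_from_directory resizes everything to target_size=(256, 256) and loads color_mode='rgb' unless told otherwise, while the model below was built for (512, 512, 1) inputs. TF2 Keras often only warns about such an input mismatch and then fails deeper in the graph, which would fit the 16-vs-32 spatial sizes in the error. A sketch with the sizes pinned explicitly:

image_generator = image_datagen.flow_from_directory(
    '/content/drive/MyDrive/DP/preprocess_images/images/final_ds/orig_folder/',
    target_size=(512, 512),   # default is (256, 256)
    color_mode='grayscale',   # default is 'rgb'
    batch_size=16,
    class_mode=None,
    seed=seed)
# ...and the same arguments for mask_generator.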
callbacks = [
ModelCheckpoint('unet_512.h5', verbose=1, save_best_only=True),
EarlyStopping(patience=5, monitor='val_loss'),
TensorBoard(log_dir='logs_unet512')
]
history = model.fit(train_generator,
verbose=1,
epochs=50,
callbacks=callbacks,
# class_weight=class_weights,
shuffle=False)
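A side note on the zip above: zipping two DirectoryIterators yields an endless stream of batches, so model.fit cannot tell where an epoch ends. A minimal sketch of one common workaround, assuming the image_generator and mask_generator defined earlier:

def combine_generators(img_gen, msk_gen):
    # Both iterators repeat forever and stay paired because they were
    # created with the same seed and batch size.
    while True:
        yield next(img_gen), next(msk_gen)

train_generator = combine_generators(image_generator, mask_generator)

# With an infinite generator, fit needs an explicit epoch length:
# history = model.fit(train_generator, steps_per_epoch=len(image_generator), ...)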
So far I haven't dealt with creating a data generator for the validation data, because I can't even get this part working.
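One possible way to add that later, sketched as an assumption rather than something from the original post, is ImageDataGenerator's built-in validation_split together with the subset argument of flow_from_directory (images_dir stands in for the image path used above):

data_gen_args = dict(rescale=1./255, validation_split=0.2)  # hold out 20%
image_datagen = ImageDataGenerator(**data_gen_args)

train_images = image_datagen.flow_from_directory(
    images_dir, batch_size=16, class_mode=None,
    subset='training', seed=seed)
val_images = image_datagen.flow_from_directory(
    images_dir, batch_size=16, class_mode=None,
    subset='validation', seed=seed)
# The same pair of calls on the mask generator keeps the image/mask
# splits aligned through the shared seed.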
For the curious, here is the model.
from tensorflow.keras.layers import (Input, Conv2D, Conv2DTranspose, Dropout,
                                     MaxPooling2D, concatenate)
from tensorflow.keras.models import Model

# IMG_HEIGHT=512, IMG_WIDTH=512, IMG_CHANNELS=1
inputs = Input((IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS))
s = inputs
# Contraction path
c1 = Conv2D(16, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(s)
c1 = Dropout(0.1)(c1)
c1 = Conv2D(16, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c1)
p1 = MaxPooling2D((2, 2))(c1)
c2 = Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(p1)
c2 = Dropout(0.1)(c2)
c2 = Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c2)
p2 = MaxPooling2D((2, 2))(c2)
c3 = Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(p2)
c3 = Dropout(0.2)(c3)
c3 = Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c3)
p3 = MaxPooling2D((2, 2))(c3)
c4 = Conv2D(128, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(p3)
c4 = Dropout(0.2)(c4)
c4 = Conv2D(128, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c4)
p4 = MaxPooling2D(pool_size=(2, 2))(c4)
c5 = Conv2D(256, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(p4)
c5 = Dropout(0.3)(c5)
c5 = Conv2D(256, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c5)
# Expansive path
u6 = Conv2DTranspose(128, (2, 2), strides=(2, 2), padding='same')(c5)
u6 = concatenate([u6, c4])
c6 = Conv2D(128, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(u6)
c6 = Dropout(0.2)(c6)
c6 = Conv2D(128, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c6)
u7 = Conv2DTranspose(64, (2, 2), strides=(2, 2), padding='same')(c6)
u7 = concatenate([u7, c3])
c7 = Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(u7)
c7 = Dropout(0.2)(c7)
c7 = Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c7)
u8 = Conv2DTranspose(32, (2, 2), strides=(2, 2), padding='same')(c7)
u8 = concatenate([u8, c2])
c8 = Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(u8)
c8 = Dropout(0.1)(c8)
c8 = Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c8)
u9 = Conv2DTranspose(16, (2, 2), strides=(2, 2), padding='same')(c8)
u9 = concatenate([u9, c1], axis=3)
c9 = Conv2D(16, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(u9)
c9 = Dropout(0.1)(c9)
c9 = Conv2D(16, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c9)
# n_classes=4
outputs = Conv2D(n_classes, (1, 1), activation='softmax')(c9)
model = Model(inputs=[inputs], outputs=[outputs])
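The compile step isn't shown in the question; with a softmax over n_classes channels and one-hot encoded masks, a typical setup (an assumption, not taken from the post) would be:

model.compile(optimizer='adam',
              loss='categorical_crossentropy',  # expects (H, W, n_classes) one-hot masks
              metrics=['accuracy'])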
EDIT: I also plan to increase the number of filters; for now I'm running the model that previously worked on my personal laptop.
Comments:

Have you tried making sure the TensorFlow versions on your machine and in Colab are the same? I assume you used TensorFlow 1.x layers, but Colab defaults to TF 2.x.

Not yet; the environment on my local machine also has TF 2.x, so I hope that shouldn't be the problem. However, I think I got it working with a custom data generator (the images and masks are converted to arrays and expanded first; then the masks are one-hot encoded and the images normalized). I'll double-check once I'm back at my PC and close the question here if it really works.

Answer 1:

I didn't find a way to make this work with the Keras built-in implementation, but a custom generator does the trick. Keras seems to handle most tasks well out of the box, but hopefully multiclass semantic segmentation will be added some day.
您是否尝试过确保您的机器和 colab 中的 tensorflow 版本相同?我假设您使用过 tensorflow 1.x 层,但 colab 默认使用 tf 2.x。 我还没有在我的本地机器上的环境中也有 tf 2.x。我希望,这不应该是一个问题。但是我认为我使它与自定义数据生成器一起工作(图像和掩码在生成数组并扩展之前。然后掩码单热编码和图像标准化)。回到 PC 后,我会仔细检查并在此处关闭问题(如果它确实有效)。 【参考方案1】:我没有找到使它与 keras 内置实现一起使用的方法,但是自定义生成器可以解决问题。似乎大部分任务都处理得很好,但是希望有一天会添加多类语义分割
【讨论】:
以上是关于使用 ImageDataGenerator 进行多类分割时训练 U-Net 的问题的主要内容,如果未能解决你的问题,请参考以下文章
使用 ImageDataGenerator 进行 Keras 数据增强(您的输入没有数据)
如何从大型 .h5 数据集中批量读取数据,使用 ImageDataGenerator 和 model.fit 进行预处理,所有这些都不会耗尽内存?
Keras ImageDataGenerator 不处理符号链接文件