How to display custom images in TensorBoard using Keras?
Posted: 2017-10-02 18:05:05

I'm working on a segmentation problem in Keras, and I would like to display the segmentation results at the end of every training epoch.
I would like something similar to Tensorflow: How to Display Custom Images in Tensorboard (e.g. Matplotlib Plots), but using Keras. I know Keras has the TensorBoard callback, but it seems limited for this purpose.

I know this would break the Keras backend abstraction, but I'm still interested in using the TensorFlow backend.

Is it possible to achieve this with Keras + TensorFlow?
【Comments】:
- This is not an answer to your question, but I have a question of my own: are you following any image segmentation tutorial for Keras or Tensorflow? Any source or reference would help! Thanks in advance
- @ShivamKotwalia Check this: github.com/jocicmarko/ultrasound-nerve-segmentation
- Hi Fábio! Did you find a solution to this problem? I would be interested in knowing it. Thanks!
- @Rouky Not really, but you can use a callback to save temporary images to a directory. A TensorBoard solution would of course be nicer, but after using this workaround I didn't try again, since it was good enough for me.
- @Fabio OK, thanks! I found out how to display images in tensorboard, but I get one line per epoch and per prediction (and that matters!)... instead of one line per image with a slider to select the epoch... not perfect yet...

【Answer 1】:

So, the following solution works well for me:
import tensorflow as tf

def make_image(tensor):
    """
    Convert an numpy representation image to Image protobuf.
    Copied from https://github.com/lanpa/tensorboard-pytorch/
    """
    from PIL import Image
    height, width, channel = tensor.shape
    image = Image.fromarray(tensor)
    import io
    output = io.BytesIO()
    image.save(output, format='PNG')
    image_string = output.getvalue()
    output.close()
    return tf.Summary.Image(height=height,
                            width=width,
                            colorspace=channel,
                            encoded_image_string=image_string)
import keras
import skimage.util
from skimage import data

class TensorBoardImage(keras.callbacks.Callback):
    def __init__(self, tag):
        super().__init__()
        self.tag = tag

    def on_epoch_end(self, epoch, logs={}):
        # Load image
        img = data.astronaut()
        # Do something to the image
        img = (255 * skimage.util.random_noise(img)).astype('uint8')

        image = make_image(img)
        summary = tf.Summary(value=[tf.Summary.Value(tag=self.tag, image=image)])
        writer = tf.summary.FileWriter('./logs')
        writer.add_summary(summary, epoch)
        writer.close()

        return

tbi_callback = TensorBoardImage('Image Example')
Just pass the callback to fit or fit_generator.
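A minimal usage sketch (model, x_train and y_train are placeholders here, not from the original answer):

model.fit(x_train, y_train,
          epochs=10,
          callbacks=[tbi_callback])

# Then point TensorBoard at the directory the callback writes to:
#   tensorboard --logdir=./logs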
Note that you can also run some operations with model inside the callback. For example, you may run the model on some images to check its performance.
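A minimal sketch of that idea (not from the original answer): it assumes a small batch of validation inputs, val_images, is handed to the callback, and that the prediction for a single sample can be rescaled to an H x W x 3 uint8 image so make_image() from above can encode it:

class TensorBoardPrediction(keras.callbacks.Callback):
    def __init__(self, tag, val_images, log_dir='./logs'):
        super().__init__()
        self.tag = tag
        self.val_images = val_images  # assumption: a small numpy batch of model inputs
        self.log_dir = log_dir

    def on_epoch_end(self, epoch, logs=None):
        # A callback can reach its model through self.model.
        preds = self.model.predict(self.val_images)
        # assumption: preds[0] is H x W x 3 with values in [0, 1]
        img = (255 * preds[0]).astype('uint8')
        image = make_image(img)  # reuses make_image() defined above
        summary = tf.Summary(value=[tf.Summary.Value(tag=self.tag, image=image)])
        writer = tf.summary.FileWriter(self.log_dir)
        writer.add_summary(summary, epoch)
        writer.close()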
【Comments】:
- Awesome! This is roughly what I ended up doing (calling the model inside the callback, as you said), but inheriting from the existing keras Tensorboard callback. Creating another tensorboard callback just for this purpose is actually a very good idea! Thanks for taking the time to write this code, I'm sure it will be useful to many people.
- img = data.astronaut() gives me an error: where exactly does data come from? I'm trying to use X_train and Y_train as the images.
- @payne It comes from scikit-image: scikit-image.org/docs/dev/api/…
- @FábioPerez My question was more about how to get my x_train and y_train data (inputs and corresponding ground truth) from within the on_epoch_end method.
- model.something: what is something? Also, how do I get the output of the network as an image?

【Answer 2】:
Based on the answers above and my own searching, I provide the following code to accomplish the following things using TensorBoard in Keras:

- problem setup: predicting the disparity map in binocular stereo matching;
- feed the model an input left image x and the ground truth disparity map gt;
- display the input x and the ground truth gt at some iteration time;
- display the output y of the model at some iteration time.
First, you have to build your customized callback class with Callback. Note that a callback has access to its associated model through the class property self.model. Also note: you have to feed the input to the model with feed_dict if you want to get and display the output of your model.
from keras.callbacks import Callback
import numpy as np
from keras import backend as K
import tensorflow as tf
import cv2

# make the 1 channel input image or disparity map look good within this color map.
# This function is not necessary for this Tensorboard problem shown as above.
# Just a function used in my own research project.
def colormap_jet(img):
    return cv2.cvtColor(cv2.applyColorMap(np.uint8(img), 2), cv2.COLOR_BGR2RGB)

class customModelCheckpoint(Callback):
    def __init__(self, log_dir='./logs/tmp/', feed_inputs_display=None):
        super(customModelCheckpoint, self).__init__()
        self.seen = 0
        self.feed_inputs_display = feed_inputs_display
        self.writer = tf.summary.FileWriter(log_dir)

    # this function will set the feeding data for TensorBoard visualization;
    # arguments:
    #  * feed_inputs_display : [(input_yourModelNeed, left_image, disparity_gt), ...,
    #    (input_yourModelNeed, left_image, disparity_gt), ...], i.e., the list of tuples of
    #    Numpy Arrays your model needs as input and that you want to display using TensorBoard.
    #    Note: you have to feed the input to the model with feed_dict if you want to get and
    #    display the output of your model.
    def custom_set_feed_input_to_display(self, feed_inputs_display):
        self.feed_inputs_display = feed_inputs_display

    # copied from the answer above;
    def make_image(self, numpy_img):
        from PIL import Image
        height, width, channel = numpy_img.shape
        image = Image.fromarray(numpy_img)
        import io
        output = io.BytesIO()
        image.save(output, format='PNG')
        image_string = output.getvalue()
        output.close()
        return tf.Summary.Image(height=height, width=width, colorspace=channel,
                                encoded_image_string=image_string)

    # A callback has access to its associated model through the class property self.model.
    def on_batch_end(self, batch, logs=None):
        logs = logs or {}
        self.seen += 1
        if self.seen % 200 == 0:  # every 200 iterations or batches, plot the custom images using TensorBoard;
            summary_str = []
            for i in range(len(self.feed_inputs_display)):
                feature, disp_gt, imgl = self.feed_inputs_display[i]
                disp_pred = np.squeeze(K.get_session().run(self.model.output,
                                                           feed_dict={self.model.input: feature}),
                                       axis=0)
                #disp_pred = np.squeeze(self.model.predict_on_batch(feature), axis=0)
                summary_str.append(tf.Summary.Value(tag='plot/img0/{}'.format(i), image=self.make_image(colormap_jet(imgl))))  # function colormap_jet(), defined above;
                summary_str.append(tf.Summary.Value(tag='plot/disp_gt/{}'.format(i), image=self.make_image(colormap_jet(disp_gt))))
                summary_str.append(tf.Summary.Value(tag='plot/disp/{}'.format(i), image=self.make_image(colormap_jet(disp_pred))))
            self.writer.add_summary(tf.Summary(value=summary_str), global_step=self.seen)
Next, pass this callback object to your model's fit_generator(), for example:
feed_inputs_4_display = some_function_you_wrote()
callback_mc = customModelCheckpoint(log_dir=log_save_path, feed_inputs_display=feed_inputs_4_display)
# or
callback_mc.custom_set_feed_input_to_display(feed_inputs_4_display)

yourModel.fit_generator(..., callbacks=[callback_mc])
...
Now you can run the code and go to the TensorBoard host to see the customized image display. For example, this is what I got using the code above.

Done! Enjoy!
【Comments】:
- What is the code of the function some_function_you_wrote()? How did you handle the batch dimension? I get an error because the model accepts a 4-D tensor, not a 3-D tensor! For me, some_function_you_wrote() returns a numpy array of a 3-D rgb image!
- What would tf.Summary.Image(height=height, width=width, colorspace=channel, encoded_image_string=image_string) be in Tensorflow 2?

【Answer 3】:
I was trying to display matplotlib plots on tensorboard (useful when plotting statistics, heatmaps, etc.). It can also be used for the general case.
import keras
import tensorflow as tf
import tfmpl
from keras import backend as K

class AttentionLogger(keras.callbacks.Callback):
    def __init__(self, val_data, logsdir):
        super(AttentionLogger, self).__init__()
        self.logsdir = logsdir  # where the event files will be written
        self.validation_data = val_data  # validation data generator
        self.writer = tf.summary.FileWriter(self.logsdir)  # creating the summary writer

    @tfmpl.figure_tensor
    def attention_matplotlib(self, gen_images):
        '''
        Creates a matplotlib figure and writes it to tensorboard using tf-matplotlib
        gen_images: The image tensor of shape (batchsize,width,height,channels) you want to write to tensorboard
        '''
        r, c = 5, 5  # want to write 25 images as a 5x5 matplotlib subplot in TBD (tensorboard)
        figs = tfmpl.create_figures(1, figsize=(15, 15))
        cnt = 0
        for idx, f in enumerate(figs):
            for i in range(r):
                for j in range(c):
                    ax = f.add_subplot(r, c, cnt + 1)
                    ax.set_yticklabels([])
                    ax.set_xticklabels([])
                    ax.imshow(gen_images[cnt])  # writes the image at index cnt to the 5x5 grid
                    cnt += 1
            f.tight_layout()
        return figs

    def on_train_begin(self, logs=None):  # when the training begins (run only once)
        image_summary = []  # creating a list of summaries needed (can be scalar, images, histograms etc)
        for index in range(len(self.model.output)):  # self.model is accessible within callback
            img_sum = tf.summary.image('img{}'.format(index), self.attention_matplotlib(self.model.output[index]))
            image_summary.append(img_sum)
        self.total_summary = tf.summary.merge(image_summary)

    def on_epoch_end(self, epoch, logs=None):  # at the end of each epoch run this
        logs = logs or {}
        x, y = next(self.validation_data)  # get data from the generator
        # get the backend session and run the merged summary with the appropriate feed_dict
        sess_run_summary = K.get_session().run(self.total_summary,
                                               feed_dict={self.model.input: x['encoder_input']})
        self.writer.add_summary(sess_run_summary, global_step=epoch)  # finally write the summary!
Then you have to supply it as an argument to fit/fit_generator:
# val_generator is the validation data generator
callback_image = AttentionLogger(logsdir='./tensorboard', val_data=val_generator)

...  # define the model and generators

# autoencoder is the model, note how the callback is supplied to fit_generator
autoencoder.fit_generator(generator=train_generator,
                          validation_data=val_generator,
                          callbacks=[callback_image])
In my case, where I display attention maps (as heatmaps) on tensorboard, this is the output.
【Answer 4】:

Similarly, you may want to try tf-matplotlib. Here is a scatter plot:
import tensorflow as tf
import numpy as np

import tfmpl

@tfmpl.figure_tensor
def draw_scatter(scaled, colors):
    '''Draw scatter plots. One for each color.'''
    figs = tfmpl.create_figures(len(colors), figsize=(4, 4))
    for idx, f in enumerate(figs):
        ax = f.add_subplot(111)
        ax.axis('off')
        ax.scatter(scaled[:, 0], scaled[:, 1], c=colors[idx])
        f.tight_layout()
    return figs

with tf.Session(graph=tf.Graph()) as sess:
    # A point cloud that can be scaled by the user
    points = tf.constant(
        np.random.normal(loc=0.0, scale=1.0, size=(100, 2)).astype(np.float32)
    )
    scale = tf.placeholder(tf.float32)
    scaled = points * scale

    # Note, `scaled` above is a tensor. It's being passed to `draw_scatter` below.
    # However, when `draw_scatter` is invoked, the tensor will be evaluated and a
    # numpy array representing its content is provided.
    image_tensor = draw_scatter(scaled, ['r', 'g'])
    image_summary = tf.summary.image('scatter', image_tensor)
    all_summaries = tf.summary.merge_all()

    writer = tf.summary.FileWriter('log', sess.graph)
    summary = sess.run(all_summaries, feed_dict={scale: 2.})
    writer.add_summary(summary, global_step=0)
When executed, this produces the following plot inside Tensorboard.

Note that tf-matplotlib takes care of evaluating any tensor inputs, avoids pyplot threading issues, and supports blitting for runtime-critical plotting.
【Answer 5】:

I believe I found a better way to log such custom images to tensorboard making use of tf-matplotlib. Here is how...
class TensorBoardDTW(tf.keras.callbacks.TensorBoard):
    def __init__(self, **kwargs):
        super(TensorBoardDTW, self).__init__(**kwargs)
        self.dtw_image_summary = None

    def _make_histogram_ops(self, model):
        super(TensorBoardDTW, self)._make_histogram_ops(model)
        tf.summary.image('dtw-cost', create_dtw_image(model.output))
One just needs to overwrite the _make_histogram_ops method from the TensorBoard callback class to add the custom summary. In my case, create_dtw_image is a function that creates an image using tf-matplotlib.
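create_dtw_image itself is not shown in this answer; purely as an illustration, a function of that kind might look roughly like the sketch below, following the tfmpl pattern from the other answers (the assumption that the output contains a 2-D DTW cost matrix per sample is a placeholder, not from the original answer):

import tfmpl

@tfmpl.figure_tensor
def create_dtw_image(model_output):
    '''Hypothetical sketch: render the model output (assumed to be a batch of
    2-D DTW cost matrices) as a matplotlib figure for tf.summary.image.'''
    figs = tfmpl.create_figures(1, figsize=(6, 6))
    for f in figs:
        ax = f.add_subplot(111)
        ax.imshow(model_output[0], cmap='viridis')  # assumes a 2-D cost matrix per sample
        ax.set_title('dtw-cost')
        f.tight_layout()
    return figs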
Regards.
【Answer 6】:

Here is an example of how to draw landmarks on an image:
import numpy as np
import tensorflow as tf
import keras

class CustomCallback(keras.callbacks.Callback):
    def __init__(self, model, generator):
        self.generator = generator
        self.model = model

    def tf_summary_image(self, tensor):
        import io
        from PIL import Image

        tensor = tensor.astype(np.uint8)

        height, width, channel = tensor.shape
        image = Image.fromarray(tensor)
        output = io.BytesIO()
        image.save(output, format='PNG')
        image_string = output.getvalue()
        output.close()
        return tf.Summary.Image(height=height,
                                width=width,
                                colorspace=channel,
                                encoded_image_string=image_string)

    def on_epoch_end(self, epoch, logs={}):
        frames_arr, landmarks = next(self.generator)

        # Take just 1st sample from batch
        frames_arr = frames_arr[0:1, ...]
        y_pred = self.model.predict(frames_arr)

        # Get last frame for which we have done predictions
        img = frames_arr[0, -1, :, :]
        img = img * 255
        img = img[:, :, ::-1]
        img = np.copy(img)

        landmarks_gt = landmarks[-1].reshape(-1, 2)
        landmarks_pred = y_pred.reshape(-1, 2)

        # draw_landmarks() is a user-defined helper (not shown in this answer)
        img = draw_landmarks(img, landmarks_gt, (0, 255, 0))
        img = draw_landmarks(img, landmarks_pred, (0, 0, 255))

        image = self.tf_summary_image(img)
        summary = tf.Summary(value=[tf.Summary.Value(image=image)])
        writer = tf.summary.FileWriter('./logs')
        writer.add_summary(summary, epoch)
        writer.close()
        return
【Answer 7】:

import tensorflow as tf
import numpy as np
from keras import backend as K
from keras.callbacks import Callback

class customModelCheckpoint(Callback):
    def __init__(self, log_dir='../logs/', feed_inputs_display=None):
        super(customModelCheckpoint, self).__init__()
        self.seen = 0
        self.feed_inputs_display = feed_inputs_display
        self.writer = tf.summary.FileWriter(log_dir)

    def custom_set_feed_input_to_display(self, feed_inputs_display):
        self.feed_inputs_display = feed_inputs_display

    # A callback has access to its associated model through the class property self.model.
    def on_batch_end(self, batch, logs=None):
        logs = logs or {}
        self.seen += 1
        if self.seen % 8 == 0:  # every 8 iterations or batches, plot the custom images using TensorBoard;
            summary_str = []
            feature = self.feed_inputs_display[0][0]
            disp_gt = self.feed_inputs_display[0][1]
            disp_pred = self.model.predict_on_batch(feature)

            summary_str.append(tf.summary.image('disp_input/{}'.format(self.seen), feature, max_outputs=4))
            summary_str.append(tf.summary.image('disp_gt/{}'.format(self.seen), disp_gt, max_outputs=4))
            summary_str.append(tf.summary.image('disp_pred/{}'.format(self.seen), disp_pred, max_outputs=4))

            summary_st = tf.summary.merge(summary_str)
            summary_s = K.get_session().run(summary_st)
            self.writer.add_summary(summary_s, global_step=self.seen)
            self.writer.flush()
Then you can call your custom callback and write the images to tensorboard:
callback_mc = customModelCheckpoint(log_dir='../logs/', feed_inputs_display=[(a, b)])
callback_tb = TensorBoard(log_dir='../logs/', histogram_freq=0, write_graph=True, write_images=True)
callback = []

def data_gen(fr1, fr2):
    while True:
        hdr_arr = []
        ldr_arr = []
        for i in range(args['batch_size']):
            try:
                ldr = pickle.load(fr2)
                hdr = pickle.load(fr1)
            except EOFError:
                fr1 = open(args['data_h_hdr'], 'rb')
                fr2 = open(args['data_h_ldr'], 'rb')
            hdr_arr.append(hdr)
            ldr_arr.append(ldr)
        hdr_h = np.array(hdr_arr)
        ldr_h = np.array(ldr_arr)
        gen = aug.flow(hdr_h, ldr_h, batch_size=args['batch_size'])
        out = gen.next()
        a = out[0]
        b = out[1]
        callback_mc.custom_set_feed_input_to_display(feed_inputs_display=[(a, b)])
        yield [a, b]

callback.append(callback_tb)
callback.append(callback_mc)
H = model.fit_generator(data_gen(fr1, fr2), steps_per_epoch=100, epochs=args['epoch'], callbacks=callback)
【Answer 8】:

The existing answers here and elsewhere were an excellent starting point, but I found they needed some tweaking to work with Tensorflow 2.x and keras flow_from_directory*. This is what I came up with.
My goal was to verify the data augmentation process, so the images I write to tensorboard are the augmented training data. That's not quite what the OP wants. They would have to change on_batch_end to on_epoch_end and access the model outputs (which I haven't looked into, but I'm sure it is possible); a rough sketch of that variation follows below.
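A minimal sketch of that variation (not part of the original answer), assuming Tensorflow 2.x, a small batch of validation inputs val_images, and model predictions that can be interpreted directly as images with values in [0, 1]:

import tensorflow as tf

class TensorBoardPredictionImage(tf.keras.callbacks.Callback):
    def __init__(self, logdir, val_images):
        super(TensorBoardPredictionImage, self).__init__()
        self.val_images = val_images  # assumption: a small batch of validation inputs
        self.file_writer = tf.summary.create_file_writer(logdir)

    def on_epoch_end(self, epoch, logs=None):
        preds = self.model.predict(self.val_images)
        # assumption: preds has shape (batch, height, width, channels) with values in [0, 1],
        # which tf.summary.image accepts directly
        with self.file_writer.as_default():
            tf.summary.image('predictions', preds, step=epoch, max_outputs=4)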
Similar to Fabio Perez's answer with the astronaut, you can scroll through the epochs by dragging the orange slider, showing different augmented copies of each image that has been written to tensorboard. Be careful with large datasets trained over many epochs: since this routine saves a copy of one out of every 1000 images in each epoch, you may end up with a large tfevents file.
The callback function, saved as tensorflow_image_callback.py:
import tensorflow as tf
import math

class TensorBoardImage(tf.keras.callbacks.Callback):
    def __init__(self, logdir, train, validation=None):
        super(TensorBoardImage, self).__init__()
        self.logdir = logdir
        self.train = train
        self.validation = validation
        self.file_writer = tf.summary.create_file_writer(logdir)

    def on_batch_end(self, batch, logs):
        images_or_labels = 0  # 0=images, 1=labels
        imgs = self.train[batch][images_or_labels]

        # calculate epoch
        n_batches_per_epoch = self.train.samples / self.train.batch_size
        epoch = math.floor(self.train.total_batches_seen / n_batches_per_epoch)

        # since the training data is shuffled each epoch, we need to use the index_array to find something
        # which uniquely identifies the image and is constant throughout training
        first_index_in_batch = batch * self.train.batch_size
        last_index_in_batch = first_index_in_batch + self.train.batch_size
        last_index_in_batch = min(last_index_in_batch, len(self.train.index_array))
        img_indices = self.train.index_array[first_index_in_batch : last_index_in_batch]

        # convert float to uint8, shift range to 0-255
        imgs -= tf.reduce_min(imgs)
        imgs *= 255 / tf.reduce_max(imgs)
        imgs = tf.cast(imgs, tf.uint8)

        with self.file_writer.as_default():
            for ix, img in enumerate(imgs):
                img_tensor = tf.expand_dims(img, 0)  # tf.summary needs a 4D tensor
                # only post 1 out of every 1000 images to tensorboard
                if (img_indices[ix] % 1000) == 0:
                    # instead of img_filename, I could just use str(img_indices[ix]) as a unique identifier
                    # but this way makes it easier to find the unaugmented image
                    img_filename = self.train.filenames[img_indices[ix]]
                    tf.summary.image(img_filename, img_tensor, step=epoch)
Integrate it with your training like this:
from tensorflow import keras

import tensorflow_image_callback  # the callback module defined above

train_augmentation = keras.preprocessing.image.ImageDataGenerator(rotation_range=20,
                                                                   shear_range=10,
                                                                   zoom_range=0.2,
                                                                   width_shift_range=0.2,
                                                                   height_shift_range=0.2,
                                                                   brightness_range=[0.8, 1.2],
                                                                   horizontal_flip=False,
                                                                   vertical_flip=False
                                                                   )
train_data_generator = train_augmentation.flow_from_directory(directory='/some/path/train/',
                                                               class_mode='categorical',
                                                               batch_size=batch_size,
                                                               shuffle=True
                                                               )

valid_augmentation = keras.preprocessing.image.ImageDataGenerator()
valid_data_generator = valid_augmentation.flow_from_directory(directory='/some/path/valid/',
                                                              class_mode='categorical',
                                                              batch_size=batch_size,
                                                              shuffle=False
                                                              )

tensorboard_log_dir = '/some/path'
tensorboard_callback = keras.callbacks.TensorBoard(log_dir=tensorboard_log_dir, update_freq='batch')
tensorboard_image_callback = tensorflow_image_callback.TensorBoardImage(logdir=tensorboard_log_dir,
                                                                        train=train_data_generator,
                                                                        validation=valid_data_generator)

model.fit(x=train_data_generator,
          epochs=n_epochs,
          validation_data=valid_data_generator,
          validation_freq=1,
          callbacks=[
              tensorboard_callback,
              tensorboard_image_callback
          ])
*I later realized that flow_from_directory has a save_to_dir option, which would have been sufficient for my purposes. Simply adding that option is much simpler, but using a callback like this has the added benefit of displaying the images in TensorBoard, where multiple versions of the same image can be compared, and it allows customizing the number of images saved. save_to_dir saves a copy of every single augmented image, which quickly adds up to a lot of space.
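For comparison, a minimal sketch of that simpler save_to_dir alternative (the preview directory path is a placeholder):

# Writes a copy of every augmented image to disk instead of to TensorBoard.
train_data_generator = train_augmentation.flow_from_directory(directory='/some/path/train/',
                                                               class_mode='categorical',
                                                               batch_size=batch_size,
                                                               shuffle=True,
                                                               save_to_dir='/some/path/augmented/',
                                                               save_prefix='aug',
                                                               save_format='png')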