Tensorflow Inception batch classification slower at each iteration

Posted 2017-09-19 17:48:47

Question:

I retrained the final layer of Inception on my own categories using this tutorial from tensorflow.com. I am a Tensorflow beginner, and my goal is to classify 30,000 images for a project at work.

After retraining the last layer on my own labels, I grabbed about 20 unseen images and added them (their full file paths) to a pandas dataframe. Next, I feed each image in the dataframe to the image classifier and, once it has been classified, write the top predicted label and its reliability score into two other columns of the same row.

To feed the images to the classifier I tried df.iterrows(), df.apply(function), and three separate hard-coded file paths (see the code below, where I left them commented out). However, no matter how I feed the images, classifying an image takes longer with every iteration. Pic[0] starts at 2.2 seconds to classify, but by Pic[19] this has grown to 23 seconds. Imagine how long pic 10,000, 20,000, and so on would take. On top of that, cpu and memory usage slowly creep up while the files are being classified, although not dramatically.

Please see my code below (most of it, apart from the pandas bookkeeping and the part that triggers classification, is taken from this example mentioned in the tensorflow tutorial above).

import os
import sys
import time
import gc
import psutil
import numpy as np
import pandas as pd
import tensorflow as tf


modelFullPath = '/Users/jaap/tf_files/retrained_graph.pb'
labelsFullPath = '/Users/jaap/tf_files/retrained_labels.txt'    

def create_graph():
    """Creates a graph from saved GraphDef file and returns a saver."""
    # Creates graph from saved graph_def.pb.
    with tf.gfile.FastGFile(modelFullPath, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        _ = tf.import_graph_def(graph_def, name='')    


def run_inference_on_image(image):
    answer = None
    imagePath = image
    print(imagePath)
    if not tf.gfile.Exists(imagePath):
        tf.logging.fatal('File does not exist %s', imagePath)
        return answer    

    image_data = tf.gfile.FastGFile(imagePath, 'rb').read()    

    # Creates graph from saved GraphDef.
    create_graph()    

    with tf.Session() as sess:    

        softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')
        predictions = sess.run(softmax_tensor,
                               {'DecodeJpeg/contents:0': image_data})
        predictions = np.squeeze(predictions)    

        top_k = predictions.argsort()[-5:][::-1]  # Getting top 5 predictions
        f = open(labelsFullPath, 'r')
        lines = f.readlines()
        labels = [str(w).replace("\n", "") for w in lines]
        for node_id in top_k:
            human_string = labels[node_id]
            score = predictions[node_id]
            print('%s (score = %.5f)' % (human_string, score))
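            # note: returning inside the loop means only the top prediction (label, score) is used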
            return human_string, score    


werkmap = '/Users/jaap/tf_files/test/'
filelist = []
files_in_dir = os.listdir('/Users/jaap/tf_files/test/')
for f in files_in_dir:
    if f != '.DS_Store':
        filelist.append(werkmap+f)    

df = pd.DataFrame(filelist, index=None, columns=['Pics'])
df = df.drop_duplicates()
df['Class'] = ''
df['Reliability'] = ''    

print(df)    


#--------------------------------------------------------
for index, pic in df.iterrows():
    start = time.time()
    df['Class'][index] = run_inference_on_image(pic[0])
    stop = time.time()
    duration = stop - start
    print("duration = %s" % duration)
    print("cpu usage: %s" % psutil.cpu_percent())
    print("memory usage: %s " % psutil.virtual_memory())
    print("")

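# run_inference_on_image returned a (label, score) tuple; after astype(str) its
# string form is split on the first comma into the Class and Reliability columns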
df['Class'] = df['Class'].astype(str)
df['Class'], df['Reliability'] = df['Class'].str.split(',', 1).str    

#-------------------------------------------------        

# df['Class'] = df['Pics'].apply(run_inference_on_image)
# df['Class'] = df['Class'].astype(str)
# df['Class'], df['Reliability'] = df['Class'].str.split(',', 1).str
# print(df)    

#--------------------------------------------------------------
# start = time.time()
# ja = run_inference_on_image('/Users/jaap/tf_files/test/12345_1.jpg')
# stop = time.time()
# duration = stop - start
# print("duration = %s" % duration)  

# start = time.time()
# ja = run_inference_on_image('/Users/jaap/tf_files/test/12345_2.jpg')
# stop = time.time()
# duration = stop - start
# print("duration = %s" % duration)    

# start = time.time()
# ja = run_inference_on_image('/Users/jaap/tf_files/test/12345_3.jpg')
# stop = time.time()
# duration = stop - start
# print("duration = %s" % duration)    

Thanks for your help!

Comments:

I bypassed the problem by calling the python script from a shell script, once per image. Classification time is now stable at around 2.5 seconds. Python seems to keep adding classification state to memory, making the script heavier with every iteration.
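For illustration, the same per-image workaround could also be driven from Python with subprocess instead of a shell script; classify_one.py below is a hypothetical single-image version of the code above that takes a file path as its first argument:

import subprocess

# Launch a fresh interpreter for every image so that no graph or session
# state accumulates between classifications.
for path in filelist:
    subprocess.call(['python', 'classify_one.py', path])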

Answer 1:

It looks like you are creating the entire graph for every inference. That would make it slower. Instead, you can do something like this:

with tf.Graph().as_default():
  create_graph()
  with tf.Session() as sess:
    for index, pic in df.iterrows():
      start = time.time()
      df['Class'][index] = run_inference_on_image(pic[0], sess)
      stop = time.time()
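A sketch of what the adapted run_inference_on_image might look like, reusing the imports and labelsFullPath from the question's script (an illustration of the idea above, not code from the original answer):

def run_inference_on_image(image_path, sess):
    # Reuse the session (and the graph it belongs to) instead of importing
    # the GraphDef again for every image.
    image_data = tf.gfile.FastGFile(image_path, 'rb').read()

    softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')
    predictions = sess.run(softmax_tensor,
                           {'DecodeJpeg/contents:0': image_data})
    predictions = np.squeeze(predictions)

    with open(labelsFullPath, 'r') as f:
        labels = [line.strip() for line in f.readlines()]

    # Return only the highest-scoring label, as in the original code
    best = predictions.argsort()[-1]
    return labels[best], predictions[best]

That way create_graph() runs once, before the loop, and every image reuses the same graph and session.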

