Tensorflow: Modern way to load large data
Posted: 2020-01-03 03:12:08

Question: I want to train a convolutional neural network (using tf.keras from Tensorflow version 1.13) with numpy arrays as input data. The training data (which I currently store in a single >30GB '.npz' file) does not fit into RAM all at once. What is the best way to save and load large datasets into a neural network for training? Since I haven't found a good answer to this (surely ubiquitous?) problem, I hope to hear one here. Many thanks in advance for any help!
Sources

Similar questions seem to have been asked many times (e.g. training-classifier-from-tfrecords-in-tensorflow, tensorflow-synchronize-readings-from-tfrecord, how-to-load-data-parallelly-in-tensorflow), but they are several years old and usually have no definitive answer.

My current understanding is that using TFRecord files is a good way to approach this problem. The most promising tutorial I have found so far that explains how to use TFRecord files with keras is medium.com. Other helpful sources are machinelearninguru.com and medium.com_source2 and the sources they cite.

The official tensorflow documentation and tutorials (tf.data.Dataset, Importing Data, tf_records etc.) did not help me. In particular, several of the examples given there did not work for me even without modifications.

My attempt at using TFRecord files

I assume TFRecords are a good way to solve my problem, but I am having a hard time using them. Here is an example I made based on the tutorial medium.com. I stripped the code down as much as I could.
# python 3.6, tensorflow 1.13.
# Adapted from https://medium.com/@moritzkrger/speeding-up-keras-with-tfrecord-datasets-5464f9836c36
import tensorflow as tf
import numpy as np
from tensorflow.python import keras as keras


# Helper functions (see also https://www.tensorflow.org/tutorials/load_data/tf_records)
def _int64_feature(value):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))


def _bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))


def writeTFRecords():
    number_of_samples = 100  # create some random data to play with
    images, labels = (np.random.sample((number_of_samples, 256, 256, 1)),
                      np.random.randint(0, 30, number_of_samples))
    writer = tf.python_io.TFRecordWriter("bla.tfrecord")
    for index in range(images.shape[0]):
        image = images[index]
        label = labels[index]
        feature = {'image': _bytes_feature(tf.compat.as_bytes(image.tostring())),
                   'label': _int64_feature(int(label))}
        example = tf.train.Example(features=tf.train.Features(feature=feature))
        writer.write(example.SerializeToString())
    writer.close()


def loadTFRecord(data_path):
    with tf.Session() as sess:
        # NOTE: these keys ('train/image', 'train/label') do not match the keys
        # used when writing ('image', 'label') above.
        feature = {'train/image': tf.FixedLenFeature([], tf.string),
                   'train/label': tf.FixedLenFeature([], tf.int64)}
        # Create a list of filenames and pass it to a queue
        filename_queue = tf.train.string_input_producer([data_path], num_epochs=1)
        # Define a reader and read the next record
        reader = tf.TFRecordReader()
        _, serialized_example = reader.read(filename_queue)
        # Decode the record read by the reader
        features = tf.parse_single_example(serialized_example, features=feature)
        # Convert the image data from string back to the numbers
        image = tf.decode_raw(features['train/image'], tf.float32)
        # Cast label data into int32
        label = tf.cast(features['train/label'], tf.int32)
        # Reshape image data into the original shape
        image = tf.reshape(image, [256, 256, 1])
        return image, label  # I'm not 100% sure that's how this works...


# ######### generate a TFRecords file in the working directory containing random data. #################################
writeTFRecords()
# ######## Load the TFRecords file and use it to train a simple example neural network. ################################
image, label = loadTFRecord("bla.tfrecord")
model_input = keras.layers.Input(tensor=image)
model_output = keras.layers.Flatten(input_shape=(-1, 256, 256, 1))(model_input)
model_output = keras.layers.Dense(16, activation='relu')(model_output)
train_model = keras.models.Model(inputs=model_input, outputs=model_output)
train_model.compile(optimizer=keras.optimizers.RMSprop(lr=0.0001),
                    loss='mean_squared_error',
                    target_tensors=[label])
print("\n \n start training \n \n")  # Execution gets stuck on fitting
train_model.fit(epochs=1, steps_per_epoch=10)  # no output or error messages.
The code creates a TFRecord file and starts fitting, then just gets stuck with no output or error messages. I don't know what the problem is or how I could try to fix it.
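A note for readers hitting the same hang: two things stand out in this code. First, the feature keys differ between writing ('image'/'label') and reading ('train/image'/'train/label'), and np.random.sample produces float64 arrays while the reader decodes tf.float32. Second, and plausibly the cause of the silent hang, TF 1.x queue-based pipelines (tf.train.string_input_producer, tf.TFRecordReader) only produce data once queue runners are started and the local variable behind num_epochs is initialized; Keras does not do this automatically. The following is a minimal sketch of the session plumbing such a pipeline expects, assuming a graph like the one above; it is not a verified fix for this exact script.

# Minimal sketch (assumption, not from the original post): start the queue runners
# in the session Keras trains in, so reads from the filename queue can proceed.
sess = keras.backend.get_session()          # the session Keras uses for fit()
sess.run(tf.local_variables_initializer())  # num_epochs=1 in string_input_producer creates a local variable
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)  # without this, queue reads block forever

train_model.fit(epochs=1, steps_per_epoch=10)  # illustrative; training can only proceed once queues are filled

coord.request_stop()
coord.join(threads)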
Comments:
I haven't used TF in a while, so let me add this: have a look at TF's batching/pipelining/ETL (tensorflow.org/guide/performance/datasets). Apparently the dataset hands TF's graph batches small enough to run, while prefetching data from disk in the background.

Still no solution?

@Vimieiro I posted an answer showing a minimal example of the method (TFRecord files and tensorflow datasets) I ended up using for that project.

Answer 1:

While this is not really an answer to the original question (i.e. "what is the best way to train on large datasets"), I managed to get tfrecords and datasets to work. Particularly helpful was this tutorial on YouTube. I provide a minimal example with working code for anyone who runs into the same problem.
# Developed using python 3.6, tensorflow 1.14.0.
# This code writes data (pairs (label, image) where label is int64 and image is np.ndarray) into .tfrecord files and
# uses them for training a simple neural network. It is meant as a minimal working example of how to use tfrecords. This
# solution is likely not optimal. If you know how to improve it, please comment on
# https://***.com/q/57717004/9988487. Refer to links therein for further information.
import tensorflow as tf
import numpy as np
from tensorflow.python import keras as keras


# Helper functions (see also https://www.tensorflow.org/tutorials/load_data/tf_records)
def _int64_feature(value):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))


def _bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))


def write_tfrecords_file(out_path: str, images: np.ndarray, labels: np.ndarray) -> None:
    """Write all image-label pairs into a single .tfrecord file.
    :param out_path: File path of the .tfrecord file to generate or overwrite.
    :param images: array with first dimension being the image index. Every images[i].tostring() is
        serialized and written into the file as 'image': wrap_bytes(img_bytes)
    :param labels: 1d array of integers. labels[i] is the label of images[i]. Written as 'label': wrap_int64(label)"""
    assert len(images) == len(labels)
    with tf.io.TFRecordWriter(out_path) as writer:  # could use writer_options parameter to enable compression
        for i in range(len(labels)):
            img_bytes = images[i].tostring()  # Convert the image to raw bytes.
            label = labels[i]
            data = {'image': _bytes_feature(img_bytes), 'label': _int64_feature(label)}
            feature = tf.train.Features(feature=data)  # Wrap the data as TensorFlow Features.
            example = tf.train.Example(features=feature)  # Wrap again as a TensorFlow Example.
            serialized = example.SerializeToString()  # Serialize the data.
            writer.write(serialized)  # Write the serialized data to the TFRecords file.


def parse_example(serialized, shape=(256, 256, 1)):
    features = {'image': tf.io.FixedLenFeature([], tf.string), 'label': tf.io.FixedLenFeature([], tf.int64)}
    # Parse the serialized data so we get a dict with our data.
    parsed_example = tf.io.parse_single_example(serialized=serialized, features=features)
    label = parsed_example['label']
    image_raw = parsed_example['image']  # Get the image as raw bytes.
    image = tf.decode_raw(image_raw, tf.float32)  # Decode the raw bytes so it becomes a tensor with type.
    image = tf.reshape(image, shape=shape)
    return image, label  # this function will be called once (to add it to tf graph; then parse images individually)


# create some arbitrary data to play with: 10_000 images sized 256x256 with one colour channel. Use your custom np-arrays
IMAGE_WIDTH, NUM_OF_IMAGES, NUM_OF_CLASSES, COLOUR_CHANNELS = 256, 10_000, 10, 1
# using float32 to save memory. Must match type in parse_example(), tf.decode_raw(image_raw, tf.float32)
features_train = np.random.sample((NUM_OF_IMAGES, IMAGE_WIDTH, IMAGE_WIDTH, COLOUR_CHANNELS)).astype(np.float32)
labels_train = np.random.randint(low=0, high=NUM_OF_CLASSES, size=NUM_OF_IMAGES)  # one random label for each image
features_eval = features_train[:200]  # use the first 200 images as evaluation data for simplicity.
labels_eval = labels_train[:200]
write_tfrecords_file("train.tfrecord", features_train, labels_train)  # normally split into files of a few GB each; see sketch below
write_tfrecords_file("eval.tfrecord", features_eval, labels_eval)  # this may take a while. Consider a progressbar

# The files are complete. Now define a model and use datasets to feed the data from the .tfrecord files into the model.
model = keras.Sequential([keras.layers.Flatten(input_shape=(256, 256, 1)),
                          keras.layers.Dense(128, activation='relu'),
                          keras.layers.Dense(10, activation='softmax')])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Check docs for parameters (compression, buffer size, thread count). Also www.tensorflow.org/guide/performance/datasets
train_dataset = tf.data.TFRecordDataset("train.tfrecord")  # specify a list (or dataset) of file names for large data
train_dataset = train_dataset.map(parse_example)  # parse tfrecords. Parameter num_parallel_calls may help performance.
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)

validation_dataset = tf.data.TFRecordDataset("eval.tfrecord")
validation_dataset = validation_dataset.map(parse_example).batch(64)

model.fit(train_dataset, epochs=3)
# evaluate the results
results = model.evaluate(validation_dataset)
print('\n\nvalidation loss, validation acc:', results)
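Regarding the comment about splitting large datasets across several .tfrecord files, here is a sketch of how that could look, reusing write_tfrecords_file and parse_example from above. The shard count and file naming are my own hypothetical choices, not part of the original solution.

# Hypothetical sketch: shard the training data into several .tfrecord files instead of one,
# so no single file grows too large and file-level shuffling becomes possible.
NUM_SHARDS = 4  # hypothetical choice; in practice aim for files of a few hundred MB to a few GB
shard_paths = []
for shard_id, (img_shard, label_shard) in enumerate(
        zip(np.array_split(features_train, NUM_SHARDS), np.array_split(labels_train, NUM_SHARDS))):
    path = "train-%03d.tfrecord" % shard_id
    write_tfrecords_file(path, img_shard, label_shard)
    shard_paths.append(path)

# TFRecordDataset accepts a list of filenames; shuffle and batch as before.
train_dataset = tf.data.TFRecordDataset(shard_paths).map(parse_example).shuffle(1024).batch(64)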
Note that it is tricky to use some_keras_model.fit(..., validation_data=some_dataset) with dataset objects. It can result in
TypeError: 'DatasetV1Adapter' object does not support indexing
This seems to be a bug (see github.com/tensorflow/tensorflow/issues/28995), supposedly fixed as of the tf-nightly build '1.15.0-dev20190808'; the official tutorial uses it as well, even though it does not work in most versions. A simple but dirty fix is to use verbose=0 (which only suppresses program output) and plot the validation results with tensorboard. See also Keras model.fit() with tf.dataset API + validation_data.
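For illustration, a sketch of that dirty fix, assuming the model and datasets from the example above; the "logs" directory name is my own hypothetical choice.

# Sketch of the verbose=0 + tensorboard workaround described above (assumption, untested here).
tensorboard_callback = keras.callbacks.TensorBoard(log_dir="logs")  # "logs" is a hypothetical log_dir
model.fit(train_dataset,
          epochs=3,
          verbose=0,  # suppress console output; per the note above, this sidesteps the TypeError
          validation_data=validation_dataset,
          callbacks=[tensorboard_callback])
# inspect the training/validation curves with: tensorboard --logdir logs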