Why do TensorFlow modules take up all the GPU memory? [duplicate]
【Posted】: 2021-03-13 04:54:30

【Question】: I'm training a U-Net on TensorFlow 2. When I load the model it takes up almost all of the GPU's memory (22 GB out of 26 GB), even though at 190 million parameters my model should need at most about 1.5 GB. To understand the problem I tried loading a model without any layers, and to my surprise it still took up the same amount of memory. The code for my model is attached below:
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, UpSampling2D, Activation, Add

x = tf.keras.layers.Input(shape=(256,256,1))
model = Sequential(
[
Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal'),
Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal'),
Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal'),
MaxPooling2D(pool_size=(2, 2)),
Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal'),
Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal'),
Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal'),
MaxPooling2D(pool_size=(2, 2)),
Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal'),
Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal'),
Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal'),
MaxPooling2D(pool_size=(2, 2)),
Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal'),
Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal'),
Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal'),
MaxPooling2D(pool_size=(2, 2)),
Conv2D(1024, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal'),
Conv2D(1024, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal'),
Conv2D(1024, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal'),
# Activation('relu')(Add()([conv5_0, conv5_2])),  # broken as posted: conv5_0/conv5_2 are never defined, and a Sequential model cannot express a skip connection
MaxPooling2D(pool_size=(2, 2)),
Conv2D(2048, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal'),
Conv2D(2048, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal'),
Conv2D(2048, 3, padding = 'same', kernel_initializer = 'he_normal'),
UpSampling2D(size = (2,2)),
Conv2D(1024, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal'),
Conv2D(1024, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal'),
Conv2D(1024, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal'),
Conv2D(1024, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal'),
UpSampling2D(size = (2,2)),
Conv2D(512, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal'),
Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal'),
Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal'),
Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal'),
UpSampling2D(size = (2,2)),
Conv2D(256, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal'),
Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal'),
Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal'),
Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal'),
UpSampling2D(size = (2,2)),
Conv2D(128, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal'),
Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal'),
Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal'),
Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal'),
UpSampling2D(size = (2,2)),
Conv2D(64, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal'),
Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal'),
Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal'),
Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal'),
Conv2D(1, 3, activation = 'linear', padding = 'same', kernel_initializer = 'he_normal')
])
y = model(x)
I commented out all of the layers and it still takes up 22 GB. I'm running the code in a Jupyter notebook. I thought that adding tf.compat.v1.GPUOptions(per_process_gpu_memory_fraction=x) at the top of my notebook would solve the problem, but it didn't. My goal is to run several scripts on the GPU at the same time so that I use my time more efficiently. Any help would be greatly appreciated. Thanks.
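For what it's worth, a GPUOptions object has no effect on its own: it only changes the allocator's behavior if it is attached to a session config before TensorFlow first initializes the GPU. A minimal sketch of that wiring under TF 2's compat layer (the 0.3 fraction is just an illustrative value):

import tensorflow as tf

# The options must travel inside a ConfigProto into a Session;
# constructing GPUOptions alone changes nothing.
gpu_options = tf.compat.v1.GPUOptions(per_process_gpu_memory_fraction=0.3)
config = tf.compat.v1.ConfigProto(gpu_options=gpu_options)
sess = tf.compat.v1.Session(config=config)
tf.compat.v1.keras.backend.set_session(sess)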
Note: I just noticed that this happens not only with this code but with any other TensorFlow module as well. For example, at one point in my code I call tf.signal.ifft2d before loading the model, and it takes up almost the same amount of memory as the model does. How do I solve this?
【Comments】:

***.com/questions/34199233/…

【Answer 1】: You can have TensorFlow allocate GPU memory on demand like this:
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session  # standalone Keras on TF 1.x

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # grab GPU memory as needed instead of all at once
sess = tf.Session(config=config)
set_session(sess)
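The snippet above is written for TF 1.x with standalone Keras. Assuming TF 2, which the question is using, a sketch of the equivalent on-demand allocation looks like this; it must run before any op touches the GPU, e.g. in the first cell of the notebook:

import tensorflow as tf

# Enable memory growth on every GPU before TensorFlow initializes them.
for gpu in tf.config.experimental.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)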
【Answer 2】: You need to limit GPU memory growth; you can find example code on the TensorFlow page. I've copied the snippet here as well:
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    # Restrict TensorFlow to only use the first GPU
    try:
        tf.config.experimental.set_visible_devices(gpus[0], 'GPU')
        logical_gpus = tf.config.experimental.list_logical_devices('GPU')
        print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPU")
    except RuntimeError as e:
        # Visible devices must be set before GPUs have been initialized
        print(e)
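Note that this snippet only restricts which GPU is visible; it does not cap how much memory the process takes on it. For the goal of running several scripts side by side, the same TensorFlow guide also shows how to place a hard limit on GPU memory. A sketch, where the 2048 MB limit is a placeholder to tune:

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    try:
        # Cap this process at roughly 2 GB on the first GPU (value is illustrative).
        tf.config.experimental.set_virtual_device_configuration(
            gpus[0],
            [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=2048)])
    except RuntimeError as e:
        # Virtual devices must be configured before GPUs are initialized
        print(e)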
I ran into the same problem on a few projects, and I noticed that GPU memory becomes an issue when the batch size is large. Try making the batch size as small as possible; when the model is complex, I start with a batch size of 1.
【Answer 3】: More discussion can be found at https://www.tensorflow.org/guide/gpu; you should read it.