How to suppress verbose TensorFlow logging? [duplicate]
Asked: 2016-10-30 14:48:31

Question: I'm using nosetests to unit-test my TensorFlow code, but it produces so much verbose output that the results are useless.
The following test

    import unittest
    import tensorflow as tf

    class MyTest(unittest.TestCase):
        def test_creation(self):
            self.assertEquals(True, False)

produces a lot of useless logging when run with nosetests:
FAIL: test_creation (tests.test_tf.MyTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/cebrian/GIT/thesis-nilm/code/deepmodels/tests/test_tf.py", line 10, in test_creation
self.assertEquals(True, False)
AssertionError: True != False
-------------------- >> begin captured logging << --------------------
tensorflow: Level 1: Registering Const (<function _ConstantShape at 0x7f4379131c80>) in shape functions.
tensorflow: Level 1: Registering Assert (<function no_outputs at 0x7f43791319b0>) in shape functions.
tensorflow: Level 1: Registering Print (<function _PrintGrad at 0x7f4378effd70>) in gradient.
tensorflow: Level 1: Registering Print (<function unchanged_shape at 0x7f4379131320>) in shape functions.
tensorflow: Level 1: Registering HistogramAccumulatorSummary (None) in gradient.
tensorflow: Level 1: Registering HistogramSummary (None) in gradient.
tensorflow: Level 1: Registering ImageSummary (None) in gradient.
tensorflow: Level 1: Registering Audiosummary (None) in gradient.
tensorflow: Level 1: Registering MergeSummary (None) in gradient.
tensorflow: Level 1: Registering ScalarSummary (None) in gradient.
tensorflow: Level 1: Registering ScalarSummary (<function _ScalarShape at 0x7f4378f042a8>) in shape functions.
tensorflow: Level 1: Registering MergeSummary (<function _ScalarShape at 0x7f4378f042a8>) in shape functions.
tensorflow: Level 1: Registering AudioSummary (<function _ScalarShape at 0x7f4378f042a8>) in shape functions.
tensorflow: Level 1: Registering ImageSummary (<function _ScalarShape at 0x7f4378f042a8>) in shape functions.
tensorflow: Level 1: Registering HistogramSummary (<function _ScalarShape at 0x7f4378f042a8>) in shape functions.
tensorflow: Level 1: Registering HistogramAccumulatorSummary (<function _ScalarShape at 0x7f4378f042a8>) in shape functions.
tensorflow: Level 1: Registering Pack (<function _PackShape at 0x7f4378f047d0>) in shape functions.
tensorflow: Level 1: Registering Unpack (<function _UnpackShape at 0x7f4378f048c0>) in shape functions.
tensorflow: Level 1: Registering Concat (<function _ConcatShape at 0x7f4378f04938>) in shape functions.
tensorflow: Level 1: Registering ConcatOffset (<function _ConcatOffsetShape at 0x7f4378f049b0>) in shape functions.
......
whereas using TensorFlow from an ipython console does not seem nearly as verbose:
$ ipython
Python 2.7.11+ (default, Apr 17 2016, 14:00:29)
Type "copyright", "credits" or "license" for more information.
IPython 4.2.0 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
In [1]: import tensorflow as tf
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcuda.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcurand.so locally
In [2]:
How can I suppress the logging above when running nosetests?
Comments:

Another solution: ***.com/questions/43337601/…

Answer 1:

Update for 2.0 (October 8, 2019)
Setting TF_CPP_MIN_LOG_LEVEL should still work (see the v0.12+ update below), but there is currently an open problem with it (see issue #31870). If setting TF_CPP_MIN_LOG_LEVEL does not work for you (again, see below), try the following to set the log level:
    import tensorflow as tf
    tf.get_logger().setLevel('INFO')
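Under the hood, tf.get_logger() returns the standard Python logger registered under the name 'tensorflow', so the same filtering can be demonstrated with the stdlib logging module alone; a minimal sketch that does not require importing TensorFlow:

```python
import logging

# TensorFlow's Python-side messages go through the standard logger
# named 'tensorflow'; raising its level filters them just like
# tf.get_logger().setLevel(...) would.
tf_logger = logging.getLogger('tensorflow')
tf_logger.setLevel(logging.ERROR)

# Records below ERROR are now rejected by the level check.
print(tf_logger.isEnabledFor(logging.INFO))   # → False
print(tf_logger.isEnabledFor(logging.ERROR))  # → True
```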
Also see the documentation for tf.autograph.set_verbosity, which sets the verbosity of autograph log messages. For example:
    # Can also be set using the AUTOGRAPH_VERBOSITY environment variable
    tf.autograph.set_verbosity(1)
v0.12+ update (5/20/17), works through TF 2.0+:

In TensorFlow 0.12+, per this issue, you can now control logging via the environment variable TF_CPP_MIN_LOG_LEVEL; it defaults to 0 (all logs shown), but can be set to one of the following values under the Level column.
Level | Level for Humans | Level Description
-------|------------------|------------------------------------
0 | DEBUG | [Default] Print all messages
1 | INFO | Filter out INFO messages
2 | WARNING | Filter out INFO & WARNING messages
3 | ERROR | Filter out all messages
See the following generic OS example using Python:
    import os
    os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'  # or any '0', '1', '2'
    import tensorflow as tf
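The numeric values are easy to mix up, so a small helper (hypothetical, not part of TensorFlow) can translate the table's "Level for Humans" names into the string value TF_CPP_MIN_LOG_LEVEL expects; like the example above, it must run before import tensorflow:

```python
import os

# Hypothetical mapping of the table's "Level for Humans" column to the
# string value that TF_CPP_MIN_LOG_LEVEL expects.
_TF_LOG_LEVELS = {'DEBUG': '0', 'INFO': '1', 'WARNING': '2', 'ERROR': '3'}

def set_tf_min_log_level(name):
    """Set TF_CPP_MIN_LOG_LEVEL from a level name; call before importing TF."""
    value = _TF_LOG_LEVELS[name.upper()]
    os.environ['TF_CPP_MIN_LOG_LEVEL'] = value
    return value

print(set_tf_min_log_level('error'))  # → 3
```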
For thoroughness, you should also set the level of the Python tf_logging module, which is used in e.g. summary ops, TensorBoard, various estimators, and so on.
    # append to lines above
    tf.logging.set_verbosity(tf.logging.ERROR)  # or any DEBUG, INFO, WARN, ERROR, FATAL
For 1.14 you will receive warnings if you do not change to use the v1 API as follows:
    # append to lines above
    tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)  # or any DEBUG, INFO, WARN, ERROR, FATAL
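In a test suite these settings are typically applied once, before TensorFlow is first imported. Below is a sketch of such a helper; the function name is illustrative, and the ImportError guard merely makes it safe to call in environments where TensorFlow isn't installed:

```python
import os

def quiet_tensorflow():
    """Silence TensorFlow logging; call before TF is first imported."""
    # C++-level logs: hide INFO and WARNING (see the level table above).
    os.environ.setdefault('TF_CPP_MIN_LOG_LEVEL', '2')
    try:
        import tensorflow as tf
    except ImportError:
        return None  # nothing to configure in this environment
    # Python-level logs: keep only ERROR and above (TF 1.14+/2.x API).
    tf.get_logger().setLevel('ERROR')
    return tf

quiet_tensorflow()
```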
For prior versions of TensorFlow or TF-Learn logging (v0.11.x or lower):

See the page linked below for information on TensorFlow logging; with the new update, you can set the logging verbosity to DEBUG, INFO, WARN, ERROR, or FATAL. For example:
    tf.logging.set_verbosity(tf.logging.ERROR)
That page also contains monitors that can be used with TF-Learn models. Here is the page.

This doesn't block all logging, though (only TF-Learn). I have two solutions; one is a "technically correct" solution (Linux), and the other involves rebuilding TensorFlow.
    script -c 'python [FILENAME].py' | grep -v 'I tensorflow/'
For the other, see this answer, which involves modifying the source and rebuilding TensorFlow.
Comments:

On TF 2.0, what worked for me was: os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' followed by tf.get_logger().setLevel('ERROR')

Answer 2:

Running the tests with nosetests --nologcapture will disable the display of these logs.
More information about nosetests logging:
https://nose.readthedocs.io/en/latest/plugins/logcapture.html
Comments:

You saved me years :)

Answer 3:

Here is an example of doing this. Unfortunately, it requires modifying the source code and rebuilding. Here is a tracking bug for your convenience.