TensorFlow model: how to identify input/output node names from a protobuf file?
Posted: 2019-05-15 22:28:33

Question: I am training an RL (deepq) model with OpenAI Baselines. The input is 19 features:
observation_space = spaces.Box(0, 100, (19, 1), dtype=np.float_)
and the output is:
action_space =spaces.Discrete(6)
All model variables are listed with:
for i, var in enumerate(saver._var_list):
    print('Var {}: {}'.format(i, var))
which produces output like:
Var 0: <tf.Variable 'deepq/eps:0' shape=() dtype=float32_ref>
Var 1: <tf.Variable 'deepq/q_func/mlp_fc0/w:0' shape=(19, 64) dtype=float32_ref>
Var 2: <tf.Variable 'deepq/q_func/mlp_fc0/b:0' shape=(64,) dtype=float32_ref>
Var 3: <tf.Variable 'deepq/q_func/mlp_fc1/w:0' shape=(64, 64) dtype=float32_ref>
Var 4: <tf.Variable 'deepq/q_func/mlp_fc1/b:0' shape=(64,) dtype=float32_ref>
Var 5: <tf.Variable 'deepq/q_func/action_value/fully_connected/weights:0' shape=(64, 256) dtype=float32_ref>
Var 6: <tf.Variable 'deepq/q_func/action_value/fully_connected/biases:0' shape=(256,) dtype=float32_ref>
Var 7: <tf.Variable 'deepq/q_func/action_value/fully_connected_1/weights:0' shape=(256, 6) dtype=float32_ref>
Var 8: <tf.Variable 'deepq/q_func/action_value/fully_connected_1/biases:0' shape=(6,) dtype=float32_ref>
Var 9: <tf.Variable 'deepq/q_func/state_value/fully_connected/weights:0' shape=(64, 256) dtype=float32_ref>
Var 10: <tf.Variable 'deepq/q_func/state_value/fully_connected/biases:0' shape=(256,) dtype=float32_ref>
Var 11: <tf.Variable 'deepq/q_func/state_value/fully_connected_1/weights:0' shape=(256, 1) dtype=float32_ref>
Var 12: <tf.Variable 'deepq/q_func/state_value/fully_connected_1/biases:0' shape=(1,) dtype=float32_ref>
Var 13: <tf.Variable 'deepq/target_q_func/mlp_fc0/w:0' shape=(19, 64) dtype=float32_ref>
Var 14: <tf.Variable 'deepq/target_q_func/mlp_fc0/b:0' shape=(64,) dtype=float32_ref>
Var 15: <tf.Variable 'deepq/target_q_func/mlp_fc1/w:0' shape=(64, 64) dtype=float32_ref>
Var 16: <tf.Variable 'deepq/target_q_func/mlp_fc1/b:0' shape=(64,) dtype=float32_ref>
Var 17: <tf.Variable 'deepq/target_q_func/action_value/fully_connected/weights:0' shape=(64, 256) dtype=float32_ref>
Var 18: <tf.Variable 'deepq/target_q_func/action_value/fully_connected/biases:0' shape=(256,) dtype=float32_ref>
Var 19: <tf.Variable 'deepq/target_q_func/action_value/fully_connected_1/weights:0' shape=(256, 6) dtype=float32_ref>
Var 20: <tf.Variable 'deepq/target_q_func/action_value/fully_connected_1/biases:0' shape=(6,) dtype=float32_ref>
Var 21: <tf.Variable 'deepq/target_q_func/state_value/fully_connected/weights:0' shape=(64, 256) dtype=float32_ref>
Var 22: <tf.Variable 'deepq/target_q_func/state_value/fully_connected/biases:0' shape=(256,) dtype=float32_ref>
Var 23: <tf.Variable 'deepq/target_q_func/state_value/fully_connected_1/weights:0' shape=(256, 1) dtype=float32_ref>
Var 24: <tf.Variable 'deepq/target_q_func/state_value/fully_connected_1/biases:0' shape=(1,) dtype=float32_ref>
Var 25: <tf.Variable 'deepq_1/beta1_power:0' shape=() dtype=float32_ref>
Var 26: <tf.Variable 'deepq_1/beta2_power:0' shape=() dtype=float32_ref>
Var 27: <tf.Variable 'deepq/deepq/q_func/mlp_fc0/w/Adam:0' shape=(19, 64) dtype=float32_ref>
Var 28: <tf.Variable 'deepq/deepq/q_func/mlp_fc0/w/Adam_1:0' shape=(19, 64) dtype=float32_ref>
Var 29: <tf.Variable 'deepq/deepq/q_func/mlp_fc0/b/Adam:0' shape=(64,) dtype=float32_ref>
Var 30: <tf.Variable 'deepq/deepq/q_func/mlp_fc0/b/Adam_1:0' shape=(64,) dtype=float32_ref>
Var 31: <tf.Variable 'deepq/deepq/q_func/mlp_fc1/w/Adam:0' shape=(64, 64) dtype=float32_ref>
Var 32: <tf.Variable 'deepq/deepq/q_func/mlp_fc1/w/Adam_1:0' shape=(64, 64) dtype=float32_ref>
Var 33: <tf.Variable 'deepq/deepq/q_func/mlp_fc1/b/Adam:0' shape=(64,) dtype=float32_ref>
Var 34: <tf.Variable 'deepq/deepq/q_func/mlp_fc1/b/Adam_1:0' shape=(64,) dtype=float32_ref>
Var 35: <tf.Variable 'deepq/deepq/q_func/action_value/fully_connected/weights/Adam:0' shape=(64, 256) dtype=float32_ref>
Var 36: <tf.Variable 'deepq/deepq/q_func/action_value/fully_connected/weights/Adam_1:0' shape=(64, 256) dtype=float32_ref>
Var 37: <tf.Variable 'deepq/deepq/q_func/action_value/fully_connected/biases/Adam:0' shape=(256,) dtype=float32_ref>
Var 38: <tf.Variable 'deepq/deepq/q_func/action_value/fully_connected/biases/Adam_1:0' shape=(256,) dtype=float32_ref>
Var 39: <tf.Variable 'deepq/deepq/q_func/action_value/fully_connected_1/weights/Adam:0' shape=(256, 6) dtype=float32_ref>
Var 40: <tf.Variable 'deepq/deepq/q_func/action_value/fully_connected_1/weights/Adam_1:0' shape=(256, 6) dtype=float32_ref>
Var 41: <tf.Variable 'deepq/deepq/q_func/action_value/fully_connected_1/biases/Adam:0' shape=(6,) dtype=float32_ref>
Var 42: <tf.Variable 'deepq/deepq/q_func/action_value/fully_connected_1/biases/Adam_1:0' shape=(6,) dtype=float32_ref>
Var 43: <tf.Variable 'deepq/deepq/q_func/state_value/fully_connected/weights/Adam:0' shape=(64, 256) dtype=float32_ref>
Var 44: <tf.Variable 'deepq/deepq/q_func/state_value/fully_connected/weights/Adam_1:0' shape=(64, 256) dtype=float32_ref>
Var 45: <tf.Variable 'deepq/deepq/q_func/state_value/fully_connected/biases/Adam:0' shape=(256,) dtype=float32_ref>
Var 46: <tf.Variable 'deepq/deepq/q_func/state_value/fully_connected/biases/Adam_1:0' shape=(256,) dtype=float32_ref>
Var 47: <tf.Variable 'deepq/deepq/q_func/state_value/fully_connected_1/weights/Adam:0' shape=(256, 1) dtype=float32_ref>
Var 48: <tf.Variable 'deepq/deepq/q_func/state_value/fully_connected_1/weights/Adam_1:0' shape=(256, 1) dtype=float32_ref>
Var 49: <tf.Variable 'deepq/deepq/q_func/state_value/fully_connected_1/biases/Adam:0' shape=(1,) dtype=float32_ref>
Var 50: <tf.Variable 'deepq/deepq/q_func/state_value/fully_connected_1/biases/Adam_1:0' shape=(1,) dtype=float32_ref>
The model graph is saved as a protobuf text file using
tf.train.write_graph(sess.graph_def, './model', 'my_deepq.pbtxt')
The resulting .pbtxt file is shown below. How can I identify the names of the input node (layer) and the output node (layer) from this protobuf file? Thanks!
node {
  name: "deepq/observation"
  op: "Placeholder"
  attr {
    key: "dtype"
    value {
      type: DT_DOUBLE
    }
  }
  attr {
    key: "shape"
    value {
      shape {
        dim {
          size: -1
        }
        dim {
          size: 19
        }
        dim {
          size: 1
        }
      }
    }
  }
}
node {
  name: "deepq/ToFloat"
  op: "Cast"
  input: "deepq/observation"
  attr {
    key: "DstT"
    value {
      type: DT_FLOAT
    }
  }
  attr {
    key: "SrcT"
    value {
      type: DT_DOUBLE
    }
  }
  attr {
    key: "Truncate"
    value {
      b: false
    }
  }
}
:
:
node {
  name: "save/Assign_49"
  op: "Assign"
  input: "deepq_1/beta1_power"
  input: "save/RestoreV2:49"
  attr {
    key: "T"
    value {
      type: DT_FLOAT
    }
  }
  attr {
    key: "_class"
    value {
      list {
        s: "loc:@deepq/q_func/action_value/fully_connected/biases"
      }
    }
  }
  attr {
    key: "use_locking"
    value {
      b: true
    }
  }
  attr {
    key: "validate_shape"
    value {
      b: true
    }
  }
}
node {
  name: "save/Assign_50"
  op: "Assign"
  input: "deepq_1/beta2_power"
  input: "save/RestoreV2:50"
  attr {
    key: "T"
    value {
      type: DT_FLOAT
    }
  }
  attr {
    key: "_class"
    value {
      list {
        s: "loc:@deepq/q_func/action_value/fully_connected/biases"
      }
    }
  }
  attr {
    key: "use_locking"
    value {
      b: true
    }
  }
  attr {
    key: "validate_shape"
    value {
      b: true
    }
  }
}
:
:
node {
  name: "deepq_1/group_deps_1"
  op: "NoOp"
  input: "^deepq_1/Adam"
}
node {
  name: "deepq_1/group_deps_2"
  op: "NoOp"
  input: "^deepq_1/group_deps"
}
node {
  name: "deepq_1/group_deps_3"
  op: "NoOp"
}
node {
  name: "init"
  op: "NoOp"
  input: "^deepq/deepq/q_func/action_value/fully_connected/biases/Adam/Assign"
  input: "^deepq/deepq/q_func/action_value/fully_connected/biases/Adam_1/Assign"
  input: "^deepq/deepq/q_func/action_value/fully_connected/weights/Adam/Assign"
  input: "^deepq/deepq/q_func/action_value/fully_connected/weights/Adam_1/Assign"
  input: "^deepq/deepq/q_func/action_value/fully_connected_1/biases/Adam/Assign"
  input: "^deepq/deepq/q_func/action_value/fully_connected_1/biases/Adam_1/Assign"
  input: "^deepq/deepq/q_func/action_value/fully_connected_1/weights/Adam/Assign"
  input: "^deepq/deepq/q_func/action_value/fully_connected_1/weights/Adam_1/Assign"
  input: "^deepq/deepq/q_func/mlp_fc0/b/Adam/Assign"
  input: "^deepq/deepq/q_func/mlp_fc0/b/Adam_1/Assign"
  input: "^deepq/deepq/q_func/mlp_fc0/w/Adam/Assign"
  input: "^deepq/deepq/q_func/mlp_fc0/w/Adam_1/Assign"
  input: "^deepq/deepq/q_func/mlp_fc1/b/Adam/Assign"
  input: "^deepq/deepq/q_func/mlp_fc1/b/Adam_1/Assign"
  input: "^deepq/deepq/q_func/mlp_fc1/w/Adam/Assign"
  input: "^deepq/deepq/q_func/mlp_fc1/w/Adam_1/Assign"
  input: "^deepq/deepq/q_func/state_value/fully_connected/biases/Adam/Assign"
  input: "^deepq/deepq/q_func/state_value/fully_connected/biases/Adam_1/Assign"
  input: "^deepq/deepq/q_func/state_value/fully_connected/weights/Adam/Assign"
  input: "^deepq/deepq/q_func/state_value/fully_connected/weights/Adam_1/Assign"
  input: "^deepq/deepq/q_func/state_value/fully_connected_1/biases/Adam/Assign"
  input: "^deepq/deepq/q_func/state_value/fully_connected_1/biases/Adam_1/Assign"
  input: "^deepq/deepq/q_func/state_value/fully_connected_1/weights/Adam/Assign"
  input: "^deepq/deepq/q_func/state_value/fully_connected_1/weights/Adam_1/Assign"
  input: "^deepq/eps/Assign"
  input: "^deepq/q_func/action_value/fully_connected/biases/Assign"
  input: "^deepq/q_func/action_value/fully_connected/weights/Assign"
  input: "^deepq/q_func/action_value/fully_connected_1/biases/Assign"
  input: "^deepq/q_func/action_value/fully_connected_1/weights/Assign"
  input: "^deepq/q_func/mlp_fc0/b/Assign"
  input: "^deepq/q_func/mlp_fc0/w/Assign"
  input: "^deepq/q_func/mlp_fc1/b/Assign"
  input: "^deepq/q_func/mlp_fc1/w/Assign"
  input: "^deepq/q_func/state_value/fully_connected/biases/Assign"
  input: "^deepq/q_func/state_value/fully_connected/weights/Assign"
  input: "^deepq/q_func/state_value/fully_connected_1/biases/Assign"
  input: "^deepq/q_func/state_value/fully_connected_1/weights/Assign"
  input: "^deepq/target_q_func/action_value/fully_connected/biases/Assign"
  input: "^deepq/target_q_func/action_value/fully_connected/weights/Assign"
  input: "^deepq/target_q_func/action_value/fully_connected_1/biases/Assign"
  input: "^deepq/target_q_func/action_value/fully_connected_1/weights/Assign"
  input: "^deepq/target_q_func/mlp_fc0/b/Assign"
  input: "^deepq/target_q_func/mlp_fc0/w/Assign"
  input: "^deepq/target_q_func/mlp_fc1/b/Assign"
  input: "^deepq/target_q_func/mlp_fc1/w/Assign"
  input: "^deepq/target_q_func/state_value/fully_connected/biases/Assign"
  input: "^deepq/target_q_func/state_value/fully_connected/weights/Assign"
  input: "^deepq/target_q_func/state_value/fully_connected_1/biases/Assign"
  input: "^deepq/target_q_func/state_value/fully_connected_1/weights/Assign"
  input: "^deepq_1/beta1_power/Assign"
  input: "^deepq_1/beta2_power/Assign"
}
node {
  name: "init_1"
  op: "NoOp"
  input: "^deepq/deepq/q_func/action_value/fully_connected/biases/Adam/Assign"
  input: "^deepq/deepq/q_func/action_value/fully_connected/biases/Adam_1/Assign"
  input: "^deepq/deepq/q_func/action_value/fully_connected/weights/Adam/Assign"
  input: "^deepq/deepq/q_func/action_value/fully_connected/weights/Adam_1/Assign"
  input: "^deepq/deepq/q_func/action_value/fully_connected_1/biases/Adam/Assign"
  input: "^deepq/deepq/q_func/action_value/fully_connected_1/biases/Adam_1/Assign"
  input: "^deepq/deepq/q_func/action_value/fully_connected_1/weights/Adam/Assign"
  input: "^deepq/deepq/q_func/action_value/fully_connected_1/weights/Adam_1/Assign"
  input: "^deepq/deepq/q_func/mlp_fc0/b/Adam/Assign"
  input: "^deepq/deepq/q_func/mlp_fc0/b/Adam_1/Assign"
  input: "^deepq/deepq/q_func/mlp_fc0/w/Adam/Assign"
  input: "^deepq/deepq/q_func/mlp_fc0/w/Adam_1/Assign"
  input: "^deepq/deepq/q_func/mlp_fc1/b/Adam/Assign"
  input: "^deepq/deepq/q_func/mlp_fc1/b/Adam_1/Assign"
  input: "^deepq/deepq/q_func/mlp_fc1/w/Adam/Assign"
  input: "^deepq/deepq/q_func/mlp_fc1/w/Adam_1/Assign"
  input: "^deepq/deepq/q_func/state_value/fully_connected/biases/Adam/Assign"
  input: "^deepq/deepq/q_func/state_value/fully_connected/biases/Adam_1/Assign"
  input: "^deepq/deepq/q_func/state_value/fully_connected/weights/Adam/Assign"
  input: "^deepq/deepq/q_func/state_value/fully_connected/weights/Adam_1/Assign"
  input: "^deepq/deepq/q_func/state_value/fully_connected_1/biases/Adam/Assign"
  input: "^deepq/deepq/q_func/state_value/fully_connected_1/biases/Adam_1/Assign"
  input: "^deepq/deepq/q_func/state_value/fully_connected_1/weights/Adam/Assign"
  input: "^deepq/deepq/q_func/state_value/fully_connected_1/weights/Adam_1/Assign"
  input: "^deepq/eps/Assign"
  input: "^deepq/q_func/action_value/fully_connected/biases/Assign"
  input: "^deepq/q_func/action_value/fully_connected/weights/Assign"
  input: "^deepq/q_func/action_value/fully_connected_1/biases/Assign"
  input: "^deepq/q_func/action_value/fully_connected_1/weights/Assign"
  input: "^deepq/q_func/mlp_fc0/b/Assign"
  input: "^deepq/q_func/mlp_fc0/w/Assign"
  input: "^deepq/q_func/mlp_fc1/b/Assign"
  input: "^deepq/q_func/mlp_fc1/w/Assign"
  input: "^deepq/q_func/state_value/fully_connected/biases/Assign"
  input: "^deepq/q_func/state_value/fully_connected/weights/Assign"
  input: "^deepq/q_func/state_value/fully_connected_1/biases/Assign"
  input: "^deepq/q_func/state_value/fully_connected_1/weights/Assign"
  input: "^deepq/target_q_func/action_value/fully_connected/biases/Assign"
  input: "^deepq/target_q_func/action_value/fully_connected/weights/Assign"
  input: "^deepq/target_q_func/action_value/fully_connected_1/biases/Assign"
  input: "^deepq/target_q_func/action_value/fully_connected_1/weights/Assign"
  input: "^deepq/target_q_func/mlp_fc0/b/Assign"
  input: "^deepq/target_q_func/mlp_fc0/w/Assign"
  input: "^deepq/target_q_func/mlp_fc1/b/Assign"
  input: "^deepq/target_q_func/mlp_fc1/w/Assign"
  input: "^deepq/target_q_func/state_value/fully_connected/biases/Assign"
  input: "^deepq/target_q_func/state_value/fully_connected/weights/Assign"
  input: "^deepq/target_q_func/state_value/fully_connected_1/biases/Assign"
  input: "^deepq/target_q_func/state_value/fully_connected_1/weights/Assign"
  input: "^deepq_1/beta1_power/Assign"
  input: "^deepq_1/beta2_power/Assign"
}
node {
  name: "init_2"
  op: "NoOp"
}
node {
  name: "save/filename/input"
  op: "Const"
  attr {
    key: "dtype"
    value {
      type: DT_STRING
    }
  }
  attr {
    key: "value"
    value {
      tensor {
        dtype: DT_STRING
        tensor_shape {
        }
        string_val: "model"
      }
    }
  }
}
node {
  name: "save/filename"
  op: "PlaceholderWithDefault"
  input: "save/filename/input"
  attr {
    key: "dtype"
    value {
      type: DT_STRING
    }
  }
  attr {
    key: "shape"
    value {
      shape {
      }
    }
  }
}
node {
  name: "save/Const"
  op: "PlaceholderWithDefault"
  input: "save/filename"
  attr {
    key: "dtype"
    value {
      type: DT_STRING
    }
  }
  attr {
    key: "shape"
    value {
      shape {
      }
    }
  }
}
node {
  name: "save/SaveV2/tensor_names"
  op: "Const"
  attr {
    key: "dtype"
    value {
      type: DT_STRING
    }
  }
  attr {
    key: "value"
    value {
      tensor {
        dtype: DT_STRING
        tensor_shape {
          dim {
            size: 51
          }
        }
        string_val: "deepq/deepq/q_func/action_value/fully_connected/biases/Adam"
        string_val: "deepq/deepq/q_func/action_value/fully_connected/biases/Adam_1"
        string_val: "deepq/deepq/q_func/action_value/fully_connected/weights/Adam"
        string_val: "deepq/deepq/q_func/action_value/fully_connected/weights/Adam_1"
        string_val: "deepq/deepq/q_func/action_value/fully_connected_1/biases/Adam"
        string_val: "deepq/deepq/q_func/action_value/fully_connected_1/biases/Adam_1"
        string_val: "deepq/deepq/q_func/action_value/fully_connected_1/weights/Adam"
        string_val: "deepq/deepq/q_func/action_value/fully_connected_1/weights/Adam_1"
        string_val: "deepq/deepq/q_func/mlp_fc0/b/Adam"
        string_val: "deepq/deepq/q_func/mlp_fc0/b/Adam_1"
        string_val: "deepq/deepq/q_func/mlp_fc0/w/Adam"
        string_val: "deepq/deepq/q_func/mlp_fc0/w/Adam_1"
        string_val: "deepq/deepq/q_func/mlp_fc1/b/Adam"
        string_val: "deepq/deepq/q_func/mlp_fc1/b/Adam_1"
        string_val: "deepq/deepq/q_func/mlp_fc1/w/Adam"
        string_val: "deepq/deepq/q_func/mlp_fc1/w/Adam_1"
        string_val: "deepq/deepq/q_func/state_value/fully_connected/biases/Adam"
        string_val: "deepq/deepq/q_func/state_value/fully_connected/biases/Adam_1"
        string_val: "deepq/deepq/q_func/state_value/fully_connected/weights/Adam"
        string_val: "deepq/deepq/q_func/state_value/fully_connected/weights/Adam_1"
        string_val: "deepq/deepq/q_func/state_value/fully_connected_1/biases/Adam"
        string_val: "deepq/deepq/q_func/state_value/fully_connected_1/biases/Adam_1"
        string_val: "deepq/deepq/q_func/state_value/fully_connected_1/weights/Adam"
        string_val: "deepq/deepq/q_func/state_value/fully_connected_1/weights/Adam_1"
        string_val: "deepq/eps"
        string_val: "deepq/q_func/action_value/fully_connected/biases"
        string_val: "deepq/q_func/action_value/fully_connected/weights"
        string_val: "deepq/q_func/action_value/fully_connected_1/biases"
        string_val: "deepq/q_func/action_value/fully_connected_1/weights"
        string_val: "deepq/q_func/mlp_fc0/b"
        string_val: "deepq/q_func/mlp_fc0/w"
        string_val: "deepq/q_func/mlp_fc1/b"
        string_val: "deepq/q_func/mlp_fc1/w"
        string_val: "deepq/q_func/state_value/fully_connected/biases"
        string_val: "deepq/q_func/state_value/fully_connected/weights"
        string_val: "deepq/q_func/state_value/fully_connected_1/biases"
        string_val: "deepq/q_func/state_value/fully_connected_1/weights"
        string_val: "deepq/target_q_func/action_value/fully_connected/biases"
        string_val: "deepq/target_q_func/action_value/fully_connected/weights"
        string_val: "deepq/target_q_func/action_value/fully_connected_1/biases"
        string_val: "deepq/target_q_func/action_value/fully_connected_1/weights"
        string_val: "deepq/target_q_func/mlp_fc0/b"
        string_val: "deepq/target_q_func/mlp_fc0/w"
        string_val: "deepq/target_q_func/mlp_fc1/b"
        string_val: "deepq/target_q_func/mlp_fc1/w"
        string_val: "deepq/target_q_func/state_value/fully_connected/biases"
        string_val: "deepq/target_q_func/state_value/fully_connected/weights"
        string_val: "deepq/target_q_func/state_value/fully_connected_1/biases"
        string_val: "deepq/target_q_func/state_value/fully_connected_1/weights"
        string_val: "deepq_1/beta1_power"
        string_val: "deepq_1/beta2_power"
      }
    }
  }
}
node {
  name: "save/SaveV2/shape_and_slices"
  op: "Const"
  attr {
    key: "dtype"
    value {
      type: DT_STRING
    }
  }
  attr {
    key: "value"
    value {
      tensor {
        dtype: DT_STRING
        tensor_shape {
          dim {
            size: 51
          }
        }
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
        string_val: ""
      }
    }
  }
}
:
:
node {
  name: "save/restore_all"
  op: "NoOp"
  input: "^save/Assign"
  input: "^save/Assign_1"
  input: "^save/Assign_10"
  input: "^save/Assign_11"
  input: "^save/Assign_12"
  input: "^save/Assign_13"
  input: "^save/Assign_14"
  input: "^save/Assign_15"
  input: "^save/Assign_16"
  input: "^save/Assign_17"
  input: "^save/Assign_18"
  input: "^save/Assign_19"
  input: "^save/Assign_2"
  input: "^save/Assign_20"
  input: "^save/Assign_21"
  input: "^save/Assign_22"
  input: "^save/Assign_23"
  input: "^save/Assign_24"
  input: "^save/Assign_25"
  input: "^save/Assign_26"
  input: "^save/Assign_27"
  input: "^save/Assign_28"
  input: "^save/Assign_29"
  input: "^save/Assign_3"
  input: "^save/Assign_30"
  input: "^save/Assign_31"
  input: "^save/Assign_32"
  input: "^save/Assign_33"
  input: "^save/Assign_34"
  input: "^save/Assign_35"
  input: "^save/Assign_36"
  input: "^save/Assign_37"
  input: "^save/Assign_38"
  input: "^save/Assign_39"
  input: "^save/Assign_4"
  input: "^save/Assign_40"
  input: "^save/Assign_41"
  input: "^save/Assign_42"
  input: "^save/Assign_43"
  input: "^save/Assign_44"
  input: "^save/Assign_45"
  input: "^save/Assign_46"
  input: "^save/Assign_47"
  input: "^save/Assign_48"
  input: "^save/Assign_49"
  input: "^save/Assign_5"
  input: "^save/Assign_50"
  input: "^save/Assign_6"
  input: "^save/Assign_7"
  input: "^save/Assign_8"
  input: "^save/Assign_9"
}
versions {
  producer: 27
}
Answer 1: I cannot give you a complete answer.
However, to identify the input nodes you would normally search for nodes whose op is "Placeholder". There may be more than one, especially when a complex optimizer or dropout layers are used.
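A minimal sketch of that search, assuming TensorFlow 1.x and the ./model/my_deepq.pbtxt path produced by the tf.train.write_graph call in the question (the file path and the decision to ignore saver placeholders are assumptions):

import tensorflow as tf
from google.protobuf import text_format

# Parse the text-format GraphDef written by tf.train.write_graph (as_text=True).
graph_def = tf.GraphDef()  # use tf.compat.v1.GraphDef() on TensorFlow 2.x
with open('./model/my_deepq.pbtxt') as f:
    text_format.Merge(f.read(), graph_def)

# Candidate model inputs: nodes with the "Placeholder" op.
# PlaceholderWithDefault nodes such as save/filename belong to the saver, not the model.
for node in graph_def.node:
    if node.op == 'Placeholder':
        print(node.name)
        print(node.attr['shape'].shape)  # TensorShapeProto, here -1 x 19 x 1

For the graph above this prints deepq/observation, which matches the (None, 19, 1) observation space.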
Outputs are trickier still: essentially every node is an output. I can recommend two options here: look for an op carrying the name you would expect for the output, e.g. softmax, or parse the whole graph definition and find the terminal nodes, i.e. nodes that are not used as input by any other op (see the sketch below).
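A rough sketch of that second option, reusing the graph_def parsed above; note that in a training graph like this one it will also report optimizer and saver ops (init, save/restore_all, ...), so the resulting list still needs a manual pass:

def terminal_nodes(graph_def):
    # Collect every node name that appears as an input (regular or control) of another node.
    consumed = set()
    for node in graph_def.node:
        for inp in node.input:
            # Strip the control-dependency prefix '^' and any output index ':n'.
            consumed.add(inp.lstrip('^').split(':')[0])
    # Terminal nodes are never consumed by any other node.
    return [node.name for node in graph_def.node if node.name not in consumed]

print(terminal_nodes(graph_def))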
Another option is to load the graph, save a checkpoint, and study it in TensorBoard.
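For reference, one way to get the graph into TensorBoard without retraining, again reusing graph_def from above; the ./tb_logs directory is an arbitrary choice and the TF 1.x summary API is assumed:

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')
    # Dump the graph for TensorBoard, then run: tensorboard --logdir ./tb_logs
    tf.summary.FileWriter('./tb_logs', graph).close()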
I am not aware of a programmatic tool that gives a good, queryable representation of a tf.Graph from Python.
If you are willing to generate a .pb file, this answer may help: Given a tensor flow model graph, how to find the input node and output node names
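A binary .pb of the same graph can be written with the same call as in the question; this sketch assumes the training sess is still in scope:

# as_text=False writes the GraphDef in binary protobuf form instead of .pbtxt.
tf.train.write_graph(sess.graph_def, './model', 'my_deepq.pb', as_text=False)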