How do I get the output tensor from a specific layer?

I'm trying to figure out whether it is possible to get the output of a specific layer when using TensorFlow Lite in an Android environment. Currently I know that with 'interpreter.run()' we get the "standard" output, but that is not what I'm after.

Answer

@Simo I'll write up a workaround for this problem here: how to save just the part of the model you want to a .tflite file. Let me explain. Instead of doing the following, which saves the whole model:

# WHOLE MODEL
import tensorflow as tf

# Load the trained Keras model and convert the entire graph to TFLite.
keras_model = tf.keras.models.load_model('face_recog.weights.best.hdf5')
converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
tflite_save = converter.convert()
open("face_recog.tflite", "wb").write(tflite_save)

You can print the names of the layers in the Keras model:

print([layer.name for layer in keras_model.layers])
Output:
['anchor', 'positive', 'negative', 'model', 'lambda']

print([layer.name for layer in keras_model.get_layer('model').layers])
Output:  
['input_1', 'Conv1_pad', 'Conv1', 'bn_Conv1', 'Conv1_relu', 'expanded_conv_depthwise', 'expanded_conv_depthwise_BN', 'expanded_conv_depthwise_relu', 'expanded_conv_project', 'expanded_conv_project_BN', 'block_1_expand', 'block_1_expand_BN', 'block_1_expand_relu', 'block_1_pad', 'block_1_depthwise', 'block_1_depthwise_BN', 'block_1_depthwise_relu', 'block_1_project', 'block_1_project_BN', 'block_2_expand', 'block_2_expand_BN', 'block_2_expand_relu', 'block_2_depthwise', 'block_2_depthwise_BN', 'block_2_depthwise_relu', 'block_2_project', 'block_2_project_BN', 'block_2_add', 'block_3_expand', 'block_3_expand_BN', 'block_3_expand_relu', 'block_3_pad', 'block_3_depthwise', 'block_3_depthwise_BN', 'block_3_depthwise_relu', 'block_3_project', 'block_3_project_BN', 'block_4_expand', 'block_4_expand_BN', 'block_4_expand_relu', 'block_4_depthwise', 'block_4_depthwise_BN', 'block_4_depthwise_relu', 'block_4_project', 'block_4_project_BN', 'block_4_add', 'block_5_expand', 'block_5_expand_BN', 'block_5_expand_relu', 'block_5_depthwise', 'block_5_depthwise_BN', 'block_5_depthwise_relu', 'block_5_project', 'block_5_project_BN', 'block_5_add', 'block_6_expand', 'block_6_expand_BN', 'block_6_expand_relu', 'block_6_pad', 'block_6_depthwise', 'block_6_depthwise_BN', 'block_6_depthwise_relu', 'block_6_project', 'block_6_project_BN', 'block_7_expand', 'block_7_expand_BN', 'block_7_expand_relu', 'block_7_depthwise', 'block_7_depthwise_BN', 'block_7_depthwise_relu', 'block_7_project', 'block_7_project_BN', 'block_7_add', 'block_8_expand', 'block_8_expand_BN', 'block_8_expand_relu', 'block_8_depthwise', 'block_8_depthwise_BN', 'block_8_depthwise_relu', 'block_8_project', 'block_8_project_BN', 'block_8_add', 'block_9_expand', 'block_9_expand_BN', 'block_9_expand_relu', 'block_9_depthwise', 'block_9_depthwise_BN', 'block_9_depthwise_relu', 'block_9_project', 'block_9_project_BN', 'block_9_add', 'block_10_expand', 'block_10_expand_BN', 'block_10_expand_relu', 'block_10_depthwise', 'block_10_depthwise_BN', 'block_10_depthwise_relu', 'block_10_project', 'block_10_project_BN', 'block_11_expand', 'block_11_expand_BN', 'block_11_expand_relu', 'block_11_depthwise', 'block_11_depthwise_BN', 'block_11_depthwise_relu', 'block_11_project', 'block_11_project_BN', 'block_11_add', 'block_12_expand', 'block_12_expand_BN', 'block_12_expand_relu', 'block_12_depthwise', 'block_12_depthwise_BN', 'block_12_depthwise_relu', 'block_12_project', 'block_12_project_BN', 'block_12_add', 'block_13_expand', 'block_13_expand_BN', 'block_13_expand_relu', 'block_13_pad', 'block_13_depthwise', 'block_13_depthwise_BN', 'block_13_depthwise_relu', 'block_13_project', 'block_13_project_BN', 'block_14_expand', 'block_14_expand_BN', 'block_14_expand_relu', 'block_14_depthwise', 'block_14_depthwise_BN', 'block_14_depthwise_relu', 'block_14_project', 'block_14_project_BN', 'block_14_add', 'block_15_expand', 'block_15_expand_BN', 'block_15_expand_relu', 'block_15_depthwise', 'block_15_depthwise_BN', 'block_15_depthwise_relu', 'block_15_project', 'block_15_project_BN', 'block_15_add', 'block_16_expand', 'block_16_expand_BN', 'block_16_expand_relu', 'block_16_depthwise', 'block_16_depthwise_BN', 'block_16_depthwise_relu', 'block_16_project', 'block_16_project_BN', 'Conv_1', 'Conv_1_bn', 'out_relu', 'global_average_pooling2d', 'predictions', 'dense', 'dense_1']

Then you can extract any part of the model you want and save it to a .tflite file of its own:

# PART OF MODEL
# Convert only the nested 'model' sub-model; its last layer becomes the TFLite output.
keras_model = tf.keras.models.load_model('face_recog.weights.best.hdf5')
converter = tf.lite.TFLiteConverter.from_keras_model(keras_model.get_layer('model'))
tflite_save = converter.convert()
open("face_recog.tflite", "wb").write(tflite_save)

So, using the code above, the resulting .tflite file will have "input_1" as its input tensor and "dense_1" as its output tensor.

Then, inside Android, you feed the input of that specific sub-model "model" and get back an output with the corresponding shape, exactly as you see when printing the tensor details in Python:

interpreter = tf.lite.Interpreter('face_recog.tflite')
print(interpreter.get_output_details())
print(interpreter.get_tensor_details())
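
To sanity-check the extracted model in Python before moving to Android, you can also run it directly with the TFLite interpreter. The sketch below (not from the original answer) feeds a zero-filled dummy input whose shape and dtype are read from the model itself, so nothing about the network is hard-coded.

# Sketch: run the extracted .tflite model and read back the intermediate output.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter('face_recog.tflite')
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# Dummy input with the shape/dtype reported by the model ('input_1').
dummy_input = np.zeros(input_details['shape'], dtype=input_details['dtype'])
interpreter.set_tensor(input_details['index'], dummy_input)
interpreter.invoke()

# Output of the layer you extracted ('dense_1' in this example).
layer_output = interpreter.get_tensor(output_details['index'])
print(layer_output.shape)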

Android part:

// Initialize interpreter
@Throws(IOException::class)
private suspend fun initializeInterpreter(app: Application) = withContext(Dispatchers.IO) {
    // Load the TF Lite model from the assets folder and initialize the TF Lite Interpreter without NNAPI enabled.
    val assetManager = app.assets
    val model = loadModelFile(assetManager, "face_recog_model_layer.tflite")
    val options = Interpreter.Options()
    options.setUseNNAPI(false)
    interpreter = Interpreter(model, options)

    // Read the shape and data type of the input and output tensors.
    val imageTensorIndex = 0
    val imageShape: IntArray =
        interpreter.getInputTensor(imageTensorIndex).shape()
    Log.i("INPUT_TENSOR_WHOLE", Arrays.toString(imageShape))
    val imageDataType: DataType =
        interpreter.getInputTensor(imageTensorIndex).dataType()
    Log.i("INPUT_DATA_TYPE", imageDataType.toString())

    val probabilityTensorIndex = 0
    val probabilityShape: IntArray =
        interpreter.getOutputTensor(probabilityTensorIndex).shape()
    Log.i("OUTPUT_TENSOR_SHAPE", Arrays.toString(probabilityShape))
    val probabilityDataType: DataType =
        interpreter.getOutputTensor(probabilityTensorIndex).dataType()
    Log.i("OUTPUT_DATA_TYPE", probabilityDataType.toString())

    Log.i(TAG, "Initialized TFLite interpreter.")
}



@Throws(IOException::class)
private fun loadModelFile(assetManager: AssetManager, filename: String): MappedByteBuffer {
    // Memory-map the .tflite file from the app's assets.
    val fileDescriptor = assetManager.openFd(filename)
    val inputStream = FileInputStream(fileDescriptor.fileDescriptor)
    val fileChannel = inputStream.channel
    val startOffset = fileDescriptor.startOffset
    val declaredLength = fileDescriptor.declaredLength
    return fileChannel.map(FileChannel.MapMode.READ_ONLY, startOffset, declaredLength)
}
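
For completeness, here is a rough sketch (not from the original answer) of how running the extracted model on Android and reading back the layer's output might look. runInference is a hypothetical helper, and the ByteBuffer handling is an assumption; the output size is derived from the shape the interpreter reports.

// Sketch (assumption): run the extracted model and copy out the layer's output.
private fun runInference(input: ByteBuffer): FloatArray {
    // Size the output buffer from the shape the interpreter reports, e.g. [1, N].
    val outputShape = interpreter.getOutputTensor(0).shape()
    val outputSize = outputShape.reduce { acc, dim -> acc * dim }
    // 4 bytes per float32 value.
    val outputBuffer = ByteBuffer.allocateDirect(outputSize * 4).order(ByteOrder.nativeOrder())

    // Interpreter.run() fills the output buffer with the extracted layer's tensor.
    interpreter.run(input, outputBuffer)

    outputBuffer.rewind()
    val result = FloatArray(outputSize)
    outputBuffer.asFloatBuffer().get(result)
    return result
}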

I hope this helps. Of course, if you need anything else, tag me :)
