Installing MNN and Adding ONNX Operators

Posted by cc96


Installing MNN on Ubuntu

Reference: https://www.yuque.com/mnn/cn/build_linux

git clone https://gitee.com/mirrors/mnn.git
cd MNN/
cd schema/
./generate.sh && cd ../
mkdir build && cd build
sudo apt-get install libprotobuf-dev protobuf-compiler
cmake .. -DMNN_BUILD_CONVERTER=true # extra build option: also build the model conversion tool
make -j8

To build pymnn from source instead, the following steps suffice:

git clone https://gitee.com/mirrors/mnn.git
cd MNN/
cd schema/
./generate.sh && cd ../
cd pymnn/pip_package
python build_deps.py 
python build_wheel.py

Errors encountered when using pymnn

After building pymnn from source, running the mnn command fails with the error below, because libprotobuf cannot be found:

ImportError: /home/cc/miniconda3/envs/pymnn/lib/python3.6/site-packages/MNN-0.0.9-py3.6-linux-x86_64.egg/_tools.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZTIN6google8protobuf7MessageE

Fix: edit pymnn/pip_package/setup.py and give the absolute path to libprotobuf:

#tools_extra_link_args += ['-l:libprotobuf.a']
tools_extra_link_args += ['/usr/local/lib/libprotobuf.a']

The following error at build time is caused by a protobuf that was compiled without -fPIC:

cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
g++ -pthread -shared -B /home/cc/miniconda3/envs/pymnn/compiler_compat -L/home/cc/miniconda3/envs/pymnn/lib -Wl,-rpath=/home/cc/miniconda3/envs/pymnn/lib -Wl,--no-as-needed -Wl,--sysroot=/ build/temp.linux-x86_64-3.6/media/cc/0D2C17A90D2C17A9/git_clone/MNN/mnn/pymnn/src/MNNTools.o build/temp.linux-x86_64-3.6/media/cc/0D2C17A90D2C17A9/git_clone/MNN/mnn/tools/quantization/calibration.o build/temp.linux-x86_64-3.6/media/cc/0D2C17A90D2C17A9/git_clone/MNN/mnn/tools/quantization/TensorStatistic.o build/temp.linux-x86_64-3.6/media/cc/0D2C17A90D2C17A9/git_clone/MNN/mnn/tools/quantization/quantizeWeight.o build/temp.linux-x86_64-3.6/media/cc/0D2C17A90D2C17A9/git_clone/MNN/mnn/tools/quantization/Helper.o -L/media/cc/0D2C17A90D2C17A9/git_clone/MNN/mnn/pymnn_build -L/media/cc/0D2C17A90D2C17A9/git_clone/MNN/mnn/pymnn_build/tools/converter -o build/lib.linux-x86_64-3.6/_tools.cpython-36m-x86_64-linux-gnu.so -Wl,--whole-archive -lMNN -lMNNConvertDeps /usr/local/lib/libprotobuf.a -Wl,--no-whole-archive -lz -Wl,-rpath,$ORIGIN/lib
/home/cc/miniconda3/envs/pymnn/compiler_compat/ld: /usr/local/lib/libprotobuf.a(arena.o): relocation R_X86_64_TPOFF32 against symbol `_ZN6google8protobuf8internal9ArenaImpl13thread_cache_E' can not be used when making a shared object; recompile with -fPIC
/home/cc/miniconda3/envs/pymnn/compiler_compat/ld: /usr/local/lib/libprotobuf.a(time.o): relocation R_X86_64_PC32 against symbol `_ZN6google8protobuf8internal17DateTimeToSecondsERKNS1_8DateTimeEPl' can not be used when making a shared object; recompile with -fPIC
/home/cc/miniconda3/envs/pymnn/compiler_compat/ld: final link failed: bad value
collect2: error: ld returned 1 exit status
error: command 'g++' failed with exit status 1

Fix: rebuild protobuf. The following builds protobuf from source with -fPIC:

./autogen.sh
./configure CFLAGS="-fPIC"  CXXFLAGS="-fPIC"
make
make check
sudo make install
sudo ldconfig

The following error can also appear when using mnn:

import MNN.tools.mnn_fb.OpType as OpType
AttributeError: module 'MNN' has no attribute 'tools'

Fix: edit the offending file, changing

import MNN.tools.mnn_fb.OpType as OpType
import MNN.tools.utils.getkey as GetKey

to the following:

import sys
sys.path.append("/home/cc/miniconda3/envs/pymnn/lib/python3.6/site-packages/MNN/tools")
import mnn_fb.OpType as OpType
import utils.getkey as GetKey

Adding a custom operator to MNN

Reference: https://www.yuque.com/mnn/cn/customize_op

1. Add the model description

If the operator being added is not in MNN's operator list, the model description (schema) must be extended. After modifying the schema, rerun the generate script to regenerate the schema header files.

  • Add the operator type

Append the operator's name to the OpType list in schema/default/MNN.fbs. (No new operator type was added here; the existing OpType_TanH, OpType_MatMul, and OpType_Reduction are reused.)

  • Add the operator parameter description

  1. Add "TanHParam" to the OpParameter list in schema/default/MNN.fbs:
union OpParameter {
    TanHParam,
    QuantizedAdd,
    ArgMax,
  2. Add the parameter description: add TanHParam's parameter description in TensorflowOp.fbs, and add NORML2 to the ReductionType enum:
enum TanHType : byte{
    TanH=0,
    Tanh=1,
}

table TanHParam{
    operation:TanHType;
}
enum ReductionType : byte{
    SUM = 0,
    ASUM = 1,
    SUMSQ = 2,
    MEAN = 3,
    MAXIMUM = 4,
    MINIMUM = 5,
    PROD = 6,
    ANY = 7,
    ALL = 8,
    NORML2 = 9,
}
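After rerunning schema/generate.sh, flatbuffers regenerates the schema headers with the C++ types that the converter code below relies on. Roughly (a sketch of the generated names only, not the exact flatc output):
// sketch of what flatc generates from the schema above; these are the names
// that TanhOnnx.cpp below uses, the real generated header differs in detail
#include "flatbuffers/flatbuffers.h"
namespace MNN {
enum TanHType : int8_t { TanHType_TanH = 0, TanHType_Tanh = 1 };
// the OpParameter union gains a matching enum member:
// enum OpParameter : uint8_t { ..., OpParameter_TanHParam, ... };
// object-API struct assigned to dstOp->main.value in the converter
struct TanHParamT : public flatbuffers::NativeTable {
    TanHType operation = TanHType_TanH;
};
} // namespace MNN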

2. Add the model conversion

  1. Under /tools/converter/source/onnx, add MyMatMulOnnx.cpp and TanhOnnx.cpp:
// MyMatMulOnnx.cpp
#include <stdio.h>
#include "onnxOpConverter.hpp"

DECLARE_OP_CONVERTER(MyMatMulOnnx);

MNN::OpType MyMatMulOnnx::opType() {
    return MNN::OpType_MatMul;
}
MNN::OpParameter MyMatMulOnnx::type() {
    return MNN::OpParameter_NONE;
}

void MyMatMulOnnx::run(MNN::OpT* dstOp, const onnx::NodeProto* onnxNode,
                     std::vector<const onnx::TensorProto*> initializers) {
    // nothing to fill in: type() returns OpParameter_NONE
}

REGISTER_CONVERTER(MyMatMulOnnx, MatMul);

// TanhOnnx.cpp
#include <string.h>
#include "onnxOpConverter.hpp"


DECLARE_OP_CONVERTER(TanhOnnx);

MNN::OpType TanhOnnx::opType() {
    return MNN::OpType_TanH;
}
MNN::OpParameter TanhOnnx::type() {
    // must name the union member filled in run(); returning OpParameter_NONE
    // here would drop the TanHParam when the op is serialized
    return MNN::OpParameter_TanHParam;
}

void TanhOnnx::run(MNN::OpT *dstOp, const onnx::NodeProto *onnxNode,
                     std::vector<const onnx::TensorProto *> initializers) {
    auto param = new MNN::TanHParamT;
    // record which spelling of the op produced this node
    auto type = onnxNode->op_type();
    if (type == "Tanh") {
        param->operation = MNN::TanHType_Tanh;
    } else if (type == "TanH") {
        param->operation = MNN::TanHType_TanH;
    }
    dstOp->main.value = param;
}
REGISTER_CONVERTER(TanhOnnx, TanH);
REGISTER_CONVERTER(TanhOnnx, Tanh);
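For reference, the interface these converters implement looks roughly like this (a simplified sketch inferred from the overrides above; the real class in tools/converter/source/onnx/onnxOpConverter.hpp carries more members):
// simplified sketch of the ONNX op-converter interface (assumed, not verbatim)
class onnxOpConverter {
public:
    virtual ~onnxOpConverter() = default;
    // the MNN op type the converted op should carry
    virtual MNN::OpType opType() = 0;
    // which OpParameter union member run() stores into dstOp->main
    virtual MNN::OpParameter type() = 0;
    // translate the ONNX node's attributes/initializers into dstOp
    virtual void run(MNN::OpT* dstOp, const onnx::NodeProto* onnxNode,
                     std::vector<const onnx::TensorProto*> initializers) = 0;
};
DECLARE_OP_CONVERTER declares such a subclass, and REGISTER_CONVERTER binds it to an ONNX op name, which is why TanhOnnx above is registered once for each spelling.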
  2. Under /tools/converter/source/onnx, modify ReduceOnnx.cpp and onnxConverter.cpp:
  • ReduceOnnx.cpp: add cases for ReduceL2 and ReduceMax:
    // inside ReduceOnnx::run(...):
    auto type = onnxNode->op_type();
    /* 2019-11-24 曹冲: add the ReduceL2 and ReduceMax cases */
    if (type == "ReduceMean") {
        param->operation = MNN::ReductionType_MEAN;
    } else if (type == "ReduceMax") {
        param->operation = MNN::ReductionType_MAXIMUM;
    } else if (type == "ReduceL2") {
        param->operation = MNN::ReductionType_NORML2;
    } else {
        DLOG(ERROR) << "TODO ==> " << type;
    }

    param->dType      = MNN::DataType_DT_FLOAT;
    param->dim        = axes;
    param->keepDims   = keepdims;
    dstOp->main.value = param;
}
/* 2019-11-24 曹冲: register ReduceL2 and ReduceMax */
REGISTER_CONVERTER(ReduceOnnx, ReduceMean);
REGISTER_CONVERTER(ReduceOnnx, ReduceMax);
REGISTER_CONVERTER(ReduceOnnx, ReduceL2);
  • onnxConverter.cpp: around line 72 of the file, change the input format from NCHW to NC4HW4 or NHWC (in testing, both NC4HW4 and NHWC give correct results without errors, while NCHW errors out):
// change the input format: NCHW --> NC4HW4 or NHWC
inputParam->dformat = MNN::MNN_DATA_FORMAT_NHWC;

3. Add the shape computation

  1. Under /source/shape, change ShapeReduction.cpp: replace REGISTER_SHAPE_INPUTS(ReductionComputer, OpType_Reduction, {1}) with REGISTER_SHAPE(ReductionComputer, OpType_Reduction), as in the snippet below (the change works, but whether the original registration also works was not re-tested).
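The registration change is a one-line swap (macro calls taken verbatim from the step above):
// before:
// REGISTER_SHAPE_INPUTS(ReductionComputer, OpType_Reduction, {1});
// after:
REGISTER_SHAPE(ReductionComputer, OpType_Reduction);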
  2. Under /source/shape, change ShapeMatMul.cpp: add an if that checks whether the parameter pointer is null. When it is null (the op was converted with OpParameter_NONE, as MyMatMulOnnx above does), run the shape logic I added; otherwise run the original logic. In either case, A of shape [h0, w0] times B of shape [h1, w1] requires w0 == h1 and yields [h0, w1].
class MatMulSizeComputer : public SizeComputer {
    virtual bool onComputeSize(const MNN::Op* op, const std::vector<Tensor*>& inputs,
                               const std::vector<Tensor*>& outputs) const override {
        MNN_ASSERT(2 == inputs.size());
        MNN_ASSERT(1 == outputs.size());
        MNN_ASSERT(2 == inputs[0]->buffer().dimensions);
        MNN_ASSERT(2 == inputs[1]->buffer().dimensions);

        if (op->main_as_MatMul() == nullptr) {
            // no MatMul parameter (converted with OpParameter_NONE):
            // treat as a plain multiply with no transpose on either input
            auto output = outputs[0];
            TensorUtils::copyShape(inputs[0], output, true);
            auto w0 = inputs[0]->length(1);
            auto h0 = inputs[0]->length(0);
            auto w1 = inputs[1]->length(1);
            auto h1 = inputs[1]->length(0);

            if (w0 != h1) {
                return false;
            }
            output->buffer().type = inputs[0]->buffer().type;
            output->setLength(0, h0);
            output->setLength(1, w1);
            TensorUtils::getDescribe(output)->dimensionFormat = TensorUtils::getDescribe(inputs[0])->dimensionFormat;
            return true;
        } else {
            // original path: honor the transposeA/transposeB flags
            auto matMul = op->main_as_MatMul();

            auto output = outputs[0];
            TensorUtils::copyShape(inputs[0], output, true);
            auto w0 = inputs[0]->length(1);
            auto h0 = inputs[0]->length(0);

            if (matMul->transposeA()) {
                auto t = w0;
                w0     = h0;
                h0     = t;
            }

            auto w1 = inputs[1]->length(1);
            auto h1 = inputs[1]->length(0);
            if (matMul->transposeB()) {
                auto t = w1;
                w1     = h1;
                h1     = t;
            }

            if (w0 != h1) {
                return false;
            }
            output->buffer().type = inputs[0]->buffer().type;
            output->setLength(0, h0);
            output->setLength(1, w1);
            TensorUtils::getDescribe(output)->dimensionFormat = TensorUtils::getDescribe(inputs[0])->dimensionFormat;
            return true;
        }
    }
};

4. Add the implementation

  • Add the CPU implementation

  1. Under source/backend/CPU, change CPUMatMul.cpp, adding an if in the creator:
// in the CPUMatMul creator: ops converted with OpParameter_NONE carry no
// MatMul parameter, so default to no transpose on either input
if (op->main_as_MatMul() == nullptr) {
    return new CPUMatMul(backend, false, false);
} else {
    auto param = op->main_as_MatMul();
    return new CPUMatMul(backend, param->transposeA(), param->transposeB());
}
  2. Under source/backend/CPU, change CPUReduction.cpp, adding a ReductionType_NORML2 case to the creator and a ReduceL2 reducer:
// in the CPUReduction creator, dispatch on the reduction type:
switch (op->main_as_ReductionParam()->operation()) {
        case ReductionType_MEAN:
            return new MeanReduce(backend, op);
        case ReductionType_SUM:
            return new SumReduce(backend, op);
        case ReductionType_MINIMUM:
            return new MinReduce(backend, op);
        case ReductionType_MAXIMUM:
            return new MaxReduce(backend, op);
        case ReductionType_PROD:
            return new ProdReduce(backend, op);
        case ReductionType_ANY:
            return new AnyReduce(backend, op);
        case ReductionType_ALL:
            return new AllReduce(backend, op);
        case ReductionType_NORML2:
            return new ReduceL2(backend, op);
        default:
            MNN_ASSERT(false);
            break;
    }

// the new reducer: L2 norm along the reduced axis, i.e. the sqrt of the sum
// of squares (e.g. reducing {3, 4} yields sqrt(9 + 16) = 5)
class ReduceL2 : public Reduction {
public:
    ReduceL2(Backend* backend, const Op* op) : Reduction(backend, op) {
        // nothing to do
    }
    virtual ~ReduceL2() = default;

protected:
    virtual void onReduce(const float* src, float* dst, int inside, int outside, int axisSize) const override {
        for (int oi = 0; oi < outside; ++oi) {
            auto srcOutSide = src + oi * axisSize * inside;
            auto dstOutSide = dst + oi * inside;
            for (int ii = 0; ii < inside; ++ii) {
                auto srcInside = srcOutSide + ii;
                auto dstInside = dstOutSide + ii;
                float summer   = 0.0f;
                for (int a = 0; a < axisSize; ++a) {
                    summer += srcInside[a * inside]*srcInside[a * inside];
                }
                *dstInside = sqrt(summer);
            }
        }
    }

    virtual void onReduce(const int32_t* src, int32_t* dst, int inside, int outside, int axisSize) const override {
        for (int oi = 0; oi < outside; ++oi) {
            auto srcOutSide = src + oi * axisSize * inside;
            auto dstOutSide = dst + oi * inside;
            for (int ii = 0; ii < inside; ++ii) {
                auto srcInside = srcOutSide + ii;
                auto dstInside = dstOutSide + ii;
                int32_t summer = 0;
                for (int a = 0; a < axisSize; ++a) {
                    summer += srcInside[a * inside]*srcInside[a * inside];
                }
                // note: the sqrt result is truncated when stored into int32
                *dstInside = sqrt(summer);
            }
        }
    }
};
  3. Under source/backend/CPU, change CPUTanh.cpp and CPUTanh.hpp: introduce a shared base class, Tanh_temp, which both the original MNN Tanh implementation and the custom one inherit from (a sketch of the base class follows the code below).
class CPUTanh : public Tanh_temp{
public:
    CPUTanh(Backend* backend) : Tanh_temp(backend) {
    }
    virtual ~CPUTanh() = default;
    ErrorCode onResize(const std::vector<Tensor*>& inputs, const std::vector<Tensor*>& outputs) override {
        return NO_ERROR;
    }

    ErrorCode onExecute(const std::vector<Tensor *> &inputs, const std::vector<Tensor *> &outputs) override {
        MNN_ASSERT(1 == inputs.size());
        MNN_ASSERT(1 == outputs.size());
        auto inputData  = inputs[0]->host<float>();
        auto outputData = outputs[0]->host<float>();

        const int dataSize = outputs[0]->elementSize();

        Tanh_func(outputData, inputData, dataSize);
        return NO_ERROR;
    }
protected:
    virtual void Tanh_func(float* dst, const float* src, size_t dataSize) const override
    {
        // tanh(x) = (e^x - e^-x) / (e^x + e^-x), computed per element
        for (size_t i = 0; i < dataSize; i++) {
            dst[i] = (expf(src[i]) - expf(-src[i])) / (expf(src[i]) + expf(-src[i]));
        }
    }
};
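
CPUTanh.hpp itself is not shown in the post, so below is a minimal sketch of what the shared base class could look like; only the name Tanh_temp and the Tanh_func hook come from the code above, the rest is assumed:
// hypothetical sketch of Tanh_temp in CPUTanh.hpp: a shared Execution base
// that leaves the element-wise kernel to the derived classes
class Tanh_temp : public Execution {
public:
    Tanh_temp(Backend* backend) : Execution(backend) {
    }
    virtual ~Tanh_temp() = default;

protected:
    // overridden by both the stock MNN tanh and the custom tanh
    virtual void Tanh_func(float* dst, const float* src, size_t dataSize) const = 0;
};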
