pegasus help

Editor's note: this article was compiled by the editors of cha138.com and covers the `pegasus help` command; we hope you find it a useful reference.

~/VeriSilicon$ pegasus help
2023-01-27 09:02:57.629258: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/user/VeriSilicon/acuity-toolkit-binary-6.6.1/bin/acuitylib:/usr/local/cuda-11.3/lib64:/usr/local/cuda-11.3/lib64:
2023-01-27 09:02:57.629764: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
D Get binary package acuity_path [/home/user/VeriSilicon/acuity-toolkit-binary-6.6.1/bin]
usage: pegasus(.py) <command> [<args>]

There are common pegasus commands used in various situations:

Import models.(import)
    caffe                                    Import caffe model.
    tensorflow                               Import tensorflow model.
    tflite                                   Import tflite model.
    darknet                                  Import darknet model.
    onnx                                     Import onnx model.
    pytorch                                  Import pytorch model.
    keras                                    Import keras model.

Export models.(export)
    ovxlib                                   Export ovxlib code.
    ide                                      Export ide code.
    tflite                                   Export tflite model.

Generate metas.(generate)
    inputmeta                                Generate input meta data.
    postprocess-file                         Generate postprocess file.
    fakedata                                 Generate fake data of coefficients.

Prune models.(prune)
    --model                                  Network model file.
    --model-data                             Network coefficient file.
    --output-data                            Network coefficient file after pruning.
                                             If not specified, the --model-data input file will be overwritten.
    --config-file                            Prune config file containing layer_name and prune percentage.
                                             If the file does not exist, a stub will be generated.
    --prune-percent                          Prune percentage of each layer, from 0.0 to 100.0.
    --prune-level                            Specify the pruning granularity levels [element | vector | kernel | filter]
  - element: pruning granularity down to individual weight element (1)
  - vector: a vector or row of a 2D convolution kernel (Kx)
  - kernel: 2D convolution kernel (Kx * Ky)
  - filter: 3D convolution filter (Kx * Ky * Kz)
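The four levels trade accuracy against hardware-friendly regularity: the coarser the group, the more weights one pruning decision removes at once. A small NumPy sketch of the group sizes for a convolution layer (the layer shape and the L1-norm selection heuristic are illustrative, not the tool's internal algorithm):

```python
import numpy as np

# Hypothetical conv layer: 64 output filters, Kz=32 input channels,
# Ky x Kx = 3x3 spatial kernel. Shapes and names are illustrative only.
Kx, Ky, Kz, filters = 3, 3, 32, 64
weights = np.random.default_rng(0).normal(size=(filters, Kz, Ky, Kx))

# Weights removed together by one pruning decision at each granularity level,
# matching the list above.
group_size = {
    "element": 1,             # a single weight
    "vector":  Kx,            # one row of a 2D kernel
    "kernel":  Kx * Ky,       # one 2D kernel
    "filter":  Kx * Ky * Kz,  # one full 3D filter
}
print(group_size)  # {'element': 1, 'vector': 3, 'kernel': 9, 'filter': 288}

# E.g. pruning 25% of the layer at filter granularity zeroes whole filters;
# here the filters with the smallest L1 norm are dropped (a common heuristic).
pruned = weights.copy()
n_drop = int(filters * 0.25)
norms = np.abs(pruned).sum(axis=(1, 2, 3))
pruned[np.argsort(norms)[:n_drop]] = 0.0
print((pruned == 0).sum() == n_drop * group_size["filter"])  # True
```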


Run model inference and get results.(inference)
    --model                                  Network model input file.
    --model-data                             Network coefficient input file.
    --model-quantize                         Quantized tensor description file.
    --batch-size                             Batch size.
    --iterations                             Running iterations.
    --device                                 Specify the compute device.
    --with-input-meta                        Merge input meta into network.
    --output-dir                             Output directory of generated files.
    --dtype                                  Data type used.
    --postprocess                            Postprocess task.
    --postprocess-file                       Postprocess task configure file.

Quantize model.(quantize)
    --model                                  Network model input file.
    --model-data                             Network coefficient input file.
    --model-quantize                         Quantized tensor description file.
    --batch-size                             Batch size.
    --iterations                             Running iterations.
    --device                                 Specify the compute device.
    --with-input-meta                        Merge input meta into network.
    --output-dir                             Output directory of generated files.
    --quantizer                              Quantizer type.
    --qtype                                  Quantization data type. e.g. "int8", "uint8", "int16", "bfloat16", "qbfloat16", "int4", "uint4".
    --hybrid                                 Hybrid quantize.
    --rebuild                                Rebuild quantize table.
    --rebuild-all                            Rebuild quantize table for all.
    --algorithm                              Quantization algorithm.
    --moving-average-weight                  Moving average coefficient.
    --divergence-nbins                       KL divergence histogram nbins.
    --divergence-first-quantize-bits         KL divergence first quantize bits.
    --compute-entropy                        Compute tensor entropy.
    --MLE                                    Minimize per-layer error.
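The `--quantizer`/`--qtype` options control how float tensors are mapped to a quantized type. As a rough illustration of what a `uint8` quantizer computes, here is a generic min-max asymmetric-affine sketch in NumPy; this is not Acuity's actual implementation, and all names are illustrative:

```python
import numpy as np

# Generic asymmetric-affine uint8 quantization sketch (not Acuity's code).
def quantize_uint8(x):
    lo, hi = float(x.min()), float(x.max())
    lo, hi = min(lo, 0.0), max(hi, 0.0)   # range must contain zero
    scale = (hi - lo) / 255.0 or 1.0      # step size of one quantized unit
    zero_point = int(round(-lo / scale))  # uint8 value that represents 0.0
    q = np.clip(np.round(x / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

x = np.array([-1.0, -0.4, 0.0, 0.6, 2.0], dtype=np.float32)
q, s, zp = quantize_uint8(x)
x_hat = dequantize(q, s, zp)
# round-trip error is bounded by half a quantization step
print(np.abs(x - x_hat).max() <= s / 2 + 1e-7)  # True
```

KL-divergence calibration (`--divergence-nbins`, `--divergence-first-quantize-bits`) refines the choice of `lo`/`hi` from activation histograms instead of taking the raw min/max.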

Train model.(train)
    --model                                  Network model input file.
    --model-data                             Network coefficient input file.
    --model-quantize                         Quantized tensor description file.
    --batch-size                             Batch size.
    --iterations                             Running iterations.
    --device                                 Specify the compute device.
    --with-input-meta                        Merge input meta into network.
    --output-dir                             Output directory of generated files.
    --dtype                                  Data type used.
    --lr                                     Learning rate.
    --optimizer                              Training gradient optimizer.
    --decay-steps                            Momentum decay steps.
    --iterations-to-save-checkpoint          Iterations between checkpoint saves.
    --checkpoint-path                        Checkpoint path.
    --max-checkpoint-num                     Max number of checkpoints.

Dump model activations.(dump)
    --model                                  Network model input file.
    --model-data                             Network coefficient input file.
    --model-quantize                         Quantized tensor description file.
    --batch-size                             Batch size.
    --iterations                             Running iterations.
    --device                                 Specify the compute device.
    --with-input-meta                        Merge input meta into network.
    --output-dir                             Output directory of generated files.
    --dtype                                  Data type used.
    --format                                 When saving snapshot, save data in nchw/nhwc format.
    --save-quantize                          Save data in quantized format.
    --save-file-type                         When saving snapshot, save file type.

Get the amount of computation, parameters and activations.(measure)
    --model                                  Network model input file.
    --model-quantize                         Quantized tensor description file.
    --dtype                                  Data type used.
    --output-dir                             Output directory of generated files.

I ----------------Error(0),Warning(0)----------------
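Putting the subcommands together, a typical convert-and-quantize flow might look like the following. This is only a sketch: every file name and path is a placeholder, the `asymmetric_affine` quantizer name and the `quantized` dtype value are assumptions, and the flags accepted by `export ovxlib` should be checked against your toolkit's help.

```shell
# Illustrative flow only -- all paths and file names below are placeholders.
pegasus import tflite --model model.tflite --output-dir ./work
pegasus generate inputmeta --model ./work/model.json --output-dir ./work
pegasus quantize --model ./work/model.json --model-data ./work/model.data \
    --with-input-meta ./work/model_inputmeta.yml \
    --quantizer asymmetric_affine --qtype uint8 \
    --iterations 100 --output-dir ./work
pegasus inference --model ./work/model.json --model-data ./work/model.data \
    --with-input-meta ./work/model_inputmeta.yml \
    --dtype quantized --output-dir ./out
pegasus export ovxlib --model ./work/model.json --model-data ./work/model.data \
    --output-dir ./export
```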
