AlexNet neural network: how to decrease the memory consumption of the network?

Posted: 2021-01-10 09:40:57

Problem description:

My Caffe build is CPU-only (no GPU). Running AlexNet on a dataset of roughly 100 MB consumes an enormous amount of memory (close to 400 GB), and I would like to run it with a much smaller memory footprint.

Right now I am running the network on a much smaller dataset (about 10 images for training and validation). It runs fine, but I want to grow my dataset.

Please help; my knowledge of AI is very shallow. I am hoping for a way to reduce the overall memory consumption on the CPU and to run a larger dataset.

solver.prototxt

net: "models/people_alexnet/train_val.prototxt"
test_iter: 100
test_interval: 200
base_lr: 0.01
lr_policy: "step"
gamma: 0.1
stepsize: 100000
display: 20
max_iter: 450000
momentum: 0.9
weight_decay: 0.0005
snapshot: 10000
snapshot_prefix: "models/people_alexnet/caffe_alexnet_train"
solver_mode: CPU

train_val.prototxt

name: "AlexNet"
layer 
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include 
    phase: TRAIN
  
  transform_param 
    mirror: true
    crop_size: 227
    mean_file: "data/people/mean.binaryproto"
  
  data_param 
    source: "examples/people/people_train_lmdb/"
    batch_size: 10
    backend: LMDB
  

layer 
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include 
    phase: TEST
  
  transform_param 
    mirror: false
    crop_size: 227
    mean_file: "data/people/mean.binaryproto"
  
  data_param 
    source: "examples/people/val_lmdb"
    batch_size: 5
    backend: LMDB
  

layer 
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param 
    lr_mult: 1
    decay_mult: 1
  
  param 
    lr_mult: 2
    decay_mult: 0
  
  convolution_param 
    num_output: 96
    kernel_size: 11
    stride: 4
    weight_filler 
      type: "gaussian"
      std: 0.01
    
    bias_filler 
      type: "constant"
      value: 0
    
  

layer 
  name: "relu1"
  type: "ReLU"
  bottom: "conv1"
  top: "conv1"

layer 
  name: "norm1"
  type: "LRN"
  bottom: "conv1"
  top: "norm1"
  lrn_param 
    local_size: 5
    alpha: 0.0001
    beta: 0.75
  

layer 
  name: "pool1"
  type: "Pooling"
  bottom: "norm1"
  top: "pool1"
  pooling_param 
    pool: MAX
    kernel_size: 3
    stride: 2
  

layer 
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param 
    lr_mult: 1
    decay_mult: 1
  
  param 
    lr_mult: 2
    decay_mult: 0
  
  convolution_param 
    num_output: 256
    pad: 2
    kernel_size: 5
    group: 2
    weight_filler 
      type: "gaussian"
      std: 0.01
    
    bias_filler 
      type: "constant"
      value: 0.1
    
  

layer 
  name: "relu2"
  type: "ReLU"
  bottom: "conv2"
  top: "conv2"

layer 
  name: "norm2"
  type: "LRN"
  bottom: "conv2"
  top: "norm2"
  lrn_param 
    local_size: 5
    alpha: 0.0001
    beta: 0.75
  

layer 
  name: "pool2"
  type: "Pooling"
  bottom: "norm2"
  top: "pool2"
  pooling_param 
    pool: MAX
    kernel_size: 3
    stride: 2
  

layer 
  name: "conv3"
  type: "Convolution"
  bottom: "pool2"
  top: "conv3"
  param 
    lr_mult: 1
    decay_mult: 1
  
  param 
    lr_mult: 2
    decay_mult: 0
  
  convolution_param 
    num_output: 384
    pad: 1
    kernel_size: 3
    weight_filler 
      type: "gaussian"
      std: 0.01
    
    bias_filler 
      type: "constant"
      value: 0
    
  

layer 
  name: "relu3"
  type: "ReLU"
  bottom: "conv3"
  top: "conv3"

layer 
  name: "conv4"
  type: "Convolution"
  bottom: "conv3"
  top: "conv4"
  param 
    lr_mult: 1
    decay_mult: 1
  
  param 
    lr_mult: 2
    decay_mult: 0
  
  convolution_param 
    num_output: 384
    pad: 1
    kernel_size: 3
    group: 2
    weight_filler 
      type: "gaussian"
      std: 0.01
    
    bias_filler 
      type: "constant"
      value: 0.1
    
  

layer 
  name: "relu4"
  type: "ReLU"
  bottom: "conv4"
  top: "conv4"

layer 
  name: "conv5"
  type: "Convolution"
  bottom: "conv4"
  top: "conv5"
  param 
    lr_mult: 1
    decay_mult: 1
  
  param 
    lr_mult: 2
    decay_mult: 0
  
  convolution_param 
    num_output: 256
    pad: 1
    kernel_size: 3
    group: 2
    weight_filler 
      type: "gaussian"
      std: 0.01
    
    bias_filler 
      type: "constant"
      value: 0.1
    
  

layer 
  name: "relu5"
  type: "ReLU"
  bottom: "conv5"
  top: "conv5"

layer 
  name: "pool5"
  type: "Pooling"
  bottom: "conv5"
  top: "pool5"
  pooling_param 
    pool: MAX
    kernel_size: 3
    stride: 2
  

layer 
  name: "fc6"
  type: "InnerProduct"
  bottom: "pool5"
  top: "fc6"
  param 
    lr_mult: 1
    decay_mult: 1
  
  param 
    lr_mult: 2
    decay_mult: 0
  
  inner_product_param 
    num_output: 4096
    weight_filler 
      type: "gaussian"
      std: 0.005
    
    bias_filler 
      type: "constant"
      value: 0.1
    
  

layer 
  name: "relu6"
  type: "ReLU"
  bottom: "fc6"
  top: "fc6"

layer 
  name: "drop6"
  type: "Dropout"
  bottom: "fc6"
  top: "fc6"
  dropout_param 
    dropout_ratio: 0.5
  

layer 
  name: "fc7"
  type: "InnerProduct"
  bottom: "fc6"
  top: "fc7"
  param 
    lr_mult: 1
    decay_mult: 1
  
  param 
    lr_mult: 2
    decay_mult: 0
  
  inner_product_param 
    num_output: 4096
    weight_filler 
      type: "gaussian"
      std: 0.005
    
    bias_filler 
      type: "constant"
      value: 0.1
    
  

layer 
  name: "relu7"
  type: "ReLU"
  bottom: "fc7"
  top: "fc7"

layer 
  name: "drop7"
  type: "Dropout"
  bottom: "fc7"
  top: "fc7"
  dropout_param 
    dropout_ratio: 0.5
  

layer 
  name: "fc8"
  type: "InnerProduct"
  bottom: "fc7"
  top: "fc8"
  param 
    lr_mult: 1
    decay_mult: 1
  
  param 
    lr_mult: 2
    decay_mult: 0
  
  inner_product_param 
    num_output: 2
    weight_filler 
      type: "gaussian"
      std: 0.01
    
    bias_filler 
      type: "constant"
      value: 0
    
  

layer 
  name: "accuracy"
  type: "Accuracy"
  bottom: "fc8"
  bottom: "label"
  top: "accuracy"
  include 
    phase: TEST
  

layer 
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "fc8"
  bottom: "label"
  top: "loss"


Answer 1:

I managed to get the network to run with a smaller memory footprint by reducing the batch_size in train_val.prototxt:

  data_param {
    source: "examples/people/people_val_lmdb"
    batch_size: 2   # changed from 5
    backend: LMDB
  }
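
If a batch of 2 turns out to be too small for stable training, the solver's iter_size setting can compensate: Caffe accumulates gradients over iter_size forward/backward passes before each weight update, so the effective batch stays large while only one small batch is held in memory at a time. A sketch of the solver change, assuming the TRAIN batch_size were likewise lowered from 10 to 2:

# solver.prototxt (sketch): gradients from 5 passes are summed before
# each update, so effective batch = batch_size * iter_size = 2 * 5 = 10
iter_size: 5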

Now the network needs only about 12 GB:

Memory required for data: 1236704
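
Note that the "Memory required for data" lines Caffe prints during net setup count top blob memory in bytes, not gigabytes; 1236704 bytes is about 1.2 MB and matches exactly the data and label blobs of a batch of 2 (2 * 3 * 227 * 227 * 4 + 2 * 4). The process total is much larger because it also holds the weights, their gradients, the solver's momentum history, and convolution (im2col) buffers. To see where the memory actually goes, the blob and parameter sizes can be printed from pycaffe; a minimal sketch, assuming a CPU-only pycaffe install and that the paths and LMDBs from this question are in place:

import caffe

caffe.set_mode_cpu()
net = caffe.Net('models/people_alexnet/train_val.prototxt', caffe.TRAIN)

# Activation blobs scale linearly with batch_size.
act_bytes = 0
for name, blob in net.blobs.items():
    act_bytes += blob.data.nbytes  # float32 activations
    print('%-10s %-22s %12d bytes' % (name, blob.data.shape, blob.data.nbytes))

# Parameters are a fixed cost regardless of batch_size; fc6 alone
# holds 9216 * 4096 floats, roughly 150 MB.
param_bytes = sum(p.data.nbytes for blobs in net.params.values() for p in blobs)

print('activations: %.1f MB' % (act_bytes / 1e6))
print('parameters:  %.1f MB' % (param_bytes / 1e6))

At small batch sizes almost all of the footprint is in the fc6 and fc7 weights (roughly 55 of this network's ~57 million parameters), and SGD with momentum keeps about three copies of every parameter, so shrinking num_output in those two layers is another effective lever if reducing the batch size alone is not enough.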

