Caffe: Visualizing the Training Process in Practice

Posted by Taily老段


http://ethereon.github.io/netscope/#/editor — this tool can visualize Caffe network structures (paste in a network definition to see the layer graph).


Below we train the LeNet network and plot two loss curves (train and test) and one accuracy curve on a single figure.

#!/usr/bin/python
# coding:utf-8
import sys

import matplotlib.pyplot as plt
from numpy import zeros, arange

caffe_root = '/usr/local/Cellar/caffe/'

sys.path.insert(0, caffe_root + 'python')
import caffe

# MODEL_FILE = caffe_root + 'examples/mnist/lenet.prototxt'
# PRETRAINED = caffe_root + 'examples/mnist/lenet_iter_10000.caffemodel'
# IMAGE_FILE = caffe_root + 'examples/images/test4.bmp'

# caffe.set_device(0)
caffe.set_mode_cpu()
# Use SGDSolver, i.e. stochastic gradient descent. Check your solver file:
# if no solver type is specified there, the default is SGD.
# solver = caffe.AdamSolver('/root/caffe/examples/mnist/lenet_solver_adam.prototxt')
solver = caffe.SGDSolver(caffe_root + 'examples/mnist/lenet_solver_test_show.prototxt')

# Equivalent to max_iter in the solver file: the total number of solver steps
niter = 10000
# Collect training loss every 100 iterations
display = 100

# Each test pass runs 100 forward batches (10000 test images / batch_size 100)
test_iter = 100
# Run a test pass every 500 training iterations (training set: 60000 images, batch_size 64)
test_interval = 500

# Buffers for the curves
train_loss = zeros(int(niter * 1.0 / display))
test_loss = zeros(int(niter * 1.0 / test_interval))
test_acc = zeros(int(niter * 1.0 / test_interval))

# Iteration 0 is not counted
solver.step(1)

# Accumulators
_train_loss = 0
_test_loss = 0
_accuracy = 0

for it in range(niter):
    # One solver step trains on one batch of batch_size images
    solver.step(1)
    # Note: 'loss' must match the name of the loss layer in your network
    # definition; here it is a SoftmaxWithLoss layer named "loss"
    _train_loss += solver.net.blobs['loss'].data

    if it % display == 0:
        # Average train loss over the last `display` iterations
        train_loss[it // display] = _train_loss / display
        _train_loss = 0

    if it % test_interval == 0:
        for test_it in range(test_iter):
            # One test batch
            solver.test_nets[0].forward()
            # Accumulate test loss (same 'loss' layer name as above)
            _test_loss += solver.test_nets[0].blobs['loss'].data
            # Accumulate test accuracy ('accuracy' must match the Accuracy layer name)
            _accuracy += solver.test_nets[0].blobs['accuracy'].data

        # Average test loss
        test_loss[it // test_interval] = _test_loss / test_iter
        # Average test accuracy
        test_acc[it // test_interval] = _accuracy / test_iter
        _test_loss = 0
        _accuracy = 0

# Plot the train loss, test loss and test accuracy curves
print('\nplot the train loss and test accuracy\n')
_, ax1 = plt.subplots()
ax2 = ax1.twinx()

# train loss -> green
ax1.plot(display * arange(len(train_loss)), train_loss, 'g')
# test loss -> yellow
ax1.plot(test_interval * arange(len(test_loss)), test_loss, 'y')
# test accuracy -> red
ax2.plot(test_interval * arange(len(test_acc)), test_acc, 'r')

ax1.set_xlabel('iteration')
ax1.set_ylabel('loss')
ax2.set_ylabel('accuracy')
plt.show()

# coding:utf-8

Adding this line lets the script contain Chinese characters;

caffe_root = '/usr/local/Cellar/caffe/'

Set this to your own Caffe installation path;

solver = caffe.SGDSolver(caffe_root + 'examples/mnist/lenet_solver_test_show.prototxt')

Set this to your own solver file path. (Note: inside the solver file, absolute paths are recommended for net and snapshot_prefix.)

net: "/usr/local/Cellar/caffe/examples/mnist/lenet_train_test.prototxt"

snapshot_prefix: "/Users/taily/pycharmproj/"
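For reference, the full set of solver parameters used in this run is echoed back in the training log below; a lenet_solver_test_show.prototxt with those values would look roughly like this (the values are this run's settings, not requirements):

```protobuf
# Network definition (absolute path recommended)
net: "/usr/local/Cellar/caffe/examples/mnist/lenet_train_test.prototxt"
# Test schedule: 100 test batches every 500 training iterations
test_iter: 100
test_interval: 500
# Learning-rate policy
base_lr: 0.01
momentum: 0.9
weight_decay: 0.0005
lr_policy: "inv"
gamma: 0.0001
power: 0.75
# Report every 100 iterations, train for 10000
display: 100
max_iter: 10000
# Snapshot every 5000 iterations (absolute prefix recommended)
snapshot: 5000
snapshot_prefix: "/Users/taily/pycharmproj/lenet_solver_test_show"
solver_mode: CPU
```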

After training finishes, the loss and accuracy curves for the run are displayed:

Green — loss on the training data

Yellow — loss on the test data

Red — test accuracy (after roughly 1000 iterations the accuracy is already close to 1; later iterations improve it only slowly)
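The train-loss curve (green) is already averaged over 100-iteration windows, but it can still look jagged. A moving-average pass applied before plotting flattens it further; this is a minimal sketch that is not part of the original script (the smooth helper and window size are my own):

```python
import numpy as np

def smooth(curve, window=5):
    # Centered moving average; mode='same' keeps the output the same
    # length as the input (values near the edges are damped).
    kernel = np.ones(window) / window
    return np.convolve(curve, kernel, mode='same')

# Example: smooth a noisy, decaying loss curve before ax1.plot(...)
noisy = np.exp(-np.linspace(0, 5, 100)) + 0.05 * np.random.RandomState(0).randn(100)
smoothed = smooth(noisy)
```

Plot `smoothed` in place of `train_loss` to get a cleaner green curve.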

Log printed during training:

/anaconda2/envs/py27/bin/python /Users/taily/pycharmproj/test-caffe-show/test_digit.py
WARNING: Logging before InitGoogleLogging() is written to STDERR
I1228 17:55:50.809736 2733998976 upgrade_proto.cpp:1113] snapshot_prefix was a directory and is replaced to /Users/taily/pycharmproj/lenet_solver_test_show
I1228 17:55:50.809761 2733998976 solver.cpp:45] Initializing solver from parameters: 
test_iter: 100
test_interval: 500
base_lr: 0.01
display: 100
max_iter: 10000
lr_policy: "inv"
gamma: 0.0001
power: 0.75
momentum: 0.9
weight_decay: 0.0005
snapshot: 5000
snapshot_prefix: "/Users/taily/pycharmproj/lenet_solver_test_show"
solver_mode: CPU
net: "/usr/local/Cellar/caffe/examples/mnist/lenet_train_test.prototxt"
I1228 17:55:50.809949 2733998976 solver.cpp:102] Creating training net from net file: /usr/local/Cellar/caffe/examples/mnist/lenet_train_test.prototxt
I1228 17:55:50.810111 2733998976 net.cpp:296] The NetState phase (0) differed from the phase (1) specified by a rule in layer mnist
I1228 17:55:50.810127 2733998976 net.cpp:296] The NetState phase (0) differed from the phase (1) specified by a rule in layer accuracy
I1228 17:55:50.810133 2733998976 net.cpp:53] Initializing net from parameters: 
name: "LeNet"
state {
  phase: TRAIN
}
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "/usr/local/Cellar/caffe/examples/mnist/mnist_train_lmdb"
    batch_size: 64
    backend: LMDB
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
I1228 17:55:50.810202 2733998976 layer_factory.hpp:77] Creating layer mnist
I1228 17:55:50.810294 2733998976 db_lmdb.cpp:35] Opened lmdb /usr/local/Cellar/caffe/examples/mnist/mnist_train_lmdb
I1228 17:55:50.810323 2733998976 net.cpp:86] Creating Layer mnist
I1228 17:55:50.810329 2733998976 net.cpp:382] mnist -> data
I1228 17:55:50.810349 2733998976 net.cpp:382] mnist -> label
I1228 17:55:50.810376 2733998976 data_layer.cpp:45] output data size: 64,1,28,28
I1228 17:55:50.815076 2733998976 net.cpp:124] Setting up mnist
I1228 17:55:50.815102 2733998976 net.cpp:131] Top shape: 64 1 28 28 (50176)
I1228 17:55:50.815109 2733998976 net.cpp:131] Top shape: 64 (64)
I1228 17:55:50.815114 2733998976 net.cpp:139] Memory required for data: 200960
I1228 17:55:50.815119 2733998976 layer_factory.hpp:77] Creating layer conv1
I1228 17:55:50.815127 2733998976 net.cpp:86] Creating Layer conv1
I1228 17:55:50.815132 2733998976 net.cpp:408] conv1 <- data
I1228 17:55:50.815138 2733998976 net.cpp:382] conv1 -> conv1
I1228 17:55:50.815182 2733998976 net.cpp:124] Setting up conv1
I1228 17:55:50.815187 2733998976 net.cpp:131] Top shape: 64 20 24 24 (737280)
I1228 17:55:50.815193 2733998976 net.cpp:139] Memory required for data: 3150080
I1228 17:55:50.815203 2733998976 layer_factory.hpp:77] Creating layer pool1
I1228 17:55:50.815213 2733998976 net.cpp:86] Creating Layer pool1
I1228 17:55:50.815218 2733998976 net.cpp:408] pool1 <- conv1
I1228 17:55:50.815223 2733998976 net.cpp:382] pool1 -> pool1
I1228 17:55:50.815233 2733998976 net.cpp:124] Setting up pool1
I1228 17:55:50.815238 2733998976 net.cpp:131] Top shape: 64 20 12 12 (184320)
I1228 17:55:50.815246 2733998976 net.cpp:139] Memory required for data: 3887360
I1228 17:55:50.815250 2733998976 layer_factory.hpp:77] Creating layer conv2
I1228 17:55:50.815256 2733998976 net.cpp:86] Creating Layer conv2
I1228 17:55:50.815260 2733998976 net.cpp:408] conv2 <- pool1
I1228 17:55:50.815266 2733998976 net.cpp:382] conv2 -> conv2
I1228 17:55:50.815492 2733998976 net.cpp:124] Setting up conv2
I1228 17:55:50.815498 2733998976 net.cpp:131] Top shape: 64 50 8 8 (204800)
I1228 17:55:50.815503 2733998976 net.cpp:139] Memory required for data: 4706560
I1228 17:55:50.815510 2733998976 layer_factory.hpp:77] Creating layer pool2
I1228 17:55:50.815515 2733998976 net.cpp:86] Creating Layer pool2
I1228 17:55:50.815521 2733998976 net.cpp:408] pool2 <- conv2
I1228 17:55:50.815526 2733998976 net.cpp:382] pool2 -> pool2
I1228 17:55:50.815531 2733998976 net.cpp:124] Setting up pool2
I1228 17:55:50.815572 2733998976 net.cpp:131] Top shape: 64 50 4 4 (51200)
I1228 17:55:50.815589 2733998976 net.cpp:139] Memory required for data: 4911360
I1228 17:55:50.815593 2733998976 layer_factory.hpp:77] Creating layer ip1
I1228 17:55:50.815603 2733998976 net.cpp:86] Creating Layer ip1
I1228 17:55:50.815608 2733998976 net.cpp:408] ip1 <- pool2
I1228 17:55:50.815613 2733998976 net.cpp:382] ip1 -> ip1
I1228 17:55:50.818809 2733998976 net.cpp:124] Setting up ip1
I1228 17:55:50.818819 2733998976 net.cpp:131] Top shape: 64 500 (32000)
I1228 17:55:50.818823 2733998976 net.cpp:139] Memory required for data: 5039360
I1228 17:55:50.818830 2733998976 layer_factory.hpp:77] Creating layer relu1
I1228 17:55:50.818840 2733998976 net.cpp:86] Creating Layer relu1
I1228 17:55:50.818843 2733998976 net.cpp:408] relu1 <- ip1
I1228 17:55:50.818848 2733998976 net.cpp:369] relu1 -> ip1 (in-place)
I1228 17:55:50.818855 2733998976 net.cpp:124] Setting up relu1
I1228 17:55:50.818857 2733998976 net.cpp:131] Top shape: 64 500 (32000)
I1228 17:55:50.818862 2733998976 net.cpp:139] Memory required for data: 5167360
I1228 17:55:50.818866 2733998976 layer_factory.hpp:77] Creating layer ip2
I1228 17:55:50.818871 2733998976 net.cpp:86] Creating Layer ip2
I1228 17:55:50.818876 2733998976 net.cpp:408] ip2 <- ip1
I1228 17:55:50.818881 2733998976 net.cpp:382] ip2 -> ip2
I1228 17:55:50.818918 2733998976 net.cpp:124] Setting up ip2
I1228 17:55:50.818923 2733998976 net.cpp:131] Top shape: 64 10 (640)
I1228 17:55:50.818928 2733998976 net.cpp:139] Memory required for data: 5169920
I1228 17:55:50.818933 2733998976 layer_factory.hpp:77] Creating layer loss
I1228 17:55:50.818941 2733998976 net.cpp:86] Creating Layer loss
I1228 17:55:50.818945 2733998976 net.cpp:408] loss <- ip2
I1228 17:55:50.818949 2733998976 net.cpp:408] loss <- label
I1228 17:55:50.818954 2733998976 net.cpp:382] loss -> loss
I1228 17:55:50.818962 2733998976 layer_factory.hpp:77] Creating layer loss
I1228 17:55:50.818972 2733998976 net.cpp:124] Setting up loss
I1228 17:55:50.818977 2733998976 net.cpp:131] Top shape: (1)
I1228 17:55:50.818981 2733998976 net.cpp:134]     with loss weight 1
I1228 17:55:50.818989 2733998976 net.cpp:139] Memory required for data: 5169924
I1228 17:55:50.818992 2733998976 net.cpp:200] loss needs backward computation.
I1228 17:55:50.818996 2733998976 net.cpp:200] ip2 needs backward computation.
I1228 17:55:50.819001 2733998976 net.cpp:200] relu1 needs backward computation.
I1228 17:55:50.819005 2733998976 net.cpp:200] ip1 needs backward computation.
I1228 17:55:50.819010 2733998976 net.cpp:200] pool2 needs backward computation.
I1228 17:55:50.819012 2733998976 net.cpp:200] conv2 needs backward computation.
I1228 17:55:50.819017 2733998976 net.cpp:200] pool1 needs backward computation.
I1228 17:55:50.819021 2733998976 net.cpp:200] conv1 needs backward computation.
I1228 17:55:50.819026 2733998976 net.cpp:202] mnist does not need backward computation.
I1228 17:55:50.819031 2733998976 net.cpp:244] This network produces output loss
I1228 17:55:50.819036 2733998976 net.cpp:257] Network initialization done.
I1228 17:55:50.819180 2733998976 solver.cpp:190] Creating test net (#0) specified by net file: /usr/local/Cellar/caffe/examples/mnist/lenet_train_test.prototxt
I1228 17:55:50.819196 2733998976 net.cpp:296] The NetState phase (1) differed from the phase (0) specified by a rule in layer mnist
I1228 17:55:50.819205 2733998976 net.cpp:53] Initializing net from parameters: 
name: "LeNet"
state {
  phase: TEST
}
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "/usr/local/Cellar/caffe/examples/mnist/mnist_test_lmdb"
    batch_size: 100
    backend: LMDB
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  include {
    phase: TEST
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
I1228 17:55:50.819308 2733998976 layer_factory.hpp:77] Creating layer mnist
I1228 17:55:50.819391 2733998976 db_lmdb.cpp:35] Opened lmdb /usr/local/Cellar/caffe/examples/mnist/mnist_test_lmdb
I1228 17:55:50.819408 2733998976 net.cpp:86] Creating Layer mnist
I1228 17:55:50.819414 2733998976 net.cpp:382] mnist -> data
I1228 17:55:50.819420 2733998976 net.cpp:382] mnist -> label
I1228 17:55:50.819430 2733998976 data_layer.cpp:45] output data size: 100,1,28,28
I1228 17:55:50.820066 2733998976 net.cpp:124] Setting up mnist
I1228 17:55:50.820091 2733998976 net.cpp:131] Top shape: 100 1 28 28 (78400)
I1228 17:55:50.820097 2733998976 net.cpp:131] Top shape: 100 (100)
I1228 17:55:50.820101 2733998976 net.cpp:139] Memory required for data: 314000
I1228 17:55:50.820106 2733998976 layer_factory.hpp:77] Creating layer label_mnist_1_split
I1228 17:55:50.820112 2733998976 net.cpp:86] Creating Layer label_mnist_1_split
I1228 17:55:50.820116 2733998976 net.cpp:408] label_mnist_1_split <- label
I1228 17:55:50.820122 2733998976 net.cpp:382] label_mnist_1_split -> label_mnist_1_split_0
I1228 17:55:50.820128 2733998976 net.cpp:382] label_mnist_1_split -> label_mnist_1_split_1
I1228 17:55:50.820134 2733998976 net.cpp:124] Setting up label_mnist_1_split
I1228 17:55:50.820139 2733998976 net.cpp:131] Top shape: 100 (100)
I1228 17:55:50.820143 2733998976 net.cpp:131] Top shape: 100 (100)
I1228 17:55:50.820148 2733998976 net.cpp:139] Memory required for data: 314800
I1228 17:55:50.820152 2733998976 layer_factory.hpp:77] Creating layer conv1
I1228 17:55:50.820159 2733998976 net.cpp:86] Creating Layer conv1
I1228 17:55:50.820163 2733998976 net.cpp:408] conv1 <- data
I1228 17:55:50.820169 2733998976 net.cpp:382] conv1 -> conv1
I1228 17:55:50.820188 2733998976 net.cpp:124] Setting up conv1
I1228 17:55:50.820192 2733998976 net.cpp:131] Top shape: 100 20 24 24 (1152000)
I1228 17:55:50.820197 2733998976 net.cpp:139] Memory required for data: 4922800
I1228 17:55:50.820204 2733998976 layer_factory.hpp:77] Creating layer pool1
I1228 17:55:50.820210 2733998976 net.cpp:86] Creating Layer pool1
I1228 17:55:50.820214 2733998976 net.cpp:408] pool1 <- conv1
I1228 17:55:50.820222 2733998976 net.cpp:382] pool1 -> pool1
I1228 17:55:50.820230 2733998976 net.cpp:124] Setting up pool1
I1228 17:55:50.820235 2733998976 net.cpp:131] Top shape: 100 20 12 12 (288000)
I1228 17:55:50.820240 2733998976 net.cpp:139] Memory required for data: 6074800
I1228 17:55:50.820243 2733998976 layer_factory.hpp:77] Creating layer conv2
I1228 17:55:50.820250 2733998976 net.cpp:86] Creating Layer conv2
I1228 17:55:50.820255 2733998976 net.cpp:408] conv2 <- pool1
I1228 17:55:50.820261 2733998976 net.cpp:382] conv2 -> conv2
I1228 17:55:50.820474 2733998976 net.cpp:124] Setting up conv2
I1228 17:55:50.820480 2733998976 net.cpp:131] Top shape: 100 50 8 8 (320000)
I1228 17:55:50.820485 2733998976 net.cpp:139] Memory required for data: 7354800
I1228 17:55:50.820492 2733998976 layer_factory.hpp:77] Creating layer pool2
I1228 17:55:50.820497 2733998976 net.cpp:86] Creating Layer pool2
I1228 17:55:50.820647 2733998976 net.cpp:408] pool2 <- conv2
I1228 17:55:50.820652 2733998976 net.cpp:382] pool2 -> pool2
I1228 17:55:50.820659 2733998976 net.cpp:124] Setting up pool2
I1228 17:55:50.820663 2733998976 net.cpp:131] Top shape: 100 50 4 4 (80000)
I1228 17:55:50.820701 2733998976 net.cpp:139] Memory required for data: 7674800
I1228 17:55:50.820734 2733998976 layer_factory.hpp:77] Creating layer ip1
I1228 17:55:50.820765 2733998976 net.cpp:86] Creating Layer ip1
I1228 17:55:50.820768 2733998976 net.cpp:408] ip1 <- pool2
I1228 17:55:50.820775 2733998976 net.cpp:382] ip1 -> ip1
I1228 17:55:50.824555 2733998976 net.cpp:124] Setting up ip1
I1228 17:55:50.824573 2733998976 net.cpp:131] Top shape: 100 500 (50000)
I1228 17:55:50.824594 2733998976 net.cpp:139] Memory required for data: 7874800
I1228 17:55:50.824602 2733998976 layer_factory.hpp:77] Creating layer relu1
I1228 17:55:50.824621 2733998976 net.cpp:86] Creating Layer relu1
I1228 17:55:50.824627 2733998976 net.cpp:408] relu1 <- ip1
I1228 17:55:50.824632 2733998976 net.cpp:369] relu1 -> ip1 (in-place)
I1228 17:55:50.824640 2733998976 net.cpp:124] Setting up relu1
I1228 17:55:50.824643 2733998976 net.cpp:131] Top shape: 100 500 (50000)
I1228 17:55:50.824648 2733998976 net.cpp:139] Memory required for data: 8074800
I1228 17:55:50.824652 2733998976 layer_factory.hpp:77] Creating layer ip2
I1228 17:55:50.824679 2733998976 net.cpp:86] Creating Layer ip2
I1228 17:55:50.824683 2733998976 net.cpp:408] ip2 <- ip1
I1228 17:55:50.824689 2733998976 net.cpp:382] ip2 -> ip2
I1228 17:55:50.824791 2733998976 net.cpp:124] Setting up ip2
I1228 17:55:50.824796 2733998976 net.cpp:131] Top shape: 100 10 (1000)
I1228 17:55:50.824800 2733998976 net.cpp:139] Memory required for data: 8078800
I1228 17:55:50.824806 2733998976 layer_factory.hpp:77] Creating layer ip2_ip2_0_split
I1228 17:55:50.824812 2733998976 net.cpp:86] Creating Layer ip2_ip2_0_split
I1228 17:55:50.824816 2733998976 net.cpp:408] ip2_ip2_0_split <- ip2
I1228 17:55:50.824821 2733998976 net.cpp:382] ip2_ip2_0_split -> ip2_ip2_0_split_0
I1228 17:55:50.824827 2733998976 net.cpp:382] ip2_ip2_0_split -> ip2_ip2_0_split_1
I1228 17:55:50.824833 2733998976 net.cpp:124] Setting up ip2_ip2_0_split
I1228 17:55:50.824837 2733998976 net.cpp:131] Top shape: 100 10 (1000)
I1228 17:55:50.824842 2733998976 net.cpp:131] Top shape: 100 10 (1000)
I1228 17:55:50.824865 2733998976 net.cpp:139] Memory required for data: 8086800
I1228 17:55:50.824869 2733998976 layer_factory.hpp:77] Creating layer accuracy
I1228 17:55:50.824875 2733998976 net.cpp:86] Creating Layer accuracy
I1228 17:55:50.824880 2733998976 net.cpp:408] accuracy <- ip2_ip2_0_split_0
I1228 17:55:50.824885 2733998976 net.cpp:408] accuracy <- label_mnist_1_split_0
I1228 17:55:50.824903 2733998976 net.cpp:382] accuracy -> accuracy
I1228 17:55:50.824909 2733998976 net.cpp:124] Setting up accuracy
I1228 17:55:50.824913 2733998976 net.cpp:131] Top shape: (1)
I1228 17:55:50.824918 2733998976 net.cpp:139] Memory required for data: 8086804
I1228 17:55:50.824923 2733998976 layer_factory.hpp:77] Creating layer loss
I1228 17:55:50.824928 2733998976 net.cpp:86] Creating Layer loss
I1228 17:55:50.824931 2733998976 net.cpp:408] loss <- ip2_ip2_0_split_1
I1228 17:55:50.824936 2733998976 net.cpp:408] loss <- label_mnist_1_split_1
I1228 17:55:50.824941 2733998976 net.cpp:382] loss -> loss
I1228 17:55:50.824947 2733998976 layer_factory.hpp:77] Creating layer loss
I1228 17:55:50.824957 2733998976 net.cpp:124] Setting up loss
I1228 17:55:50.824962 2733998976 net.cpp:131] Top shape: (1)
I1228 17:55:50.824966 2733998976 net.cpp:134]     with loss weight 1
I1228 17:55:50.824973 2733998976 net.cpp:139] Memory required for data: 8086808
I1228 17:55:50.824977 2733998976 net.cpp:200] loss needs backward computation.
I1228 17:55:50.824982 2733998976 net.cpp:202] accuracy does not need backward computation.
I1228 17:55:50.824986 2733998976 net.cpp:200] ip2_ip2_0_split needs backward computation.
I1228 17:55:50.824990 2733998976 net.cpp:200] ip2 needs backward computation.
I1228 17:55:50.824995 2733998976 net.cpp:200] relu1 needs backward computation.
I1228 17:55:50.825000 2733998976 net.cpp:200] ip1 needs backward computation.
I1228 17:55:50.825003 2733998976 net.cpp:200] pool2 needs backward computation.
I1228 17:55:50.825008 2733998976 net.cpp:200] conv2 needs backward computation.
I1228 17:55:50.825012 2733998976 net.cpp:200] pool1 needs backward computation.
I1228 17:55:50.825016 2733998976 net.cpp:200] conv1 needs backward computation.
I1228 17:55:50.825021 2733998976 net.cpp:202] label_mnist_1_split does not need backward computation.
I1228 17:55:50.825026 2733998976 net.cpp:202] mnist does not need backward computation.
I1228 17:55:50.825029 2733998976 net.cpp:244] This network produces output accuracy
I1228 17:55:50.825034 2733998976 net.cpp:244] This network produces output loss
I1228 17:55:50.825042 2733998976 net.cpp:257] Network initialization done.
I1228 17:55:50.825074 2733998976 solver.cpp:57] Solver scaffolding done.
I1228 17:55:50.826243 2733998976 solver.cpp:347] Iteration 0, Testing net (#0)
I1228 17:55:53.022482 210780160 data_layer.cpp:73] Restarting data prefetching from start.
I1228 17:55:53.110131 2733998976 solver.cpp:414]     Test net output #0: accuracy = 0.0886
I1228 17:55:53.110157 2733998976 solver.cpp:414]     Test net output #1: loss = 2.39438 (* 1 = 2.39438 loss)
I1228 17:55:53.144232 2733998976 solver.cpp:239] Iteration 0 (0 iter/s, 2.319s/100 iters), loss = 2.39822
I1228 17:55:53.144258 2733998976 solver.cpp:258]     Train net output #0: loss = 2.39822 (* 1 = 2.39822 loss)
I1228 17:55:53.144269 2733998976 sgd_solver.cpp:112] Iteration 0, lr = 0.01
I1228 17:55:55.400316 210780160 data_layer.cpp:73] Restarting data prefetching from start.
I1228 17:55:58.461671 2733998976 solver.cpp:239] Iteration 100 (18.8076 iter/s, 5.317s/100 iters), loss = 0.199418
I1228 17:55:58.461697 2733998976 solver.cpp:258]     Train net output #0: loss = 0.199418 (* 1 = 0.199418 loss)
I1228 17:55:58.461704 2733998976 sgd_solver.cpp:112] Iteration 100, lr = 0.00992565
I1228 17:56:01.460424 2733998976 solver.cpp:239] Iteration 200 (33.3556 iter/s, 2.998s/100 iters), loss = 0.130344
I1228 17:56:01.460450 2733998976 solver.cpp:258]     Train net output #0: loss = 0.130344 (* 1 = 0.130344 loss)
I1228 17:56:01.460458 2733998976 sgd_solver.cpp:112] Iteration 200, lr = 0.00985258
I1228 17:56:04.451843 2733998976 solver.cpp:239] Iteration 300 (33.4336 iter/s, 2.991s/100 iters), loss = 0.196874
I1228 17:56:04.451869 2733998976 solver.cpp:258]     Train net output #0: loss = 0.196874 (* 1 = 0.196874 loss)
I1228 17:56:04.451877 2733998976 sgd_solver.cpp:112] Iteration 300, lr = 0.00978075
I1228 17:56:07.440696 2733998976 solver.cpp:239] Iteration 400 (33.4672 iter/s, 2.988s/100 iters), loss = 0.0869522
I1228 17:56:07.440726 2733998976 solver.cpp:258]     Train net output #0: loss = 0.0869522 (* 1 = 0.0869522 loss)
I1228 17:56:07.440733 2733998976 sgd_solver.cpp:112] Iteration 400, lr = 0.00971013
I1228 17:56:10.421128 2733998976 solver.cpp:347] Iteration 500, Testing net (#0)
I1228 17:56:12.586988 210780160 data_layer.cpp:73] Restarting data prefetching from start.
I1228 17:56:12.675629 2733998976 solver.cpp:414]     Test net output #0: accuracy = 0.972
I1228 17:56:12.675654 2733998976 solver.cpp:414]     Test net output #1: loss = 0.086382 (* 1 = 0.086382 loss)
I1228 17:56:12.704449 2733998976 solver.cpp:239] Iteration 500 (19.0006 iter/s, 5.263s/100 iters), loss = 0.0954096
I1228 17:56:12.704476 2733998976 solver.cpp:258]     Train net output #0: loss = 0.0954096 (* 1 = 0.0954096 loss)
I1228 17:56:12.704483 2733998976 sgd_solver.cpp:112] Iteration 500, lr = 0.00964069
I1228 17:56:14.965848 210780160 data_layer.cpp:73] Restarting data prefetching from start.
I1228 17:56:18.031318 2733998976 solver.cpp:239] Iteration 600 (18.7758 iter/s, 5.326s/100 iters), loss = 0.0956796
I1228 17:56:18.031345 2733998976 solver.cpp:258]     Train net output #0: loss = 0.0956796 (* 1 = 0.0956796 loss)
I1228 17:56:18.031353 2733998976 sgd_solver.cpp:112] Iteration 600, lr = 0.0095724
I1228 17:56:21.054196 2733998976 solver.cpp:239] Iteration 700 (33.0907 iter/s, 3.022s/100 iters), loss = 0.119464
I1228 17:56:21.054222 2733998976 solver.cpp:258]     Train net output #0: loss = 0.119464 (* 1 = 0.119464 loss)
I1228 17:56:21.054229 2733998976 sgd_solver.cpp:112] Iteration 700, lr = 0.00950522
I1228 17:56:24.041611 2733998976 solver.cpp:239] Iteration 800 (33.4784 iter/s, 2.987s/100 iters), loss = 0.212201
I1228 17:56:24.041638 2733998976 solver.cpp:258]     Train net output #0: loss = 0.212201 (* 1 = 0.212201 loss)
I1228 17:56:24.041646 2733998976 sgd_solver.cpp:112] Iteration 800, lr = 0.00943913
I1228 17:56:27.032083 2733998976 solver.cpp:239] Iteration 900 (33.4448 iter/s, 2.99s/100 iters), loss = 0.154787
I1228 17:56:27.032121 2733998976 solver.cpp:258]     Train net output #0: loss = 0.154787 (* 1 = 0.154787 loss)
I1228 17:56:27.032128 2733998976 sgd_solver.cpp:112] Iteration 900, lr = 0.00937411
I1228 17:56:28.022493 210243584 data_layer.cpp:73] Restarting data prefetching from start.
I1228 17:56:30.003214 2733998976 solver.cpp:347] Iteration 1000, Testing net (#0)
I1228 17:56:32.152966 210780160 data_layer.cpp:73] Restarting data prefetching from start.
I1228 17:56:32.241582 2733998976 solver.cpp:414]     Test net output #0: accuracy = 0.9823
I1228 17:56:32.241607 2733998976 solver.cpp:414]     Test net output #1: loss = 0.0546377 (* 1 = 0.0546377 loss)
I1228 17:56:32.270887 2733998976 solver.cpp:239] Iteration 1000 (19.0913 iter/s, 5.238s/100 iters), loss = 0.053198
I1228 17:56:32.270915 2733998976 solver.cpp:258]     Train net output #0: loss = 0.053198 (* 1 = 0.053198 loss)
I1228 17:56:32.270922 2733998976 sgd_solver.cpp:112] Iteration 1000, lr = 0.00931012
I1228 17:56:34.504310 210780160 data_layer.cpp:73] Restarting data prefetching from start.
I1228 17:56:37.551901 2733998976 solver.cpp:239] Iteration 1100 (18.9394 iter/s, 5.28s/100 iters), loss = 0.00487837
I1228 17:56:37.551930 2733998976 solver.cpp:258]     Train net output #0: loss = 0.00487837 (* 1 = 0.00487837 loss)
I1228 17:56:37.551937 2733998976 sgd_solver.cpp:112] Iteration 1100, lr = 0.00924715
I1228 17:56:40.550760 2733998976 solver.cpp:239] Iteration 1200 (33.3556 iter/s, 2.998s/100 iters), loss = 0.00953127
I1228 17:56:40.550787 2733998976 solver.cpp:258]     Train net output #0: loss = 0.00953127 (* 1 = 0.00953127 loss)
I1228 17:56:40.550796 2733998976 sgd_solver.cpp:112] Iteration 1200, lr = 0.00918515
I1228 17:56:43.530740 2733998976 solver.cpp:239] Iteration 1300 (33.5683 iter/s, 2.979s/100 iters), loss = 0.0194495
I1228 17:56:43.530769 2733998976 solver.cpp:258]     Train net output #0: loss = 0.0194495 (* 1 = 0.0194495 loss)
I1228 17:56:43.530776 2733998976 sgd_solver.cpp:112] Iteration 1300, lr = 0.00912412
I1228 17:56:46.519441 2733998976 solver.cpp:239] Iteration 1400 (33.4672 iter/s, 2.988s/100 iters), loss = 0.00659618
I1228 17:56:46.519477 2733998976 solver.cpp:258]     Train net output #0: loss = 0.00659618 (* 1 = 0.00659618 loss)
I1228 17:56:46.519493 2733998976 sgd_solver.cpp:112] Iteration 1400, lr = 0.00906403
I1228 17:56:49.531745 2733998976 solver.cpp:347] Iteration 1500, Testing net (#0)
I1228 17:56:51.735550 210780160 data_layer.cpp:73] Restarting data prefetching from start.
I1228 17:56:51.827378 2733998976 solver.cpp:414]     Test net output #0: accuracy = 0.9852
I1228 17:56:51.827402 2733998976 solver.cpp:414]     Test net output #1: loss = 0.046295 (* 1 = 0.046295 loss)
I1228 17:56:51.857616 2733998976 solver.cpp:239] Iteration 1500 (18.7336 iter/s, 5.338s/100 iters), loss = 0.0957371
I1228 17:56:51.857666 2733998976 solver.cpp:258]     Train net output #0: loss = 0.0957371 (* 1 = 0.0957371 loss)
I1228 17:56:51.857674 2733998976 sgd_solver.cpp:112] Iteration 1500, lr = 0.00900485
I1228 17:56:54.138475 210780160 data_layer.cpp:73] Restarting data prefetching from start.
I1228 17:56:57.181025 2733998976 solver.cpp:239] Iteration 1600 (18.7864 iter/s, 5.323s/100 iters), loss = 0.129418
I1228 17:56:57.181051 2733998976 solver.cpp:258]     Train net output #0: loss = 0.129418 (* 1 = 0.129418 loss)
I1228 17:56:57.181057 2733998976 sgd_solver.cpp:112] Iteration 1600, lr = 0.00894657
I1228 17:57:00.391135 2733998976 solver.cpp:239] Iteration 1700 (31.1526 iter/s, 3.21s/100 iters), loss = 0.0193124
I1228 17:57:00.391162 2733998976 solver.cpp:258]     Train net output #0: loss = 0.0193124 (* 1 = 0.0193124 loss)
I1228 17:57:00.391170 2733998976 sgd_solver.cpp:112] Iteration 1700, lr = 0.00888916
I1228 17:57:03.499020 2733998976 solver.cpp:239] Iteration 1800 (32.1854 iter/s, 3.107s/100 iters), loss = 0.0188734
I1228 17:57:03.499047 2733998976 solver.cpp:258]     Train net output #0: loss = 0.0188734 (* 1 = 0.0188734 loss)
I1228 17:57:03.499055 2733998976 sgd_solver.cpp:112] Iteration 1800, lr = 0.0088326
I1228 17:57:05.743053 210243584 data_layer.cpp:73] Restarting data prefetching from start.
I1228 17:57:06.658465 2733998976 solver.cpp:239] Iteration 1900 (31.6556 iter/s, 3.159s/100 iters), loss = 0.110986
I1228 17:57:06.658493 2733998976 solver.cpp:258]     Train net output #0: loss = 0.110986 (* 1 = 0.110986 loss)
I1228 17:57:06.658500 2733998976 sgd_solver.cpp:112] Iteration 1900, lr = 0.00877687
I1228 17:57:09.826277 2733998976 solver.cpp:347] Iteration 2000, Testing net (#0)
I1228 17:57:12.128520 210780160 data_layer.cpp:73] Restarting data prefetching from start.
I1228 17:57:12.220306 2733998976 solver.cpp:414]     Test net output #0: accuracy = 0.9862
I1228 17:57:12.220331 2733998976 solver.cpp:414]     Test net output #1: loss = 0.0407267 (* 1 = 0.0407267 loss)
I1228 17:57:12.252471 2733998976 solver.cpp:239] Iteration 2000 (17.8795 iter/s, 5.593s/100 iters), loss = 0.0132918
I1228 17:57:12.252497 2733998976 solver.cpp:258]     Train net output #0: loss = 0.0132918 (* 1 = 0.0132918 loss)
I1228 17:57:12.252504 2733998976 sgd_solver.cpp:112] Iteration 2000, lr = 0.00872196
I1228 17:57:14.704600 210780160 data_layer.cpp:73] Restarting data prefetching from start.
I1228 17:57:17.740623 2733998976 solver.cpp:239] Iteration 2100 (18.2216 iter/s, 5.488s/100 iters), loss = 0.0271667
I1228 17:57:17.740653 2733998976 solver.cpp:258]     Train net output #0: loss = 0.0271667 (* 1 = 0.0271667 loss)
I1228 17:57:17.740664 2733998976 sgd_solver.cpp:112] Iteration 2100, lr = 0.00866784
I1228 17:57:20.737411 2733998976 solver.cpp:239] Iteration 2200 (33.3778 iter/s, 2.996s/100 iters), loss = 0.0226718
I1228 17:57:20.737437 2733998976 solver.cpp:258]     Train net output #0: loss = 0.0226718 (* 1 = 0.0226718 loss)
I1228 17:57:20.737444 2733998976 sgd_solver.cpp:112] Iteration 2200, lr = 0.0086145
I1228 17:57:23.731586 2733998976 solver.cpp:239] Iteration 2300 (33.4001 iter/s, 2.994s/100 iters), loss = 0.105805
I1228 17:57:23.731614 2733998976 solver.cpp:258]     Train net output #0: loss = 0.105805 (* 1 = 0.105805 loss)
I1228 17:57:23.731621 2733998976 sgd_solver.cpp:112] Iteration 2300, lr = 0.00856192
I1228 17:57:26.741955 2733998976 solver.cpp:239] Iteration 2400 (33.2226 iter/s, 3.01s/100 iters), loss = 0.00785064
I1228 17:57:26.741982 2733998976 solver.cpp:258]     Train net output #0: loss = 0.00785064 (* 1 = 0.00785064 loss)
I1228 17:57:26.741991 2733998976 sgd_solver.cpp:112] Iteration 2400, lr = 0.00851008
I1228 17:57:29.723547 2733998976 solver.cpp:347] Iteration 2500, Testing net (#0)
I1228 17:57:31.921429 210780160 data_layer.cpp:73] Restarting data prefetching from start.
I1228 17:57:32.011425 2733998976 solver.cpp:414]     Test net output #0: accuracy = 0.9842
I1228 17:57:32.011451 2733998976 solver.cpp:414]     Test net output #1: loss = 0.0485859 (* 1 = 0.0485859 loss)
I1228 17:57:32.044616 2733998976 solver.cpp:239] Iteration 2500 (18.8608 iter/s, 5.302s/100 iters), loss = 0.0214106
I1228 17:57:32.044641 2733998976 solver.cpp:258]     Train net output #0: loss = 0.0214106 (* 1 = 0.0214106 loss)
I1228 17:57:32.044648 2733998976 sgd_solver.cpp:112] Iteration 2500, lr = 0.00845897
I1228 17:57:34.297142 210780160 data_layer.cpp:73] Restarting data prefetching from start.
I1228 17:57:37.362659 2733998976 solver.cpp:239] Iteration 2600 (18.8041 iter/s, 5.318s/100 iters), loss = 0.0625504
I1228 17:57:37.362687 2733998976 solver.cpp:258]     Train net output #0: loss = 0.0625504 (* 1 = 0.0625504 loss)
I1228 17:57:37.362694 2733998976 sgd_solver.cpp:112] Iteration 2600, lr = 0.00840857
I1228 17:57:40.430872 2733998976 solver.cpp:239] Iteration 2700 (32.5945 iter/s, 3.068s/100 iters), loss = 0.0545578
I1228 17:57:40.430898 2733998976 solver.cpp:258]     Train net output #0: loss = 0.0545578 (* 1 = 0.0545578 loss)
I1228 17:57:40.430907 2733998976 sgd_solver.cpp:112] Iteration 2700, lr = 0.00835886
I1228 17:57:43.639781 2733998976 solver.cpp:239] Iteration 2800 (31.1721 iter/s, 3.208s/100 iters), loss = 0.00138936
I1228 17:57:43.639809 2733998976 solver.cpp:258]     Train net output #0: loss = 0.00138936 (* 1 = 0.00138936 loss)
I1228 17:57:43.639816 2733998976 sgd_solver.cpp:112] Iteration 2800, lr = 0.00830984
I1228 17:57:43.869724 210243584 data_layer.cpp:73] Restarting data prefetching from start.
I1228 17:57:46.593061 2733998976 solver.cpp:239] Iteration 2900 (33.8639 iter/s, 2.953s/100 iters), loss = 0.0231533
I1228 17:57:46.593088 2733998976 solver.cpp:258]     Train net output #0: loss = 0.0231533 (* 1 = 0.0231533 loss)
I1228 17:57:46.593096 2733998976 sgd_solver.cpp:112] Iteration 2900, lr = 0.00826148
I1228 17:57:49.484295 2733998976 solver.cpp:347] Iteration 3000, Testing net (#0)
I1228 17:57:51.664630 210780160 data_layer.cpp:73] Restarting data prefetching from start.
I1228 17:57:51.754335 2733998976 solver.cpp:414]     Test net output #0: accuracy = 0.9871
I1228 17:57:51.754359 2733998976 solver.cpp:414]     Test net output #1: loss = 0.0406435 (* 1 = 0.0406435 loss)
I1228 17:57:51.781070 2733998976 solver.cpp:239] Iteration 3000 (19.279 iter/s, 5.187s/100 iters), loss = 0.0197396
I1228 17:57:51.781095 2733998976 solver.cpp:258]     Train net output #0: loss = 0.0197396 (* 1 = 0.0197396 loss)
I1228 17:57:51.781102 2733998976 sgd_solver.cpp:112] Iteration 3000, lr = 0.00821377
I1228 17:57:54.068579 210780160 data_layer.cpp:73] Restarting data prefetching from start.
I1228 17:57:57.061866 2733998976 solver.cpp:239] Iteration 3100 (18.9394 iter/s, 5.28s/100 iters), loss = 0.0151984
I1228 17:57:57.061902 2733998976 solver.cpp:258]     Train net output #0: loss = 0.0151984 (* 1 = 0.0151984 loss)
I1228 17:57:57.061918 2733998976 sgd_solver.cpp:112] Iteration 3100, lr = 0.0081667
I1228 17:58:00.250874 2733998976 solver.cpp:239] Iteration 3200 (31.3676 iter/s, 3.188s/100 iters), loss = 0.00594261
I1228 17:58:00.250900 2733998976 solver.cpp:258]     Train net output #0: loss = 0.00594261 (* 1 = 0.00594261 loss)
I1228 17:58:00.250908 2733998976 sgd_solver.cpp:112] Iteration 3200, lr = 0.00812025
I1228 17:58:03.203016 2733998976 solver.cpp:239] Iteration 3300 (33.8753 iter/s, 2.952s/100 iters), loss = 0.0419102
I1228 17:58:03.203045 2733998976 solver.cpp:258]     Train net output #0: loss = 0.0419102 (* 1 = 0.0419102 loss)
I1228 17:58:03.203053 2733998976 sgd_solver.cpp:112] Iteration 3300, lr = 0.00807442
I1228 17:58:06.174073 2733998976 solver.cpp:239] Iteration 3400 (33.6587 iter/s, 2.971s/100 iters), loss = 0.00628851
I1228 17:58:06.174099 2733998976 solver.cpp:258]     Train net output #0: loss = 0.00628851 (* 1 = 0.00628851 loss)
I1228 17:58:06.174108 2733998976 sgd_solver.cpp:112] Iteration 3400, lr = 0.00802918
I1228 17:58:09.104461 2733998976 solver.cpp:347] Iteration 3500, Testing net (#0)
I1228 17:58:11.270108 210780160 data_layer.cpp:73] Restarting data prefetching from start.
I1228 17:58:11.359313 2733998976 solver.cpp:414]     Test net output #0: accuracy = 0.9867
I1228 17:58:11.359341 2733998976 solver.cpp:414]     Test net output #1: loss = 0.0408062 (* 1 = 0.0408062 loss)
I1228 17:58:11.386420 2733998976 solver.cpp:239] Iteration 3500 (19.1865 iter/s, 5.212s/100 iters), loss = 0.00765101
I1228 17:58:11.386447 2733998976 solver.cpp:258]     Train net output #0: loss = 0.00765101 (* 1 = 0.00765101 loss)
I1228 17:58:11.386454 2733998976 sgd_solver.cpp:112] Iteration 3500, lr = 0.00798454
I1228 17:58:13.678453 210780160 data_layer.cpp:73] Restarting data prefetching from start.
I1228 17:58:16.771239 2733998976 solver.cpp:239] Iteration 3600 (18.5736 iter/s, 5.384s/100 iters), loss = 0.0324877
I1228 17:58:16.771266 2733998976 solver.cpp:258]     Train net output #0: loss = 0.0324877 (* 1 = 0.0324877 loss)
I1228 17:58:16.771273 2733998976 sgd_solver.cpp:112] Iteration 3600, lr = 0.00794046
I1228 17:58:19.771371 2733998976 solver.cpp:239] Iteration 3700 (33.3333 iter/s, 3s/100 iters), loss = 0.0184573
I1228 17:58:19.771399 2733998976 solver.cpp:258]     Train net output #0: loss = 0.0184573 (* 1 = 0.0184573 loss)
I1228 17:58:19.771406 2733998976 sgd_solver.cpp:112] Iteration 3700, lr = 0.00789695
I1228 17:58:21.106961 210243584 data_layer.cpp:73] Restarting data prefetching from start.
I1228 17:58:22.749763 2733998976 solver.cpp:239] Iteration 3800 (33.5796 iter/s, 2.978s/100 iters), loss = 0.0156665
I1228 17:58:22.749791 2733998976 solver.cpp:258]     Train net output #0: loss = 0.0156665 (* 1 = 0.0156665 loss)
I1228 17:58:22.749799 2733998976 sgd_solver.cpp:112] Iteration 3800, lr = 0.007854
I1228 17:58:25.847599 2733998976 solver.cpp:239] Iteration 3900 (32.2893 iter/s, 3.097s/100 iters), loss = 0.0366668
I1228 17:58:25.847625 2733998976 solver.cpp:258]     Train net output #0: loss = 0.0366668 (* 1 = 0.0366668 loss)
I1228 17:58:25.847632 2733998976 sgd_solver.cpp:112] Iteration 3900, lr = 0.00781158
I1228 17:58:28.841053 2733998976 solver.cpp:347] Iteration 4000, Testing net (#0)
I1228 17:58:31.186780 210780160 data_layer.cpp:73] Restarting data prefetching from start.
I1228 17:58:31.278307 2733998976 solver.cpp:414]     Test net output #0: accuracy = 0.9899
I1228 17:58:31.278332 2733998976 solver.cpp:414]     Test net output #1: loss = 0.0315919 (* 1 = 0.0315919 loss)
I1228 17:58:31.307127 2733998976 solver.cpp:239] Iteration 4000 (18.3184 iter/s, 5.459s/100 iters), loss = 0.0191784
I1228 17:58:31.307155 2733998976 solver.cpp:258]     Train net output #0: loss = 0.0191784 (* 1 = 0.0191784 loss)
I1228 17:58:31.307163 2733998976 sgd_solver.cpp:112] Iteration 4000, lr = 0.00776969
I1228 17:58:33.665038 210780160 data_layer.cpp:73] Restarting data prefetching from start.
I1228 17:58:37.257160 2733998976 solver.cpp:239] Iteration 4100 (16.8067 iter/s, 5.95s/100 iters), loss = 0.0143265
I1228 17:58:37.257194 2733998976 solver.cpp:258]     Train net output #0: loss = 0.0143265 (* 1 = 0.0143265 loss)
I1228 17:58:37.257203 2733998976 sgd_solver.cpp:112] Iteration 4100, lr = 0.00772833
I1228 17:58:40.578187 2733998976 solver.cpp:239] Iteration 4200 (30.1205 iter/s, 3.32s/100 iters), loss = 0.0118002
I1228 17:58:40.578214 2733998976 solver.cpp:258]     Train net output #0: loss = 0.0118002 (* 1 = 0.0118002 loss)
I1228 17:58:40.578222 2733998976 sgd_solver.cpp:112] Iteration 4200, lr = 0.00768748
I1228 17:58:43.781576 2733998976 solver.cpp:239] Iteration 4300 (31.2207 iter/s, 3.203s/100 iters), loss = 0.0523296
I1228 17:58:43.781610 2733998976 solver.cpp:258]     Train net output #0: loss = 0.0523296 (* 1 = 0.0523296 loss)
I1228 17:58:43.781620 2733998976 sgd_solver.cpp:112] Iteration 4300, lr = 0.00764712
I1228 17:58:47.871171 2733998976 solver.cpp:239] Iteration 4400 (24.4559 iter/s, 4.089s/100 iters), loss = 0.0258025
I1228 17:58:47.871197 2733998976 solver.cpp:258]     Train net output #0: loss = 0.0258025 (* 1 = 0.0258025 loss)
I1228 17:58:47.871206 2733998976 sgd_solver.cpp:112] Iteration 4400, lr = 0.00760726
I1228 17:58:51.312611 2733998976 solver.cpp:347] Iteration 4500, Testing net (#0)
I1228 17:58:53.480944 210780160 data_layer.cpp:73] Restarting data prefetching from start.
I1228 17:58:53.568606 2733998976 solver.cpp:414]     Test net output #0: accuracy = 0.9896
I1228 17:58:53.568632 2733998976 solver.cpp:414]     Test net output #1: loss = 0.0343229 (* 1 = 0.0343229 loss)
I1228 17:58:53.596662 2733998976 solver.cpp:239] Iteration 4500 (17.4672 iter/s, 5.725s/100 iters), loss = 0.00411332
I1228 17:58:53.596688 2733998976 solver.cpp:258]     Train net output #0: loss = 0.00411332 (* 1 = 0.00411332 loss)
I1228 17:58:53.596695 2733998976 sgd_solver.cpp:112] Iteration 4500, lr = 0.00756788
I1228 17:58:55.893582 210780160 data_layer.cpp:73] Restarting data prefetching from start.
I1228 17:58:59.003871 2733998976 solver.cpp:239] Iteration 4600 (18.4945 iter/s, 5.407s/100 iters), loss = 0.0100597
I1228 17:58:59.003899 2733998976 solver.cpp:258]     Train net output #0: loss = 0.0100597 (* 1 = 0.0100597 loss)
I1228 17:58:59.003907 2733998976 sgd_solver.cpp:112] Iteration 4600, lr = 0.00752897
I1228 17:59:01.549305 210243584 data_layer.cpp:73] Restarting data prefetching from start.
I1228 17:59:02.282960 2733998976 solver.cpp:239] Iteration 4700 (30.4971 iter/s, 3.279s/100 iters), loss = 0.00317848
I1228 17:59:02.283004 2733998976 solver.cpp:258]     Train net output #0: loss = 0.00317848 (* 1 = 0.00317848 loss)
I1228 17:59:02.283015 2733998976 sgd_solver.cpp:112] Iteration 4700, lr = 0.00749052
I1228 17:59:05.461637 2733998976 solver.cpp:239] Iteration 4800 (31.4663 iter/s, 3.178s/100 iters), loss = 0.015481
I1228 17:59:05.461663 2733998976 solver.cpp:258]     Train net output #0: loss = 0.015481 (* 1 = 0.015481 loss)
I1228 17:59:05.461670 2733998976 sgd_solver.cpp:112] Iteration 4800, lr = 0.00745253
I1228 17:59:08.421372 2733998976 solver.cpp:239] Iteration 4900 (33.7952 iter/s, 2.959s/100 iters), loss = 0.0076528
I1228 17:59:08.421401 2733998976 solver.cpp:258]     Train net output #0: loss = 0.0076528 (* 1 = 0.0076528 loss)
I1228 17:59:08.421408 2733998976 sgd_solver.cpp:112] Iteration 4900, lr = 0.00741498
I1228 17:59:11.388818 2733998976 solver.cpp:464] Snapshotting to binary proto file /Users/taily/pycharmproj/lenet_solver_test_show_iter_5000.caffemodel
I1228 17:59:11.393275 2733998976 sgd_solver.cpp:284] Snapshotting solver state to binary proto file /Users/taily/pycharmproj/lenet_solver_test_show_iter_5000.solverstate
I1228 17:59:11.394623 2733998976 solver.cpp:347] Iteration 5000, Testing net (#0)
I1228 17:59:13.615495 210780160 data_layer.cpp:73] Restarting data prefetching from start.
I1228 17:59:13.703135 2733998976 solver.cpp:414]     Test net output #0: accuracy = 0.9897
I1228 17:59:13.703161 2733998976 solver.cpp:414]     Test net output #1: loss = 0.0317835 (* 1 = 0.0317835 loss)
I1228 17:59:13.731845 2733998976 solver.cpp:239] Iteration 5000 (18.8324 iter/s, 5.31s/100 iters), loss = 0.0358031
I1228 17:59:13.731875 2733998976 solver.cpp:258]     Train net output #0: loss = 0.0358031 (* 1 = 0.0358031 loss)
I1228 17:59:13.731883 2733998976 sgd_solver.cpp:112] Iteration 5000, lr = 0.00737788
I1228 17:59:16.064182 210780160 data_layer.cpp:73] Restarting data prefetching from start.
I1228 17:59:19.139829 2733998976 solver.cpp:239] Iteration 5100 (18.4945 iter/s, 5.407s/100 iters), loss = 0.017382
I1228 17:59:19.139858 2733998976 solver.cpp:258]     Train net output #0: loss = 0.017382 (* 1 = 0.017382 loss)
I1228 17:59:19.139865 2733998976 sgd_solver.cpp:112] Iteration 5100, lr = 0.0073412
I1228 17:59:23.241438 2733998976 solver.cpp:239] Iteration 5200 (24.3843 iter/s, 4.101s/100 iters), loss = 0.0074814
I1228 17:59:23.241466 2733998976 solver.cpp:258]     Train net output #0: loss = 0.0074814 (* 1 = 0.0074814 loss)
I1228 17:59:23.241473 2733998976 sgd_solver.cpp:112] Iteration 5200, lr = 0.00730495
I1228 17:59:26.214295 2733998976 solver.cpp:239] Iteration 5300 (33.6474 iter/s, 2.972s/100 iters), loss = 0.00124816
I1228 17:59:26.214323 2733998976 solver.cpp:258]     Train net output #0: loss = 0.00124816 (* 1 = 0.00124816 loss)
I1228 17:59:26.214330 2733998976 sgd_solver.cpp:112] Iteration 5300, lr = 0.00726911
I1228 17:59:29.230228 2733998976 solver.cpp:239] Iteration 5400 (33.1675 iter/s, 3.015s/100 iters), loss = 0.00897871
I1228 17:59:29.230257 2733998976 solver.cpp:258]     Train net output #0: loss = 0.00897871 (* 1 = 0.00897871 loss)
I1228 17:59:29.230263 2733998976 sgd_solver.cpp:112] Iteration 5400, lr = 0.00723368
I1228 17:59:32.210072 2733998976 solver.cpp:347] Iteration 5500, Testing net (#0)
I1228 17:59:34.675516 210780160 data_layer.cpp:73] Restarting data prefetching from start.
I1228 17:59:34.809540 2733998976 solver.cpp:414]     Test net output #0: accuracy = 0.988
I1228 17:59:34.809629 2733998976 solver.cpp:414]     Test net output #1: loss = 0.0347851 (* 1 = 0.0347851 loss)
I1228 17:59:34.855731 2733998976 solver.cpp:239] Iteration 5500 (17.7778 iter/s, 5.625s/100 iters), loss = 0.00980196
I1228 17:59:34.855765 2733998976 solver.cpp:258]     Train net output #0: loss = 0.00980196 (* 1 = 0.00980196 loss)
I1228 17:59:34.855774 2733998976 sgd_solver.cpp:112] Iteration 5500, lr = 0.00719865
I1228 17:59:37.256271 210780160 data_layer.cpp:73] Restarting data prefetching from start.
I1228 17:59:40.275497 2733998976 solver.cpp:239] Iteration 5600 (18.4536 iter/s, 5.419s/100 iters), loss = 0.000928452
I1228 17:59:40.275527 2733998976 solver.cpp:258]     Train net output #0: loss = 0.000928452 (* 1 = 0.000928452 loss)
I1228 17:59:40.275533 2733998976 sgd_solver.cpp:112] Iteration 5600, lr = 0.00716402
I1228 17:59:40.876044 210243584 data_layer.cpp:73] Restarting data prefetching from start.
I1228 17:59:43.253707 2733998976 solver.cpp:239] Iteration 5700 (33.5796 iter/s, 2.978s/100 iters), loss = 0.0043696
I1228 17:59:43.253737 2733998976 solver.cpp:258]     Train net output #0: loss = 0.0043696 (* 1 = 0.0043696 loss)
I1228 17:59:43.253746 2733998976 sgd_solver.cpp:112] Iteration 5700, lr = 0.00712977
I1228 17:59:46.250402 2733998976 solver.cpp:239] Iteration 5800 (33.3778 iter/s, 2.996s/100 iters), loss = 0.0301731
I1228 17:59:46.250432 2733998976 solver.cpp:258]     Train net output #0: loss = 0.0301731 (* 1 = 0.0301731 loss)
I1228 17:59:46.250439 2733998976 sgd_solver.cpp:112] Iteration 5800, lr = 0.0070959
I1228 17:59:49.302268 2733998976 solver.cpp:239] Iteration 5900 (32.7761 iter/s, 3.051s/100 iters), loss = 0.00737168
I1228 17:59:49.302371 2733998976 solver.cpp:258]     Train net output #0: loss = 0.00737168 (* 1 = 0.00737168 loss)
I1228 17:59:49.302392 2733998976 sgd_solver.cpp:112] Iteration 5900, lr = 0.0070624
I1228 17:59:52.800864 2733998976 solver.cpp:347] Iteration 6000, Testing net (#0)
I1228 17:59:55.019049 210780160 data_layer.cpp:73] Restarting data prefetching from start.
I1228 17:59:55.107859 2733998976 solver.cpp:414]     Test net output #0: accuracy = 0.9903
I1228 17:59:55.107934 2733998976 solver.cpp:414]     Test net output #1: loss = 0.0295323 (* 1 = 0.0295323 loss)
I1228 17:59:55.136704 2733998976 solver.cpp:239] Iteration 6000 (17.1409 iter/s, 5.834s/100 iters), loss = 0.00375669
I1228 17:59:55.136732 2733998976 solver.cpp:258]     Train net output #0: loss = 0.00375669 (* 1 = 0.00375669 loss)
I1228 17:59:55.136740 2733998976 sgd_solver.cpp:112] Iteration 6000, lr = 0.00702927
I1228 17:59:57.629866 210780160 data_layer.cpp:73] Restarting data prefetching from start.
I1228 18:00:00.632270 2733998976 solver.cpp:239] Iteration 6100 (18.1984 iter/s, 5.495s/100 iters), loss = 0.00206975
I1228 18:00:00.632297 2733998976 solver.cpp:258]     Train net output #0: loss = 0.00206975 (* 1 = 0.00206975 loss)
I1228 18:00:00.632303 2733998976 sgd_solver.cpp:112] Iteration 6100, lr = 0.0069965
I1228 18:00:03.699493 2733998976 solver.cpp:239] Iteration 6200 (32.6052 iter/s, 3.067s/100 iters), loss = 0.0100554
I1228 18:00:03.699522 2733998976 solver.cpp:258]     Train net output #0: loss = 0.0100554 (* 1 = 0.0100554 loss)
I1228 18:00:03.699530 2733998976 sgd_solver.cpp:112] Iteration 6200, lr = 0.00696408
I1228 18:00:06.682682 2733998976 solver.cpp:239] Iteration 6300 (33.5233 iter/s, 2.983s/100 iters), loss = 0.00990718
I1228 18:00:06.682710 2733998976 solver.cpp:258]     Train net output #0: loss = 0.00990718 (* 1 = 0.00990718 loss)
I1228 18:00:06.682718 2733998976 sgd_solver.cpp:112] Iteration 6300, lr = 0.00693201
I1228 18:00:10.203083 2733998976 solver.cpp:239] Iteration 6400 (28.4091 iter/s, 3.52s/100 iters), loss = 0.00909858
I1228 18:00:10.203112 2733998976 solver.cpp:258]     Train net output #0: loss = 0.00909858 (* 1 = 0.00909858 loss)
I1228 18:00:10.203119 2733998976 sgd_solver.cpp:112] Iteration 6400, lr = 0.00690029
I1228 18:00:13.111776 2733998976 solver.cpp:347] Iteration 6500, Testing net (#0)
I1228 18:00:15.288513 210780160 data_layer.cpp:73] Restarting data prefetching from start.
I1228 18:00:15.375962 2733998976 solver.cpp:414]     Test net output #0: accuracy = 0.9894
I1228 18:00:15.375988 2733998976 solver.cpp:414]     Test net output #1: loss = 0.0316598 (* 1 = 0.0316598 loss)
I1228 18:00:15.404659 2733998976 solver.cpp:239] Iteration 6500 (19.2271 iter/s, 5.201s/100 iters), loss = 0.0130249
I1228 18:00:15.404686 2733998976 solver.cpp:258]     Train net output #0: loss = 0.0130249 (* 1 = 0.0130249 loss)
I1228 18:00:15.404693 2733998976 sgd_solver.cpp:112] Iteration 6500, lr = 0.0068689
I1228 18:00:17.712810 210780160 data_layer.cpp:73] Restarting data prefetching from start.
I1228 18:00:19.522375 210243584 data_layer.cpp:73] Restarting data prefetching from start.
I1228 18:00:20.790387 2733998976 solver.cpp:239] Iteration 6600 (18.5701 iter/s, 5.385s/100 iters), loss = 0.0314408
I1228 18:00:20.790416 2733998976 solver.cpp:258]     Train net output #0: loss = 0.0314408 (* 1 = 0.0314408 loss)
I1228 18:00:20.790424 2733998976 sgd_solver.cpp:112] Iteration 6600, lr = 0.00683784
I1228 18:00:23.775645 2733998976 solver.cpp:239] Iteration 6700 (33.5008 iter/s, 2.985s/100 iters), loss = 0.00651566
I1228 18:00:23.775673 2733998976 solver.cpp:258]     Train net output #0: loss = 0.00651566 (* 1 = 0.00651566 loss)
I1228 18:00:23.775681 2733998976 sgd_solver.cpp:112] Iteration 6700, lr = 0.00680711
I1228 18:00:26.771030 2733998976 solver.cpp:239] Iteration 6800 (33.389 iter/s, 2.995s/100 iters), loss = 0.00297121
I1228 18:00:26.771059 2733998976 solver.cpp:258]     Train net output #0: loss = 0.00297121 (* 1 = 0.00297121 loss)
I1228 18:00:26.771066 2733998976 sgd_solver.cpp:112] Iteration 6800, lr = 0.0067767
I1228 18:00:29.782070 2733998976 solver.cpp:239] Iteration 6900 (33.2116 iter/s, 3.011s/100 iters), loss = 0.00802709
I1228 18:00:29.782099 2733998976 solver.cpp:258]     Train net output #0: loss = 0.00802709 (* 1 = 0.00802709 loss)
I1228 18:00:29.782107 2733998976 sgd_solver.cpp:112] Iteration 6900, lr = 0.0067466
I1228 18:00:32.752030 2733998976 solver.cpp:347] Iteration 7000, Testing net (#0)
I1228 18:00:34.922893 210780160 data_layer.cpp:73] Restarting data prefetching from start.
I1228 18:00:35.012979 2733998976 solver.cpp:414]     Test net output #0: accuracy = 0.9898
I1228 18:00:35.013005 2733998976 solver.cpp:414]     Test net output #1: loss = 0.0291224 (* 1 = 0.0291224 loss)
I1228 18:00:35.041568 2733998976 solver.cpp:239] Iteration 7000 (19.015 iter/s, 5.259s/100 iters), loss = 0.00646155
I1228 18:00:35.041594 2733998976 solver.cpp:258]     Train net output #0: loss = 0.00646155 (* 1 = 0.00646155 loss)
I1228 18:00:35.041602 2733998976 sgd_solver.cpp:112] Iteration 7000, lr = 0.00671681
I1228 18:00:37.286623 210780160 data_layer.cpp:73] Restarting data prefetching from start.
I1228 18:00:40.305764 2733998976 solver.cpp:239] Iteration 7100 (18.997 iter/s, 5.264s/100 iters), loss = 0.00939924
I1228 18:00:40.305794 2733998976 solver.cpp:258]     Train net output #0: loss = 0.00939924 (* 1 = 0.00939924 loss)
I1228 18:00:40.305802 2733998976 sgd_solver.cpp:112] Iteration 7100, lr = 0.00668733
I1228 18:00:43.588579 2733998976 solver.cpp:239] Iteration 7200 (30.4692 iter/s, 3.282s/100 iters), loss = 0.00617863
I1228 18:00:43.588609 2733998976 solver.cpp:258]     Train net output #0: loss = 0.00617863 (* 1 = 0.00617863 loss)
I1228 18:00:43.588618 2733998976 sgd_solver.cpp:112] Iteration 7200, lr = 0.00665815
I1228 18:00:46.674247 2733998976 solver.cpp:239] Iteration 7300 (32.4149 iter/s, 3.085s/100 iters), loss = 0.0201554
I1228 18:00:46.674273 2733998976 solver.cpp:258]     Train net output #0: loss = 0.0201554 (* 1 = 0.0201554 loss)
I1228 18:00:46.674281 2733998976 sgd_solver.cpp:112] Iteration 7300, lr = 0.00662927
I1228 18:00:49.679594 2733998976 solver.cpp:239] Iteration 7400 (33.2779 iter/s, 3.005s/100 iters), loss = 0.00633155
I1228 18:00:49.679622 2733998976 solver.cpp:258]     Train net output #0: loss = 0.00633155 (* 1 = 0.00633155 loss)
I1228 18:00:49.679630 2733998976 sgd_solver.cpp:112] Iteration 7400, lr = 0.00660067
I1228 18:00:52.749370 210243584 data_layer.cpp:73] Restarting data prefetching from start.
I1228 18:00:52.866267 2733998976 solver.cpp:347] Iteration 7500, Testing net (#0)
I1228 18:00:55.053150 210780160 data_layer.cpp:73] Restarting data prefetching from start.
I1228 18:00:55.141665 2733998976 solver.cpp:414]     Test net output #0: accuracy = 0.9894
I1228 18:00:55.141692 2733998976 solver.cpp:414]     Test net output #1: loss = 0.0324159 (* 1 = 0.0324159 loss)
I1228 18:00:55.171073 2733998976 solver.cpp:239] Iteration 7500 (18.2116 iter/s, 5.491s/100 iters), loss = 0.00143481
I1228 18:00:55.171102 2733998976 solver.cpp:258]     Train net output #0: loss = 0.00143481 (* 1 = 0.00143481 loss)
I1228 18:00:55.171108 2733998976 sgd_solver.cpp:112] Iteration 7500, lr = 0.00657236
I1228 18:00:57.504627 210780160 data_layer.cpp:73] Restarting data prefetching from start.
I1228 18:01:00.629341 2733998976 solver.cpp:239] Iteration 7600 (18.3217 iter/s, 5.458s/100 iters), loss = 0.00504313
I1228 18:01:00.629370 2733998976 solver.cpp:258]     Train net output #0: loss = 0.00504313 (* 1 = 0.00504313 loss)
I1228 18:01:00.629377 2733998976 sgd_solver.cpp:112] Iteration 7600, lr = 0.00654433
I1228 18:01:03.611757 2733998976 solver.cpp:239] Iteration 7700 (33.5345 iter/s, 2.982s/100 iters), loss = 0.0370426
I1228 18:01:03.611785 2733998976 solver.cpp:258]     Train net output #0: loss = 0.0370426 (* 1 = 0.0370426 loss)
I1228 18:01:03.611793 2733998976 sgd_solver.cpp:112] Iteration 7700, lr = 0.00651658
I1228 18:01:06.663183 2733998976 solver.cpp:239] Iteration 7800 (32.7761 iter/s, 3.051s/100 iters), loss = 0.00482317
I1228 18:01:06.663211 2733998976 solver.cpp:258]     Train net output #0: loss = 0.00482317 (* 1 = 0.00482317 loss)
I1228 18:01:06.663219 2733998976 sgd_solver.cpp:112] Iteration 7800, lr = 0.00648911
I1228 18:01:09.659461 2733998976 solver.cpp:239] Iteration 7900 (33.3778 iter/s, 2.996s/100 iters), loss = 0.00459809
I1228 18:01:09.659488 2733998976 solver.cpp:258]     Train net output #0: loss = 0.00459809 (* 1 = 0.00459809 loss)
I1228 18:01:09.659497 2733998976 sgd_solver.cpp:112] Iteration 7900, lr = 0.0064619
I1228 18:01:12.696130 2733998976 solver.cpp:347] Iteration 8000, Testing net (#0)
I1228 18:01:14.910470 210780160 data_layer.cpp:73] Restarting data prefetching from start.
I1228 18:01:14.999799 2733998976 solver.cpp:414]     Test net output #0: accuracy = 0.9903
I1228 18:01:14.999821 2733998976 solver.cpp:414]     Test net output #1: loss = 0.0296206 (* 1 = 0.0296206 loss)
I1228 18:01:15.030339 2733998976 solver.cpp:239] Iteration 8000 (18.622 iter/s, 5.37s/100 iters), loss = 0.00708911
I1228 18:01:15.030365 2733998976 solver.cpp:258]     Train net output #0: loss = 0.00708911 (* 1 = 0.00708911 loss)
I1228 18:01:15.030373 2733998976 sgd_solver.cpp:112] Iteration 8000, lr = 0.00643496
I1228 18:01:17.349162 210780160 data_layer.cpp:73] Restarting data prefetching from start.
I1228 18:01:20.411386 2733998976 solver.cpp:239] Iteration 8100 (18.5839 iter/s, 5.381s/100 iters), loss = 0.0157724
I1228 18:01:20.411415 2733998976 solver.cpp:258]     Train net output #0: loss = 0.0157724 (* 1 = 0.0157724 loss)
I1228 18:01:20.411423 2733998976 sgd_solver.cpp:112] Iteration 8100, lr = 0.00640827
I1228 18:01:23.374837 2733998976 solver.cpp:239] Iteration 8200 (33.7496 iter/s, 2.963s/100 iters), loss = 0.00885533
I1228 18:01:23.374866 2733998976 solver.cpp:258]     Train net output #0: loss = 0.00885533 (* 1 = 0.00885533 loss)
I1228 18:01:23.374873 2733998976 sgd_solver.cpp:112] Iteration 8200, lr = 0.00638185
I1228 18:01:26.310763 2733998976 solver.cpp:239] Iteration 8300 (34.0716 iter/s, 2.935s/100 iters), loss = 0.0312884
I1228 18:01:26.310792 2733998976 solver.cpp:258]     Train net output #0: loss = 0.0312884 (* 1 = 0.0312884 loss)
I1228 18:01:26.310801 2733998976 sgd_solver.cpp:112] Iteration 8300, lr = 0.00635568
I1228 18:01:29.262581 2733998976 solver.cpp:239] Iteration 8400 (33.8868 iter/s, 2.951s/100 iters), loss = 0.00718827
I1228 18:01:29.262609 2733998976 solver.cpp:258]     Train net output #0: loss = 0.00718827 (* 1 = 0.00718827 loss)
I1228 18:01:29.262615 2733998976 sgd_solver.cpp:112] Iteration 8400, lr = 0.00632975
I1228 18:01:30.248950 210243584 data_layer.cpp:73] Restarting data prefetching from start.
I1228 18:01:32.192843 2733998976 solver.cpp:347] Iteration 8500, Testing net (#0)
I1228 18:01:34.367149 210780160 data_layer.cpp:73] Restarting data prefetching from start.
I1228 18:01:34.458750 2733998976 solver.cpp:414]     Test net output #0: accuracy = 0.9908
I1228 18:01:34.458778 2733998976 solver.cpp:414]     Test net output #1: loss = 0.0294309 (* 1 = 0.0294309 loss)
I1228 18:01:34.487138 2733998976 solver.cpp:239] Iteration 8500 (19.1424 iter/s, 5.224s/100 iters), loss = 0.0072237
I1228 18:01:34.487166 2733998976 solver.cpp:258]     Train net output #0: loss = 0.0072237 (* 1 = 0.0072237 loss)
I1228 18:01:34.487175 2733998976 sgd_solver.cpp:112] Iteration 8500, lr = 0.00630407
I1228 18:01:36.752040 210780160 data_layer.cpp:73] Restarting data prefetching from start.
I1228 18:01:39.947311 2733998976 solver.cpp:239] Iteration 8600 (18.315 iter/s, 5.46s/100 iters), loss = 0.000905046
I1228 18:01:39.947340 2733998976 solver.cpp:258]     Train net output #0: loss = 0.000905046 (* 1 = 0.000905046 loss)
I1228 18:01:39.947347 2733998976 sgd_solver.cpp:112] Iteration 8600, lr = 0.00627864
I1228 18:01:42.910595 2733998976 solver.cpp:239] Iteration 8700 (33.7496 iter/s, 2.963s/100 iters), loss = 0.00216248
I1228 18:01:42.910624 2733998976 solver.cpp:258]     Train net output #0: loss = 0.00216248 (* 1 = 0.00216248 loss)
I1228 18:01:42.910630 2733998976 sgd_solver.cpp:112] Iteration 8700, lr = 0.00625344
I1228 18:01:46.290315 2733998976 solver.cpp:239] Iteration 8800 (29.5946 iter/s, 3.379s/100 iters), loss = 0.00139346
I1228 18:01:46.290344 2733998976 solver.cpp:258]     Train net output #0: loss = 0.00139346 (* 1 = 0.00139346 loss)
I1228 18:01:46.290354 2733998976 sgd_solver.cpp:112] Iteration 8800, lr = 0.00622847
I1228 18:01:49.550997 2733998976 solver.cpp:239] Iteration 8900 (30.6748 iter/s, 3.26s/100 iters), loss = 0.00137645
I1228 18:01:49.551023 2733998976 solver.cpp:258]     Train net output #0: loss = 0.00137645 (* 1 = 0.00137645 loss)
I1228 18:01:49.551031 2733998976 sgd_solver.cpp:112] Iteration 8900, lr = 0.00620374
I1228 18:01:52.662079 2733998976 solver.cpp:347] Iteration 9000, Testing net (#0)
I1228 18:01:54.892765 210780160 data_layer.cpp:73] Restarting data prefetching from start.
I1228 18:01:54.987535 2733998976 solver.cpp:414]     Test net output #0: accuracy = 0.9894
I1228 18:01:54.987560 2733998976 solver.cpp:414]     Test net output #1: loss = 0.0305774 (* 1 = 0.0305774 loss)
I1228 18:01:55.016005 2733998976 solver.cpp:239] Iteration 9000 (18.3016 iter/s, 5.464s/100 iters), loss = 0.0261417
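The same curves can also be recovered after the fact by parsing a saved copy of this solver log, instead of stepping the solver from Python as in the script above. Caffe ships helper scripts for this under tools/extra/ (parse_log.py and plot_training_log.py.example); the version below is only a minimal hand-rolled sketch. The regexes and the `parse_caffe_log` helper are illustrative assumptions, written against the solver.cpp/sgd_solver.cpp line formats shown in the log above:

```python
import re

# Assumed line formats, taken from the solver.cpp output above.
train_re = re.compile(r'Iteration (\d+) \(.*\), loss = ([\d.]+)')
test_iter_re = re.compile(r'Iteration (\d+), Testing net')
acc_re = re.compile(r'Test net output #0: accuracy = ([\d.]+)')
test_loss_re = re.compile(r'Test net output #1: loss = ([\d.]+)')

def parse_caffe_log(text):
    """Extract (iter, train_loss) tuples and (iter, accuracy, test_loss) tuples
    from a caffe training log."""
    train, test = [], []
    pending_iter = None   # iteration of the "Testing net" block currently open
    pending_acc = None
    for line in text.splitlines():
        m = test_iter_re.search(line)
        if m:
            pending_iter = int(m.group(1))
            continue
        m = acc_re.search(line)
        if m and pending_iter is not None:
            pending_acc = float(m.group(1))
            continue
        m = test_loss_re.search(line)
        if m and pending_iter is not None:
            test.append((pending_iter, pending_acc, float(m.group(1))))
            pending_iter = pending_acc = None
            continue
        m = train_re.search(line)
        if m:
            train.append((int(m.group(1)), float(m.group(2))))
    return train, test
```

The two lists can then be plotted with matplotlib exactly like the in-process version (train loss on one axis, accuracy on a second y-axis of the same figure).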
