Parsing Caffe's prototxt Model Files

Posted by 夏小悠


Preface

  Caffe, short for Convolutional Architecture for Fast Feature Embedding, is a widely used open-source deep learning framework; before TensorFlow appeared, it was the most-starred deep learning project on GitHub. It is maintained by the Berkeley Vision and Learning Center (BVLC), its core is written in C++, and it provides Python and MATLAB interfaces.
  This post mainly covers Caffe's model definition file (.prototxt) and how to convert Caffe's model weight file (.caffemodel) to a PyTorch .pth file.

1. Parsing example

2. Layer-by-layer analysis

# this is the model name
# cifar10_full.prototxt
name: "CIFAR10_full_deploy"
# N.B. input image must be in CIFAR-10 format
# as described at http://www.cs.toronto.edu/~kriz/cifar.html

# Input layer
layer {
  name: "data"        # this layer is named data
  type: "Input"       # layer type: Input
  top: "data"         # the output blob is also named data
  input_param {       # input parameters
    shape {           # the input shape is (1, 3, 32, 32)
      dim: 1
      dim: 3
      dim: 32
      dim: 32
    }
  }
}


# Convolution layer
layer {
  name: "conv1"           # this layer is named conv1
  type: "Convolution"     # layer type: Convolution
  bottom: "data"          # input comes from the data blob
  top: "conv1"            # output is written to the conv1 blob
  param {
    lr_mult: 1            # learning-rate multiplier for the weights
  }
  param {
    lr_mult: 2            # learning-rate multiplier for the bias
  }
  convolution_param {
    num_output: 32        # 32 output channels
    pad: 2                # padding 2, 5x5 kernel, stride 1
    kernel_size: 5
    stride: 1
  }
}
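
For a cross-check, here is the same layer in PyTorch; a minimal sketch (layer naming and weight initialization differ from Caffe):

import torch
import torch.nn as nn

# conv1 equivalent: 3 input channels, 32 output channels, 5x5 kernel,
# stride 1, padding 2. Output spatial size: (32 + 2*2 - 5) / 1 + 1 = 32.
conv1 = nn.Conv2d(in_channels=3, out_channels=32, kernel_size=5, stride=1, padding=2)

x = torch.randn(1, 3, 32, 32)  # matches the Input layer's (1, 3, 32, 32)
print(conv1(x).shape)          # torch.Size([1, 32, 32, 32])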


# Pooling layer
layer {
  name: "pool1"       # this layer is named pool1
  type: "Pooling"     # layer type: Pooling
  bottom: "conv1"     # input comes from the conv1 blob
  top: "pool1"        # output is written to the pool1 blob
  pooling_param {
    pool: MAX         # max pooling
    kernel_size: 3    # 3x3 kernel, stride 2
    stride: 2
  }
}
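
One subtlety for conversion: Caffe rounds pooled output sizes up (ceiling), while PyTorch rounds down by default, so ceil_mode=True is needed to reproduce Caffe's 32 -> 16 here; a minimal sketch:

import torch
import torch.nn as nn

# pool1 equivalent: 3x3 max pooling, stride 2.
# ceil_mode=True mimics Caffe's ceiling: ceil((32 - 3) / 2) + 1 = 16
# (PyTorch's default floor mode would give 15).
pool1 = nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True)

x = torch.randn(1, 32, 32, 32)  # conv1's output
print(pool1(x).shape)           # torch.Size([1, 32, 16, 16])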


# Activation layer
layer {
  name: "relu1"       # this layer is named relu1
  type: "ReLU"        # layer type: ReLU
  bottom: "pool1"     # input comes from the pool1 blob
  top: "pool1"        # output overwrites pool1, i.e. applied in place
}
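
Since top and bottom name the same blob, the activation overwrites its input; the PyTorch counterpart would be:

import torch.nn as nn

relu1 = nn.ReLU(inplace=True)  # writes the result back into the input tensor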


# Normalization layer
layer {
  name: "norm1"       # this layer is named norm1
  type: "LRN"         # layer type: LRN (local response normalization)
  bottom: "pool1"     # input comes from the pool1 blob
  top: "norm1"        # output is written to the norm1 blob
  lrn_param {
    local_size: 3
    alpha: 5e-05
    beta: 0.75
    norm_region: WITHIN_CHANNEL
  }
}
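
A conversion caveat: PyTorch's built-in nn.LocalResponseNorm implements only Caffe's ACROSS_CHANNELS mode, whereas WITHIN_CHANNEL, used here, averages squared activations over a spatial window inside each channel. A hedged sketch of an emulation (the class name is mine; verify the numerics against Caffe before relying on it):

import torch
import torch.nn as nn
import torch.nn.functional as F

class WithinChannelLRN(nn.Module):
    """Approximates Caffe LRN with norm_region: WITHIN_CHANNEL."""
    def __init__(self, local_size=3, alpha=5e-05, beta=0.75):
        super().__init__()
        self.local_size, self.alpha, self.beta = local_size, alpha, beta

    def forward(self, x):
        # Mean of squared activations over a local_size x local_size window.
        sq_mean = F.avg_pool2d(x ** 2, self.local_size, stride=1,
                               padding=self.local_size // 2, count_include_pad=False)
        return x / (1 + self.alpha * sq_mean) ** self.beta

print(WithinChannelLRN()(torch.randn(1, 32, 16, 16)).shape)  # torch.Size([1, 32, 16, 16])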


...

layer {
  name: "ip1"             # this layer is named ip1
  type: "InnerProduct"    # layer type: InnerProduct (fully connected)
  bottom: "pool3"         # input comes from the pool3 blob
  top: "ip1"              # output is written to the ip1 blob
  param {
    lr_mult: 1
    decay_mult: 250       # weight-decay multiplier for the weights
  }
  param {
    lr_mult: 2
    decay_mult: 0         # no weight decay on the bias
  }
  inner_product_param {
    num_output: 10        # 10 outputs, one per CIFAR-10 class
  }
}
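
InnerProduct is Caffe's fully connected layer. After the three pooling stages the feature map is 64 x 4 x 4 (32 -> 16 -> 8 -> 4), so the PyTorch equivalent is roughly:

import torch
import torch.nn as nn

# ip1 equivalent: pool3's (N, 64, 4, 4) output flattens to 1024 features.
ip1 = nn.Linear(64 * 4 * 4, 10)

x = torch.randn(1, 64, 4, 4)
print(ip1(x.flatten(1)).shape)  # torch.Size([1, 10])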


layer {
  name: "prob"        # this layer is named prob
  type: "Softmax"     # layer type: Softmax
  bottom: "ip1"       # input comes from the ip1 blob
  top: "prob"         # output is written to the prob blob
}
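
The deploy network ends with plain Softmax rather than SoftmaxWithLoss because it only runs inference; in PyTorch this is simply:

import torch
import torch.nn.functional as F

probs = F.softmax(torch.randn(1, 10), dim=1)  # 10 class probabilities summing to 1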

3. The complete file

# cifar10_full.prototxt
name: "CIFAR10_full_deploy"
# N.B. input image must be in CIFAR-10 format
# as described at http://www.cs.toronto.edu/~kriz/cifar.html
layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param { shape: { dim: 1 dim: 3 dim: 32 dim: 32 } }
}

layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 32
    pad: 2
    kernel_size: 5
    stride: 1
  }
}

layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}

layer {
  name: "relu1"
  type: "ReLU"
  bottom: "pool1"
  top: "pool1"
}

layer {
  name: "norm1"
  type: "LRN"
  bottom: "pool1"
  top: "norm1"
  lrn_param {
    local_size: 3
    alpha: 5e-05
    beta: 0.75
    norm_region: WITHIN_CHANNEL
  }
}

layer {
  name: "conv2"
  type: "Convolution"
  bottom: "norm1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 32
    pad: 2
    kernel_size: 5
    stride: 1
  }
}

layer {
  name: "relu2"
  type: "ReLU"
  bottom: "conv2"
  top: "conv2"
}

layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: AVE
    kernel_size: 3
    stride: 2
  }
}

layer {
  name: "norm2"
  type: "LRN"
  bottom: "pool2"
  top: "norm2"
  lrn_param {
    local_size: 3
    alpha: 5e-05
    beta: 0.75
    norm_region: WITHIN_CHANNEL
  }
}

layer {
  name: "conv3"
  type: "Convolution"
  bottom: "norm2"
  top: "conv3"
  convolution_param {
    num_output: 64
    pad: 2
    kernel_size: 5
    stride: 1
  }
}

layer {
  name: "relu3"
  type: "ReLU"
  bottom: "conv3"
  top: "conv3"
}

layer {
  name: "pool3"
  type: "Pooling"
  bottom: "conv3"
  top: "pool3"
  pooling_param {
    pool: AVE
    kernel_size: 3
    stride: 2
  }
}

layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool3"
  top: "ip1"
  param {
    lr_mult: 1
    decay_mult: 250
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  inner_product_param {
    num_output: 10
  }
}

layer {
  name: "prob"
  type: "Softmax"
  bottom: "ip1"
  top: "prob"
}

  This file can be visualized on the ethereon (Netscope) website.
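
Putting the pieces together, here is a hedged PyTorch sketch of the whole network (WithinChannelLRN is the home-grown approximation from Section 2; parameter initialization and exact LRN numerics are not guaranteed to match Caffe):

import torch
import torch.nn as nn
import torch.nn.functional as F

class WithinChannelLRN(nn.Module):
    # See Section 2: approximates Caffe's WITHIN_CHANNEL LRN.
    def __init__(self, local_size=3, alpha=5e-05, beta=0.75):
        super().__init__()
        self.local_size, self.alpha, self.beta = local_size, alpha, beta
    def forward(self, x):
        sq_mean = F.avg_pool2d(x ** 2, self.local_size, stride=1,
                               padding=self.local_size // 2, count_include_pad=False)
        return x / (1 + self.alpha * sq_mean) ** self.beta

cifar10_full = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=5, stride=1, padding=2),   # conv1
    nn.MaxPool2d(3, stride=2, ceil_mode=True),              # pool1: 32 -> 16
    nn.ReLU(inplace=True),                                  # relu1
    WithinChannelLRN(),                                     # norm1
    nn.Conv2d(32, 32, kernel_size=5, stride=1, padding=2),  # conv2
    nn.ReLU(inplace=True),                                  # relu2
    nn.AvgPool2d(3, stride=2, ceil_mode=True),              # pool2: 16 -> 8
    WithinChannelLRN(),                                     # norm2
    nn.Conv2d(32, 64, kernel_size=5, stride=1, padding=2),  # conv3
    nn.ReLU(inplace=True),                                  # relu3
    nn.AvgPool2d(3, stride=2, ceil_mode=True),              # pool3: 8 -> 4
    nn.Flatten(),                                           # (N, 64*4*4) = (N, 1024)
    nn.Linear(64 * 4 * 4, 10),                              # ip1
    nn.Softmax(dim=1),                                      # prob
)

print(cifar10_full(torch.randn(1, 3, 32, 32)).shape)        # torch.Size([1, 10])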

4. Converting Caffe model weights to PyTorch model weights

  For the specific models supported, see the link: caffemodel2pytorch

# command to run
python -m caffemodel2pytorch resnet50-float.caffemodel -o resnet50.pt
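
Assuming the converter writes an ordinary dict of named weight tensors (the exact on-disk format is defined by caffemodel2pytorch, so check its README), the result can be inspected like this:

import torch

state = torch.load('resnet50.pt')   # file produced by the command above
for name, tensor in state.items():  # hypothetical: assumes a dict of tensors
    print(name, tuple(tensor.shape))

The converter's source is reproduced below, abridged.
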
# caffemodel2pytorch.py
import os
import sys
import time
import argparse
import tempfile
import subprocess
import collections
import torch
import torch.nn as nn
import torch.nn.functional as F
from functools import reduce

try:
	from urllib.request import urlopen
except:
	from urllib2 import urlopen # Python 2 support.

import google.protobuf.descriptor
import google.protobuf.descriptor_pool
import google.protobuf.symbol_database
import google.protobuf.text_format
from google.protobuf.descriptor import FieldDescriptor as FD

TRAIN = 0

TEST = 1

caffe_pb2 = None

def initialize(caffe_proto = 'https://raw.githubusercontent.com/BVLC/caffe/master/src/caffe/proto/caffe.proto', codegen_dir = tempfile.mkdtemp(), shadow_caffe = True):
	global caffe_pb2
	if caffe_pb2 is None:
		local_caffe_proto = os.path.join(codegen_dir, os.path.basename(caffe_proto))
		with open(local_caffe_proto, 'w') as f:
			mybytes = urlopen(caffe_proto).read()
			mystr = mybytes.decode('ascii', 'ignore')
			f.write(mystr)
			#f.write((urlopen if 'http' in caffe_proto else open)(caffe_proto).read())
		subprocess.check_call(['protoc', '--proto_path', os.path.dirname(local_caffe_proto), '--python_out', codegen_dir, local_caffe_proto])
		sys.path.insert(0, codegen_dir)
		old_pool = google.protobuf.descriptor._message.default_pool
		old_symdb = google.protobuf.symbol_database._DEFAULT
		google.protobuf.descriptor._message.default_pool = google.protobuf.descriptor_pool.DescriptorPool()
		google.protobuf.symbol_database._DEFAULT = google.protobuf.symbol_database.SymbolDatabase(pool = google.protobuf.descriptor._message.default_pool)
		import caffe_pb2 as caffe_pb2
		google.protobuf.descriptor._message.default_pool = old_pool
		google.protobuf.symbol_database._DEFAULT = old_symdb
		sys.modules[__name__ + '.proto'] = sys.modules[__name__]
		if shadow_caffe:
			sys.modules['caffe'] = sys.modules[__name__]
			sys.modules['caffe.proto'] = sys.modules[__name__]
	return caffe_pb2
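
# What initialize() does: download caffe.proto, compile it with protoc (which
# must be on PATH) into a generated caffe_pb2 module, import it, and optionally
# shadow the 'caffe' package. With caffe_pb2 in hand, a prototxt is just a
# text-format protobuf message. A hedged usage sketch (commented out; the path
# is a placeholder):
#
#   caffe_pb2 = initialize()
#   net_param = caffe_pb2.NetParameter()
#   google.protobuf.text_format.Parse(open('cifar10_full.prototxt').read(), net_param)
#   for layer in net_param.layer:
#       print(layer.name, layer.type, list(layer.bottom), list(layer.top))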

def set_mode_gpu():
	global convert_to_gpu_if_enabled
	convert_to_gpu_if_enabled = lambda obj: obj.cuda()

def set_device(gpu_id):
	torch.cuda.set_device(gpu_id)

class Net(nn.Module):
	def __init__(self, prototxt, *args, **kwargs):
		super(Net, self).__init__()
		# to account for both constructors, see https://github.com/BVLC/caffe/blob/master/python/caffe/test/test_net.py#L145-L147
		caffe_proto = kwargs.pop('caffe_proto', None) 
		weights = kwargs.pop('weights', None)
		phase = kwargs.pop('phase', None)
		weights = weights or (args + (None, None))[0]
		phase = phase or (args + (None, None))[1]

		self.net_param = initialize(caffe_proto).NetParameter()
		google.protobuf.text_format.Parse(open(prototxt).read(), self.net_param)

		for layer in list(self.net_param.layer) + list(self.net_param.layers):
			layer_type = layer.type if layer.type != 'Python' else layer.python_param.layer
			if isinstance(layer_type, int):
				layer_type = layer.LayerType.Name(layer_type)
			module_constructor = ([v for k, v in modules.items() if k.replace('_', '').upper() in [layer_type.replace('_', '').upper(), layer.name.replace('_', '').upper()]] + [None])[0]
			if module_constructor is not None:
				param = to_dict(([v for f, v in layer.ListFields() if f.name.endswith('_param')] + [None])[0])
				caffe_input_variable_names = list(layer.bottom)
				caffe_output_variable_names = list(layer.top)
				caffe_loss_weight = (list(layer.loss_weight) or [1.0 if layer_type.upper().endswith('LOSS') else 0.0]) * len(layer.top)
				caffe_propagate_down = list(getattr(layer, 'propagate_down', [])) or [True] * len(caffe_input_variable_names)
				caffe_optimization_params = to_dict(layer.param)
				param['inplace'] = len(caffe_input_variable_names) == 1 and caffe_input_variable_names == caffe_output_variable_names
				module = module_constructor(param)
				self.add_module(layer.name, module if isinstance(module, nn.Module) else CaffePythonLayerModule(module, caffe_input_variable_names, caffe_output_variable_names, param.get('param_str', '')) if type(module).__name__.endswith('Layer') else FunctionModule(module))
				module = getattr(self, layer.name)
				module.caffe_layer_name = layer.name
				module.caffe_layer_type = layer_type
				module.caffe_input_variable_names = caffe_input_variable_names
				module.caffe_output_variable_names = caffe_output_variable_names
				module.caffe_loss_weight = caffe_loss_weight
				module.caffe_propagate_down = caffe_propagate_down
				module.caffe_optimization_params = caffe_optimization_params
				for optim_param, p in zip(caffe_optimization_params, module.parameters()):
					p.requires_grad = optim_param.get('lr_mult', 1) != 0
			else:
				print('Skipping layer [{}, {}, {}]: not found in caffemodel2pytorch.modules dict'.format(layer.name, layer_type, layer.type))

		if weights is not None:
			self.copy_from(weights)

		self.blobs = collections.defaultdict(Blob)
		self.blob_loss_weights = {name: loss_weight for module in self.children() for name, loss_weight in zip(module.caffe_output_variable_names, module.caffe_loss_weight)}

		self.train(phase != TEST)
		convert_to_gpu_if_enabled(self)

	def forward(self, data = None, **variables):
		if data is not None:
			variables['data'] = data
		numpy = not all(map(torch.is_tensor, variables.values()))
		variables = {k: convert_to_gpu_if_enabled(torch.from_numpy(v.copy()) if numpy else v) for k, v in variables.items()}

		for module in [module for module in self.children() if not all(name in variables for name in module.caffe_output_variable_names)]:
			for name in module.caffe_input_variable_names:
				assert name in variables, 'Variable [{}] does not exist. Pass it as a keyword argument or provide a layer which produces it.'.format(name)
			inputs = [variables[name] if propagate_down else variables[name].detach() for name, propagate_down in zip(module.caffe_input_variable_names, module.caffe_propagate_down)]
			outputs = module(*inputs)
			if not isinstance(outputs, tuple):
				outputs = (outputs, )
			variables.update(dict(zip(module.caffe_output_variable_names, outputs)))

		self.blobs.update({k: Blob(data = v, numpy = numpy) for k, v in variables.items()})
		caffe_output_variable_names = set([name for module in self.children() for name in module.caffe_output_variable_names]) - set([name for module in self.children() for name in module.caffe_input_variable_names if name not in module.caffe_output_variable_names])
		return {k: v.detach().cpu().numpy() if numpy else v for k, v in variables.items() if k in caffe_output_variable_names}
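
	# Hedged usage sketch for Net (commented out; paths are placeholders).
	# The wrapper mirrors pycaffe's interface, returning numpy arrays when
	# fed numpy inputs:
	#
	#   net = Net('cifar10_full.prototxt', weights='cifar10_full.caffemodel', phase=TEST)
	#   out = net(data=numpy.zeros((1, 3, 32, 32), dtype=numpy.float32))
	#   print(out['prob'].shape)  # (1, 10)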

	def copy_from(self, weights):
		try:
			import h5py, numpy
			state_dict = self.state_dict()
			for k, v in h5py.File(weights, 'r').items():
				if k in state_dict:
					state_dict[k].resize_(v.shape).copy_(torch.from_numpy(numpy.array(v)))
			print('caffemodel2pytorch: loaded model from [{}] in HDF5 format'.format(weights))
...
