The meaning and purpose of the trunc_normal_ function
Posted by AI浩
The trunc_normal_ function fills a tensor with values drawn from a truncated normal distribution.
The truncated normal distribution is one kind of truncated distribution. So what is a truncated distribution? A truncated distribution is a distribution in which the range of the variable x is restricted to some interval. For example, if we restrict a normally distributed variable to the interval $[\mu - 3\sigma, \mu + 3\sigma]$, where $\mu$ is the mean and $\sigma$ the standard deviation, we say that we have truncated the normal distribution.
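As a minimal illustration of what truncation means (this snippet is my own sketch, not part of the PyTorch implementation shown below), one way to obtain truncated-normal samples is rejection sampling: draw from the untruncated normal and redraw anything that falls outside the bounds. The function name and parameters here are made up for the example.

import torch

def truncated_normal_rejection(shape, mean=0.0, std=1.0, num_sigma=3.0):
    # Draw from N(mean, std^2) and redraw any value outside
    # [mean - num_sigma*std, mean + num_sigma*std].
    lower, upper = mean - num_sigma * std, mean + num_sigma * std
    x = torch.empty(shape).normal_(mean, std)
    mask = (x < lower) | (x > upper)
    while mask.any():
        x[mask] = torch.empty(int(mask.sum())).normal_(mean, std)
        mask = (x < lower) | (x > upper)
    return x

samples = truncated_normal_rejection((10000,))
print(samples.min().item(), samples.max().item())  # both lie within [-3, 3]

Rejection sampling is easy to understand but wastes draws; the PyTorch implementation below instead uses the inverse-CDF trick, which needs only a single pass.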
PyTorch code:
import math
import warnings

import torch
from torch import Tensor


def _no_grad_trunc_normal_(tensor, mean, std, a, b):
    # Method based on https://people.sc.fsu.edu/~jburkardt/presentations/truncated_normal.pdf
    def norm_cdf(x):
        # Computes standard normal cumulative distribution function
        return (1. + math.erf(x / math.sqrt(2.))) / 2.

    if (mean < a - 2 * std) or (mean > b + 2 * std):
        warnings.warn("mean is more than 2 std from [a, b] in nn.init.trunc_normal_. "
                      "The distribution of values may be incorrect.",
                      stacklevel=2)

    with torch.no_grad():
        # Values are generated by using a truncated uniform distribution and
        # then using the inverse CDF for the normal distribution.
        # Get upper and lower cdf values
        l = norm_cdf((a - mean) / std)
        u = norm_cdf((b - mean) / std)

        # Uniformly fill tensor with values from [l, u], then translate to
        # [2l-1, 2u-1].
        tensor.uniform_(2 * l - 1, 2 * u - 1)

        # Use inverse cdf transform for normal distribution to get truncated
        # standard normal
        tensor.erfinv_()

        # Transform to proper mean, std
        tensor.mul_(std * math.sqrt(2.))
        tensor.add_(mean)

        # Clamp to ensure it's in the proper range
        tensor.clamp_(min=a, max=b)
        return tensor
def trunc_normal_(tensor: Tensor, mean: float = 0., std: float = 1., a: float = -2., b: float = 2.) -> Tensor:
    r"""Fills the input Tensor with values drawn from a truncated
    normal distribution. The values are effectively drawn from the
    normal distribution :math:`\mathcal{N}(\text{mean}, \text{std}^2)`
    with values outside :math:`[a, b]` redrawn until they are within
    the bounds. The method used for generating the random values works
    best when :math:`a \leq \text{mean} \leq b`.

    Args:
        tensor: an n-dimensional `torch.Tensor`
        mean: the mean of the normal distribution
        std: the standard deviation of the normal distribution
        a: the minimum cutoff value
        b: the maximum cutoff value

    Examples:
        >>> w = torch.empty(3, 5)
        >>> nn.init.trunc_normal_(w)
    """
    return _no_grad_trunc_normal_(tensor, mean, std, a, b)
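As a quick sanity check (my own snippet, not from the original code), the built-in nn.init.trunc_normal_ behaves the same way: every initialized value falls inside the default cutoff interval [a, b] = [-2, 2].

import torch
import torch.nn as nn

w = torch.empty(3, 5)
nn.init.trunc_normal_(w, mean=0., std=1., a=-2., b=2.)

# Every value lies within the cutoffs [a, b].
print(w.min().item() >= -2., w.max().item() <= 2.)  # True True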
When should it be used? For example, when we load ImageNet pre-trained weights but change the number of classes, the newly created head needs its parameters initialized. Usage:
model_ft = convvit_base_patch16()
model_ft.load_state_dict(torch.load('checkpoint.pth'), strict=False)
numftr = model_ft.head.in_features
model_ft.head = torch.nn.Linear(numftr, classes)
nn.init.trunc_normal_(model_ft.head.weight, std=2e-5)  # initialize the new head's weights with a truncated normal distribution
timm also provides trunc_normal_; import it with:
from timm.models.layers import trunc_normal_
It is called in the same way as above.
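A common pattern in timm-style vision transformer code is to call trunc_normal_ from a weight-initialization hook applied to every submodule. The module below is a hypothetical sketch for illustration; only the trunc_normal_ call and the std=.02 convention come from typical ViT/DeiT code.

import torch.nn as nn
from timm.models.layers import trunc_normal_

class TinyHead(nn.Module):  # hypothetical module, for illustration only
    def __init__(self, dim, num_classes):
        super().__init__()
        self.fc = nn.Linear(dim, num_classes)
        self.apply(self._init_weights)  # run _init_weights on every submodule

    def _init_weights(self, m):
        if isinstance(m, nn.Linear):
            trunc_normal_(m.weight, std=.02)  # truncated normal init, as in ViT/DeiT
            if m.bias is not None:
                nn.init.constant_(m.bias, 0)

head = TinyHead(dim=768, num_classes=10)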