PyTorch Notes - MAE: Masked Autoencoders Are Scalable Vision Learners
Posted by SpikeKing
Follow me on CSDN: https://blog.csdn.net/caroline_wendy
Original post: https://blog.csdn.net/caroline_wendy/article/details/128345741
Paper: MAE - Masked Autoencoders Are Scalable Vision Learners
Authors: Kaiming He et al., FAIR
Code: https://github.com/facebookresearch/mae
MAE is a self-supervised learning algorithm.
Abstract: motivation for the work, description of the algorithm, and its state-of-the-art results.
MAE (Masked Autoencoders), similar in spirit to BERT's masked pre-training, uses an asymmetric encoder-decoder architecture. Masking a high proportion of patches yields a nontrivial and meaningful self-supervisory task, which enables effective training of large models.
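The per-sample random masking that creates this task can be sketched as below, following the approach in the released code (argsort of uniform noise, keep the first 25% of the shuffled patches); the batch size and embedding dimension here are toy values chosen for illustration, not the paper's settings.

```python
import torch

def random_masking(x, mask_ratio=0.75):
    """Keep a random subset of patches; return the visible patches, the
    binary mask (1 = masked, 0 = visible), and the restore indices."""
    N, L, D = x.shape
    len_keep = int(L * (1 - mask_ratio))

    noise = torch.rand(N, L)                    # one random score per patch
    ids_shuffle = torch.argsort(noise, dim=1)   # random permutation of patches
    ids_restore = torch.argsort(ids_shuffle, dim=1)

    ids_keep = ids_shuffle[:, :len_keep]        # first len_keep patches survive
    x_masked = torch.gather(x, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))

    mask = torch.ones(N, L)
    mask[:, :len_keep] = 0                      # 0 for kept positions
    mask = torch.gather(mask, 1, ids_restore)   # unshuffle back to input order
    return x_masked, mask, ids_restore

x = torch.randn(2, 196, 8)                      # 14x14 patches, toy dim 8
x_vis, mask, ids_restore = random_masking(x)
print(x_vis.shape)   # torch.Size([2, 49, 8]) -> only 25% of patches remain
print(mask.sum())    # tensor(294.) -> 2 * 147 masked patches
```

Because the mask is drawn independently per sample, every image in the batch gets a different random 75% of patches removed.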
The MAE architecture: the encoder operates only on the visible (unmasked) patches, while a lightweight decoder reconstructs the original pixels from the encoded latent together with mask tokens.
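The asymmetry can be illustrated with a minimal forward pass: the (heavy) encoder only ever sees 25% of the tokens, and mask tokens are inserted only at the (narrow) decoder. The `nn.Linear` stand-ins and all dimensions below are illustrative assumptions, not the paper's ViT blocks or sizes.

```python
import torch
import torch.nn as nn

# Toy dimensions (assumptions for illustration only)
N, L = 2, 196                     # batch, number of 16x16 patches
D_enc, D_dec = 64, 32             # encoder / decoder widths (decoder is narrower)
len_keep = int(L * 0.25)          # encoder processes only the visible 25%

encoder = nn.Linear(D_enc, D_enc)         # stand-in for the ViT encoder blocks
proj = nn.Linear(D_enc, D_dec)            # encoder-to-decoder width projection
decoder = nn.Linear(D_dec, 16 * 16 * 3)   # predicts raw pixels per patch
mask_token = torch.zeros(1, 1, D_dec)     # learned token in the real model

x_visible = torch.randn(N, len_keep, D_enc)   # embeddings of visible patches
latent = encoder(x_visible)                   # heavy compute on 25% of tokens

# Decoder input: projected latent plus mask tokens for the missing 75%
dec_in = torch.cat(
    [proj(latent), mask_token.expand(N, L - len_keep, -1)], dim=1
)
pred = decoder(dec_in)            # (N, 196, 768): per-patch pixel predictions
print(pred.shape)                 # torch.Size([2, 196, 768])
```

Since the encoder skips the masked 75% of tokens entirely, pre-training compute drops substantially versus encoding the full sequence, which is what makes the method scalable.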