In summary, memristor-based neural networks are an indispensable research direction for neuromorphic computing. Owing to their synaptic plasticity, low power consumption, high efficiency, and ease of integration, memristors have been studied extensively in neural network applications, yielding a wide variety of memristive networks, including multilayer perceptrons, convolutional neural networks, and long short-term memory networks. Memristors serve mainly as synaptic devices: their multilevel conductance modulation under voltage pulses makes them well suited to mapping synaptic weights onto hardware, so a crossbar can both store the weight matrix and compute on it in place. The parallel matrix multiply-accumulate capability of memristor crossbar arrays accelerates neural network computation (a minimal sketch of this operation is given at the end of this section). Memristor-based neural networks are therefore an effective route to hardware implementation and provide a path toward new computing-in-memory architectures.

Nevertheless, hardware implementation of neural networks still faces many unsolved problems and open questions that require deeper thought, chiefly the following:

(1) The synaptic characteristics of memristive devices need further improvement. For multilevel conductance modulation, current studies consider only the continuous change of conductance under successive pulses, yet practical applications require every conductance state to be stable and non-volatile. How to define discrete levels within a continuously varying conductance range, and how to stabilize those levels through device engineering, remains a major obstacle to the practical hardware implementation of memristive synapses.

(2) The filamentary switching mechanism gives memristors an intrinsic noise that cannot be eliminated. The formation and rupture of conductive filaments are never fully controlled by the externally applied stimulus, so their stochasticity, and hence the device's intrinsic noise, is irreducible. Whether this intrinsic noise degrades system-level performance in neural network applications is still unknown and calls for further study and experimental verification.

(3) Current results show that memristor arrays offer parallel computing power and fused storage-and-computation that conventional CMOS logic cannot match, but the analog computing behavior of memristive synapses is not fully compatible with peripheral CMOS digital circuits, making digital-to-analog and analog-to-digital conversion a key difficulty in the design of memristor-CMOS integrated circuits. Overly complex peripheral circuitry raises the overall power consumption of the system, which defeats the purpose of introducing memristor arrays in the first place.
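To make the crossbar multiply-accumulate operation concrete, the following minimal NumPy sketch models an idealized array: read voltages drive the rows, Ohm's law sets each device current, and Kirchhoff's current law sums the currents on every column wire, so each output current is one dot product computed in parallel. The differential two-array weight mapping and all parameter values are illustrative assumptions, not taken from any specific work discussed in this review.

import numpy as np

G_MIN, G_MAX = 1e-6, 1e-4   # assumed device conductance window (siemens)

def weights_to_conductances(W):
    # Map a signed weight matrix onto two non-negative crossbars.
    # A single memristor conductance cannot be negative, so positive
    # weights are programmed into G_pos and negative ones into G_neg.
    scale = (G_MAX - G_MIN) / np.abs(W).max()
    G_pos = G_MIN + np.clip(W, 0.0, None) * scale
    G_neg = G_MIN + np.clip(-W, 0.0, None) * scale
    return G_pos, G_neg, scale

def crossbar_mvm(G_pos, G_neg, v_in):
    # Ohm's law: each device passes I = G * V.
    # Kirchhoff's current law: each column wire sums those currents,
    # so all dot products are computed in one parallel read step.
    return G_pos.T @ v_in - G_neg.T @ v_in

W = np.random.randn(4, 3)            # toy 4-input, 3-output layer
v = 0.1 * np.random.randn(4)         # small read voltages
G_pos, G_neg, scale = weights_to_conductances(W)
i_out = crossbar_mvm(G_pos, G_neg, v)
print(np.allclose(i_out / scale, W.T @ v))   # True: matches digital MVM

The differential pair is used because a single device conductance is non-negative while synaptic weights are signed; subtracting the paired column current recovers the sign, and the common offset G_MIN cancels out.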
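The three challenges above can also be made quantitative in simulation. The sketch below, again with assumed and hypothetical parameters (16 stable levels, 5% filament-induced variation, an 8-bit ADC), applies each non-ideality in turn to an ideal crossbar read and reports the resulting error; this is the kind of budgeting exercise by which device requirements such as the number of stable states or ADC resolution are typically assessed.

import numpy as np

rng = np.random.default_rng(0)

def quantize_levels(G, g_min, g_max, n_levels=16):
    # Challenge (1): assume only n_levels stable, non-volatile states;
    # every programmed conductance snaps to the nearest level.
    levels = np.linspace(g_min, g_max, n_levels)
    return levels[np.abs(G[..., None] - levels).argmin(axis=-1)]

def filament_noise(G, sigma_rel=0.05):
    # Challenge (2): stochastic filament formation/rupture modeled as
    # 5% multiplicative Gaussian variation per device (assumed value).
    return G * (1.0 + sigma_rel * rng.standard_normal(G.shape))

def adc(i, i_max, bits=8):
    # Challenge (3): column currents must be digitized; a bits-bit ADC
    # leaves only 2**bits output codes over [-i_max, i_max].
    q = 2 ** (bits - 1) - 1
    codes = np.clip(np.round(i / i_max * q), -q, q)
    return codes * i_max / q

G_ideal = rng.uniform(1e-6, 1e-4, size=(64, 16))   # toy 64x16 array
v = rng.uniform(0.0, 0.2, size=64)                 # read voltages
i_ideal = G_ideal.T @ v                            # ideal analog MVM

G_real = filament_noise(quantize_levels(G_ideal, 1e-6, 1e-4))
i_real = adc(G_real.T @ v, i_max=np.abs(i_ideal).max())
print("relative MVM error:",
      np.linalg.norm(i_real - i_ideal) / np.linalg.norm(i_ideal))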