Building a TensorFlow Neural-Network Classification Model Step by Step: Text Classification as an Example




Many developers advise newcomers that before getting started with machine learning you must first understand how the key algorithms work, and only then begin hands-on practice. I don't think so.


In my view practice comes before theory. What a beginner should do first is understand the overall workflow of a model: roughly how the data flows, which key nodes it passes through, and where the final results are obtained, and then immediately start building a machine learning model of their own. The internal implementation of the algorithms and functions can be studied in more depth later, through practice, once the overall pipeline is understood.



In this article we will use TensorFlow to implement a text classification model based on a deep neural network (DNN). We hope it will be of some help to beginners.



The tutorial proper starts below:


About TensorFlow

TensorFlow is an open-source machine learning framework from Google. Its name already hints at how the framework works: tensors (multidimensional arrays) flow in a directed fashion between the nodes of a graph, from the inputs to the outputs.


In TensorFlow, every computation can be expressed as a dataflow graph. Each dataflow graph contains two important kinds of elements:


● a set of tf.Operation objects, which represent the computations themselves;

● a set of tf.Tensor objects, which represent the data being operated on.

Let's use a simple example, shown in the figure below, to see how a dataflow graph actually runs.



[Figure: a simple dataflow graph that adds two constant tensors]

<p>鍋囪鍥句腑鐨?x=[1,3,6]锛寉=[1,1,1]銆傜敱浜?tf.Tensor 琚敤鏉ヨ〃绀鸿繍绠楁暟鎹紝鍥犳鍦?TensorFlow 涓垜浠細棣栧厛瀹氫箟涓や釜 tf.Tensor 甯搁噺瀵硅薄瀛樻斁鏁版嵁銆傜劧鍚庡啀鐢?tf.Operation 瀵硅薄瀹氫箟鍥句腑鐨勫姞娉曡繍绠楋紝鍏蜂綋浠g爜濡備笅锛?/p> 
<p><br></p> 
import tensorflow as tf

x = tf.constant([1,3,6])
y = tf.constant([1,1,1])
op = tf.add(x,y)

Now that we have defined the two key elements of a dataflow graph, tf.Operation and tf.Tensor, how do we build the graph itself? The code is as follows:

import tensorflow as tf

my_graph = tf.Graph()

with my_graph.as_default():
    x = tf.constant([1,3,6])
    y = tf.constant([1,1,1])
    op = tf.add(x,y)

At this point we have finished defining the dataflow graph. In TensorFlow, the graph must be defined before any subsequent computation can take place (i.e. before data can be driven to flow between the graph's nodes). TensorFlow further requires that all subsequent computation be managed through a tf.Session, so we also need to define a tf.Session object, i.e. a session.

In TensorFlow, tf.Session encapsulates the environment in which tf.Operation objects are executed on tf.Tensor objects. Therefore, when defining a tf.Session object, we also pass in the corresponding dataflow graph (via the graph argument). In this example the code is as follows:

import tensorflow as tf

my_graph = tf.Graph()

with tf.Session(graph=my_graph) as sess:
    x = tf.constant([1,3,6])
    y = tf.constant([1,1,1])
    op = tf.add(x,y)

Once the tf.Session is defined, we can execute the dataflow graph with the tf.Session.run() method. The run() method takes the relevant tf.Operation object through its fetches argument, pulls in all the tf.Tensor objects related to that tf.Operation, and then recursively executes every operation the current tf.Operation depends on. In this example the operation being executed is the addition, and the code is as follows:

import tensorflow as tf

my_graph = tf.Graph()

with tf.Session(graph=my_graph) as sess:
    x = tf.constant([1,3,6])
    y = tf.constant([1,1,1])
    op = tf.add(x,y)
    result = sess.run(fetches=op)
    print(result)

>>> [2 4 7]

<p>鍙互鐪嬪埌杩愮畻缁撴灉鏄?[2 4 7]銆?/p> 
<p><br></p> 
<p><br></p> 
About the prediction model

Now that the basics of TensorFlow are clear, the next task is to build a prediction model. Simply put, a machine learning algorithm plus data equals a prediction model. The process of building one is shown in the figure below:

[Figure: machine learning algorithm + training data → prediction model]

As the figure shows, the model is a machine learning algorithm that has been trained on data. Once a model is trained, feeding it data to be predicted yields the corresponding predictions. The overall flow is shown below:

[Figure: new data → trained model → prediction]

In this example, the model we are going to build must output the appropriate category for a given input text, i.e. it performs text classification. The input is therefore text and the output is a category. More concretely, we have already obtained labeled data (text passages whose categories have been marked), we train the algorithm on this data, and finally we use the trained model to classify new text. This process is what is usually called supervised learning. And since the task is to assign text to categories, it also falls under classification problems.

To build this text classification model, we first need to cover some neural network basics.

About neural networks

Essentially, a neural network is a kind of computational model (a computational model here means a way of describing a system with mathematical language and concepts). Moreover, this kind of computational model can learn and train itself automatically rather than being explicitly programmed.


The earliest and most basic neural network algorithm is the perceptron. For a detailed introduction to the perceptron, see this blog post:

http://t.cn/R5MphRp 



Because neural network models were proposed by imitating the organization of the human brain's nervous system, their structure resembles that of biological neural networks.

[Figure: a typical neural network with an input layer, hidden layers, and an output layer]

As shown in the figure above, a typical neural network can be divided into three kinds of layers: the input layer, the hidden layers, and the output layer.

To understand in depth how a neural network actually works, we will build one ourselves with TensorFlow. A concrete example follows.

<p>鏈緥涓紝鎴戜滑鏈変袱涓殣钄藉眰锛堝叧浜庨殣钄藉眰灞傛暟鐨勯€夋嫨鏄彟涓€涓棶棰橈紝璇︾粏鍐呭https://stats.stackexchange.com/questions/181/how-to-choose-the-number-of-hidden-layers-and-nodes-in-a-feedforward-neural-netw</p> 
<p>锛夈€傛鎷湴璇达紝闅愯斀灞傜殑涓昏浣滅敤鏄皢杈撳叆灞傜殑鏁版嵁杞崲鎴愪竴绉嶈緭鍑哄眰鏇翠究浜庡埄鐢ㄧ殑褰㈠紡銆?/p> 
<p><br></p> 
[Figure: input nodes (one per word) feeding into the first hidden layer]

As the figure shows, each node of the input layer represents one word of the input text, followed by the first hidden layer. Note that choosing the number of nodes in the first hidden layer is itself an important task, usually referred to as feature selection.

Each node in the figure (also called a neuron) is paired with a weight, and the training process described below is really the process of continually adjusting these weights so that the model's actual output matches the desired output more closely. Besides the weights, the network also adds a bias value.

After computing the weighted sum for each node and adding a bias, the result still has to pass through an activation function before it is passed on to the next layer.

a = softmax(W·x + b)

where x is the neuron's input value, W is the weight, b is the bias, softmax() is the activation function, and a is the output value.


In effect, the activation function determines the final output of each node while adding a nonlinear element to the model. If we use a desk lamp as an analogy, the activation function acts as the switch. Depending on the specific application scenario, there are many different activation functions to choose from; here the hidden layers use the ReLU function.
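
As a concrete illustration of the computation at a single node, here is a tiny NumPy sketch with made-up numbers (not code from the article): a weighted sum of the inputs, plus a bias, passed through ReLU.

import numpy as np

x = np.array([1.0, 3.0, 6.0])     # inputs arriving at the node (hypothetical values)
W = np.array([0.2, -0.1, 0.05])   # one weight per input (hypothetical values)
b = 0.1                           # bias

z = np.dot(W, x) + b              # weighted sum plus bias, roughly 0.3 here
a = np.maximum(z, 0.0)            # ReLU activation: max(z, 0)
print(a)                          # prints roughly 0.3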

The figure also shows a second hidden layer. It works no differently from the first; the only distinction is that its input is the output of the first layer, whereas the first layer's input is the raw data.


Finally comes the output layer. In this example we use one-hot encoding to represent the classification result. One-hot encoding means that in each vector exactly one element is 1 and all the others are 0. For example, if we want to classify the text data into three categories (sports, space, and computer graphics), the encoding looks like this:


[Figure: one-hot encodings of the three categories, e.g. sports → [1, 0, 0], space → [0, 1, 0], computer graphics → [0, 0, 1]]

The advantage of one-hot encoding here is that the number of output nodes is exactly the number of output categories. As with the hidden layers before it, the output layer pairs each node with a weight, adds an appropriate bias, and finally applies an activation function.

However, the activation function of the output layer differs from that of the hidden layers. Since the final goal is to output the category of each text, and the categories are mutually exclusive, we choose the Softmax function as the output layer's activation function. Softmax maps the output values to decimals between 0 and 1 that sum to 1, so the resulting decimals can be read as a probability distribution over the categories. If the raw output values for the three categories mentioned above were 1.2, 0.9, and 0.4, then applying Softmax gives:

[Figure: softmax([1.2, 0.9, 0.4]) ≈ [0.457, 0.338, 0.205]]

<p>鍙互鐪嬪埌杩欎笁涓皬鏁扮殑鍜屾濂戒负 1銆?/p> 
So far we have specified the dataflow graph of this neural network; the concrete code is as follows:

# Network Parameters
n_hidden_1 = 10        # 1st layer number of features
n_hidden_2 = 5         # 2nd layer number of features
n_input = total_words  # Words in vocab
n_classes = 3          # Categories: graphics, space and baseball

def multilayer_perceptron(input_tensor, weights, biases):
    # Hidden layer with RELU activation
    layer_1_multiplication = tf.matmul(input_tensor, weights['h1'])
    layer_1_addition = tf.add(layer_1_multiplication, biases['b1'])
    layer_1_activation = tf.nn.relu(layer_1_addition)

    # Hidden layer with RELU activation
    layer_2_multiplication = tf.matmul(layer_1_activation, weights['h2'])
    layer_2_addition = tf.add(layer_2_multiplication, biases['b2'])
    layer_2_activation = tf.nn.relu(layer_2_addition)

    # Output layer with linear activation
    out_layer_multiplication = tf.matmul(layer_2_activation, weights['out'])
    out_layer_addition = out_layer_multiplication + biases['out']

    return out_layer_addition

Training the neural network

As mentioned above, one crucial task in model training is adjusting the node weights. This section describes how that is done in TensorFlow.


In TensorFlow, node weights and biases are stored as variables, i.e. tf.Variable objects. Unlike the tensors that flow through the graph, these values keep their state across calls to run(). In typical machine learning practice, the initial weight and bias values are drawn from a normal distribution. The code is as follows:


weights = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes]))
}

biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'b2': tf.Variable(tf.random_normal([n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}


After running the neural network with these initial values, we obtain an actual output value z, whereas our desired output value is expected. What we need to do is compute the error between the two and minimize it by adjusting the weights and other parameters. There are many ways to measure the error; because we are dealing with a classification problem, we use the cross-entropy error.


In TensorFlow, we can compute the cross-entropy error by calling tf.nn.softmax_cross_entropy_with_logits(). The softmax_ prefix appears in the name because we chose Softmax as the activation function. The code is as follows (we also call tf.reduce_mean() to compute the mean error):


# Construct model
prediction = multilayer_perceptron(input_tensor, weights, biases)

# Define loss
entropy_loss = tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=output_tensor)
loss = tf.reduce_mean(entropy_loss)


Once we have the error, the next task is to minimize it. The method we choose is the most commonly used one, stochastic gradient descent, whose intuition is illustrated below:

[Figure: gradient descent stepping downhill along the loss surface]

Likewise, there are many ways to carry out the gradient descent step. Here we use the Adaptive Moment Estimation (Adam) optimizer, which in TensorFlow is expressed as tf.train.AdamOptimizer(learning_rate).minimize(loss). The learning_rate argument controls the step size taken along the gradient.

One very convenient point is that this call wraps two functions: computing the gradients and applying them. In other words, it not only computes the gradient values but also applies the resulting updates to all the tf.Variable objects, which greatly reduces programming effort.
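
To make those two roles explicit, minimize() is essentially shorthand for the following two TF1 calls (an illustrative sketch; the article itself only uses minimize()):

# Equivalent two-step form of what minimize() wraps
adam = tf.train.AdamOptimizer(learning_rate=0.001)
grads_and_vars = adam.compute_gradients(loss)       # one (gradient, variable) pair per tf.Variable
train_step = adam.apply_gradients(grads_and_vars)   # op that applies the updates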
The code for the model training step is as follows:

learning_rate = 0.001

# Construct model
prediction = multilayer_perceptron(input_tensor, weights, biases)

# Define loss
entropy_loss = tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=output_tensor)
loss = tf.reduce_mean(entropy_loss)

optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(loss)

Data preprocessing

鏈緥涓紝鎴戜滑寰楀埌鐨勫師濮嬫暟鎹槸璁稿鑻辨枃鐨勬枃鏈墖娈碉紝涓轰簡灏嗚繖浜涙暟鎹鍏ユā鍨嬩腑锛屾垜浠渶瑕佸鍘熷鏁版嵁杩涜蹇呰鐨勯澶勭悊杩囩▼銆傝繖閲屽叿浣撳寘鎷袱涓儴鍒嗭細

● assign an index to each word;

● create a tensor representation for each text passage, in which the value 1 indicates that a given word occurs in the text and 0 indicates that it does not.


The implementation is as follows:

import numpy as np    # numpy is a package for scientific computing
from collections import Counter

vocab = Counter()

text = "Hi from Brazil"

# Get all words (lowercased so lookups below stay consistent)
for word in text.split(' '):
    vocab[word.lower()] += 1

# Convert words to indexes
def get_word_2_index(vocab):
    word2index = {}
    for i, word in enumerate(vocab):
        word2index[word] = i
    return word2index

# Now we have an index
word2index = get_word_2_index(vocab)

total_words = len(vocab)

# This is how we create a numpy array (our matrix)
matrix = np.zeros((total_words), dtype=float)

# Now we fill the values
for word in text.split():
    matrix[word2index[word.lower()]] += 1

print(matrix)

>>> [ 1.  1.  1.]



As the code above shows, when the input text is "Hi from Brazil" the output matrix is [ 1. 1. 1.]. What happens when the input text is just "Hi"? The code and result are as follows:


matrix = np.zeros((total_words), dtype=float)
text = "Hi"

for word in text.split():
    matrix[word2index[word.lower()]] += 1

print(matrix)

>>> [ 1.  0.  0.]


鍙互鐪嬪埌锛岃繖鏃剁殑杈撳嚭鏄?[ 1.  0.  0.]銆?/p>


Similarly, we can also encode the category information, only this time using one-hot encoding:

y = np.zeros((3), dtype=float)

if category == 0:
    y[0] = 1.        # [ 1.  0.  0.]
elif category == 1:
    y[1] = 1.        # [ 0.  1.  0.]
else:
    y[2] = 1.        # [ 0.  0.  1.]


Running the model and making predictions

By now we have a basic understanding of TensorFlow, the neural network model, model training, and data preprocessing. Next we show how to apply all of this to real data.


http://t.cn/zY6ssrE 


First, to load the dataset we use the scikit-learn library, an open-source Python library mainly used for machine-learning-related data processing. The data comes from the 20 Newsgroups collection, of which we use only three categories in this example: comp.graphics, sci.space, and rec.sport.baseball.

The data is split into two subsets: a training set and a test set. The advice here is not to look at the test set ahead of time, because peeking at the test data would influence our choice of model parameters and therefore hurt the model's generalization to other unseen data.


The data loading code is as follows:

from sklearn.datasets import fetch_20newsgroups

categories = ["comp.graphics","sci.space","rec.sport.baseball"]

newsgroups_train = fetch_20newsgroups(subset='train', categories=categories)
newsgroups_test = fetch_20newsgroups(subset='test', categories=categories)
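
If you want to inspect what was loaded, the objects returned by fetch_20newsgroups expose the raw texts in .data and the integer labels in .target. A quick sanity check (not part of the original tutorial) might look like this:

print(len(newsgroups_train.data))       # number of training documents
print(newsgroups_train.target_names)    # e.g. ['comp.graphics', 'rec.sport.baseball', 'sci.space']
print(newsgroups_train.data[0][:200])   # first 200 characters of the first document
print(newsgroups_train.target[0])       # its integer label, an index into target_names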


In neural network terminology, an epoch is one complete cycle over all the training data: one forward pass plus one backward pass. The forward pass produces the actual output values from the current weights; the backward pass adjusts the weights according to the resulting error. Next, let us look more closely at tf.Session.run(); its full signature is:


tf.Session.run(fetches, feed_dict=None, options=None, run_metadata=None)


When this function was introduced at the beginning of the article, we passed only the addition operation through fetches, but it also accepts several operations at once. When training on real data, we pass in two: the loss computation and the optimization operation (the Adam, i.e. adaptive moment estimation, update step).


Another important argument of run() is feed_dict, which is how we pass in the input data the model processes on each step. To feed data in, we must first define tf.placeholder objects.

According to the official documentation, a placeholder is merely an empty shell that exists to reference the data that will later be fed into the model; it needs no initialization and holds no real data. The placeholders in this example are defined as follows:


n_input = total_words # Words in vocab
n_classes = 3         # Categories: graphics, sci.space and baseball

input_tensor = tf.placeholder(tf.float32,[None, n_input],name="input")
output_tensor = tf.placeholder(tf.float32,[None, n_classes],name="output")



Before actually training the model, we also need to split the data into batches, i.e. the amount of data processed in one computation.


This is where the earlier tf.placeholder definitions pay off: the "None" in the placeholder shapes declares a dimension of variable size, so the actual batch size can be decided later, when the data is fed in. The batch size used during training differs from the one used during testing (where the whole test set is fed at once), hence the need for a variable batch dimension. During training we then call a get_batch() helper to fetch the actual text data for each step (a sketch of this helper is given right after the training code below). The training code is as follows:


training_epochs = 10
batch_size = 150      # examples per training batch (an assumed example value)

init = tf.global_variables_initializer()

# Launch the graph
with tf.Session() as sess:
    sess.run(init) # inits the variables (normal distribution, remember?)

    # Training cycle
    for epoch in range(training_epochs):
        avg_cost = 0.
        total_batch = int(len(newsgroups_train.data)/batch_size)
        # Loop over all batches
        for i in range(total_batch):
            batch_x,batch_y = get_batch(newsgroups_train,i,batch_size)
            # Run optimization op (backprop) and cost op (to get loss value)
            c,_ = sess.run([loss,optimizer], feed_dict={input_tensor: batch_x, output_tensor:batch_y})
            # Accumulate the average loss over this epoch
            avg_cost += c / total_batch
        # Display the loss once per epoch
        print("Epoch:", '%04d' % (epoch+1), "loss=", "{:.9f}".format(avg_cost))
    print("Optimization Finished!")
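
The get_batch() helper used above is not shown in the article. A minimal sketch of what it might look like, assuming word2index and total_words have been built over the full training vocabulary (the article only builds them for a toy sentence) and assuming the one-hot category encoding described earlier, is:

def get_batch(df, i, batch_size):
    # Illustrative sketch, not the original author's implementation: build the
    # bag-of-words matrix and one-hot labels for documents
    # [i*batch_size, (i+1)*batch_size) of a 20 Newsgroups subset.
    batches = []
    results = []
    texts = df.data[i * batch_size:(i + 1) * batch_size]
    categories = df.target[i * batch_size:(i + 1) * batch_size]
    for text in texts:
        layer = np.zeros(total_words, dtype=float)
        for word in text.split(' '):
            if word.lower() in word2index:
                layer[word2index[word.lower()]] += 1
        batches.append(layer)
    for category in categories:
        y = np.zeros(3, dtype=float)
        y[category] = 1.0
        results.append(y)
    return np.array(batches), np.array(results)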


We have now trained the model on real data; it is time to evaluate it on the test data. As in the training part, we define graph elements of both kinds, operations and data. To compute the model's accuracy, and because the results are one-hot encoded, we obtain both the index of the correct output and the index of the predicted output, check whether they are equal, and average the result to obtain the accuracy. The code and results are as follows:


    # Test model
    index_prediction = tf.argmax(prediction, 1)
    index_correct = tf.argmax(output_tensor, 1)
    correct_prediction = tf.equal(index_prediction, index_correct)

    # Calculate accuracy
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))

    total_test_data = len(newsgroups_test.target)
    batch_x_test,batch_y_test = get_batch(newsgroups_test,0,total_test_data)
    print("Accuracy:", accuracy.eval({input_tensor: batch_x_test, output_tensor: batch_y_test}))

>>> Epoch: 0001 loss= 1133.908114347
    Epoch: 0002 loss= 329.093700409
    Epoch: 0003 loss= 111.876660109
    Epoch: 0004 loss= 72.552971845
    Epoch: 0005 loss= 16.673050320
    Epoch: 0006 loss= 16.481995190
    Epoch: 0007 loss= 4.848220565
    Epoch: 0008 loss= 0.759822878
    Epoch: 0009 loss= 0.000000000
    Epoch: 0010 loss= 0.079848485
    Optimization Finished!
    Accuracy: 0.75
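
The article stops at the accuracy measurement. If you also want to classify a single new text with the trained model, a hedged sketch (still inside the same tf.Session block, hence the indentation, and reusing the graph elements and encoding helpers defined above; the example sentence is made up) could look like this:

    # Classify one new document inside the same session (illustrative sketch)
    new_text = "The rocket launch was delayed because of weather"
    new_x = np.zeros((1, total_words), dtype=float)
    for word in new_text.split(' '):
        if word.lower() in word2index:
            new_x[0, word2index[word.lower()]] += 1

    predicted_index = sess.run(tf.argmax(prediction, 1), feed_dict={input_tensor: new_x})
    print("Predicted category index:", predicted_index[0])   # 0, 1 or 2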


In the end our model reaches a prediction accuracy of 75%, which is a decent result for a beginner. With that, we have implemented a neural-network-based text classifier with TensorFlow.

