Posted NewBeeNLP


An original article from NewBeeNLP

悠闲会 · Information Retrieval


涓婃鎴戜滑鐪嬩簡銆庢帹鑽愮郴缁?+ GNN銆?nbsp;


浠婂ぉ鏉ョ湅鐪?span>銆庢帹鑽愮郴缁?+ 鐭ヨ瘑鍥捐氨銆?/strong>锛屽張浼氭湁鍝簺鏈夎叮鐨勭帺鎰忓効鍛?nbsp;馃摦

Knowledge Graph

A knowledge graph is a semantic graph whose nodes represent entities or concepts, and whose edges represent the various semantic relations between those entities/concepts. A knowledge graph consists of a set of triples (h, r, t), where h and t are the head and tail nodes of a relation and r is the relation itself.
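As a concrete (made-up) toy example, a KG can be stored as a list of (h, r, t) triples and indexed by head entity for neighbor lookup:

```python
from collections import defaultdict

# A toy knowledge graph as (head, relation, tail) triples; names are illustrative.
kg = [
    ("Forrest Gump", "film.director", "Robert Zemeckis"),
    ("Forrest Gump", "film.star", "Tom Hanks"),
    ("Cast Away", "film.star", "Tom Hanks"),
]

# Index outgoing edges by head entity for fast neighbor lookup.
neighbors = defaultdict(list)
for h, r, t in kg:
    neighbors[h].append((r, t))

print(neighbors["Forrest Gump"])
```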

The advantages of introducing knowledge graphs into the recommender systems field are:

  • Precision: the KG introduces richer semantic relations for items, so user interests can be discovered at a deeper level.
  • Diversity: the KG provides different types of relation links, which helps diversify recommendations and keeps results from collapsing into a single category.
  • Explainability: the KG links a user's history to the recommended results, improving user satisfaction with and acceptance of the recommendations and strengthening trust in the system.

However, a knowledge graph is hard to combine with a neural network directly, which is why knowledge representation learning was introduced: embeddings are first learned for entities and relations and then fed into the network. Embedding methods fall into two main families: translational distance methods, which learn a spatial transformation from the head entity to the tail entity (the TransE family), and semantic matching methods, which use a neural network to score semantic similarity directly.

The parts that remain difficult when combining KGs with recommendation are:

  • Graph simplification: how to handle the many entity and relation types a KG introduces. Simplifying as needed may lose some information but is necessary for efficiency, e.g. extracting a subgraph with only user-user or item-item relations.
  • Multi-relation propagation: multiple relation types are exactly what characterizes a KG; existing work uses attention to distinguish the importance of different relations and weight neighbors accordingly.
  • User integration: bringing users into the graph structure. The KG is an external signal, but could users also be folded in as another entity type, making them an intrinsic part of the graph?
    There are generally three modes of using a knowledge graph, as shown in the figure above:
    • One-by-one learning: first run knowledge graph representation learning to obtain entity and relation vectors (TransR-style methods), then feed these low-dimensional vectors into the recommender for downstream processing. The KG is treated as side information, handled like any one-dimensional feature.
    • Joint learning: combine the objective of KG representation learning with that of the recommendation algorithm and train end-to-end, i.e. the KG loss is folded into the final loss for joint training.
    • Alternate learning: treat KG representation learning and recommendation as two separate but related tasks and alternate between them in a multi-task learning framework. This lets the KG and the recommender fuse more deeply to some degree.
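As a toy illustration of the joint-learning mode, the two objectives collapse into one scalar that both models are trained against; the names (`rec_loss`, `kge_loss`) and the weight are illustrative, not from a specific paper:

```python
# Joint learning, minimal sketch: one scalar objective so gradients flow into
# both the recommender and the KG embeddings at every step.
def joint_loss(rec_loss, kge_loss, kge_weight=0.01):
    # kge_weight trades off recommendation accuracy vs. KG-embedding quality.
    return rec_loss + kge_weight * kge_loss
```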

    Before introducing these papers, let's first briefly review the common ways of learning knowledge graph embeddings:

    • TransE: make the embeddings satisfy h + r ≈ t, i.e. the tail entity is the head entity translated by the relation; the score is f_r(h, t) = ‖h + r − t‖. TransE cannot model many-to-one or many-to-many relations well, so it performs poorly on complex relations.

    • TransH: project entities onto a hyperplane defined by the relation. Notably, the projection is asymmetric.

    • TransR: this model holds that entities and relations are semantically different and should live in separate semantic spaces; moreover, different relations should define different spaces. TransR therefore uses a relation-specific projection matrix to map entities from entity space into the corresponding relation space.


    • TransD: this model holds that head and tail entities usually differ considerably in their attributes, so each should have its own relation projection matrix. Since matrix multiplication is expensive, TransD also replaces it with vector multiplication, which speeds up computation.

    • NTN: represent each entity as the average of the word vectors of its name, which shares textual information among entities with similar names.
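The TransE and TransR scoring functions above can be sketched in a few lines (a minimal illustration; shapes and values are made up, and lower score means a more plausible triple):

```python
import numpy as np

def transe_score(h, r, t):
    # TransE: h + r should land near t in one shared space.
    return np.linalg.norm(h + r - t)

def transr_score(h, r, t, M_r):
    # TransR: first project entities into the relation-specific space via M_r.
    return np.linalg.norm(M_r @ h + r - M_r @ t)

h, r, t = np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])
print(transe_score(h, r, t))  # exact translation, so the score is 0.0
```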

    Next we go through two papers in detail: CKE and RippleNet.

    CKE

    • Paper: Collaborative Knowledge base Embedding
    • Link: https://www.kdd.org/kdd2016/papers/files/adf0066-zhangA.pdf
    • You can also get it by replying "2019" in the backend of our official account.

    Published at KDD 2016, CKE fuses the KG with collaborative filtering for joint training. To make use of the knowledge base, the authors design three components that extract semantic features from structural, textual, and visual knowledge respectively, as in the right half of the figure above. The knowledge base is processed as follows:

    Structural knowledge

    The entities in the knowledge base and the links between them. TransR extracts an item's structural information (considering both nodes and relations). As shown below, for each triple (h, r, t), the entities are first projected toward relation r to obtain h_r and t_r, and then h_r + r ≈ t_r is enforced, pulling head and tail entities close under relation r and pushing apart entities that do not share it.

    Textual knowledge

    Textual descriptions of entities. A stacked denoising autoencoder (SDAE) extracts the textual representation; the figure shows a Bayesian SDAE, meaning the weights, biases, and output layer follow specified normal distributions. The visual branch is handled the same way.

    Visual knowledge

    Image descriptions of entities, such as posters. A stacked convolutional autoencoder (SCAE) extracts the item's visual representation.

    The final item representation is an offset vector plus the structural, textual, and visual knowledge vectors.

    The features extracted from the knowledge base are then fused into collaborative filtering, i.e. combined with the user feedback on the left of the figure and trained as CF; the training loss uses pair-wise ranking optimization.
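That pair-wise objective is BPR-style: for a user, an observed item should score higher than an unobserved one. A minimal sketch (the function name and inputs are illustrative):

```python
import numpy as np

# BPR-style pair-wise loss: minimize -log sigmoid(pos - neg),
# which is small when the positive item already outranks the negative one.
def bpr_loss(score_pos, score_neg):
    return -np.log(1.0 / (1.0 + np.exp(-(score_pos - score_neg))))
```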
    #TransR
    def projection_transR_pytorch(original, proj_matrix):
        ent_embedding_size = original.shape[1]
        rel_embedding_size = proj_matrix.shape[1] // ent_embedding_size
        original = original.view(-1, ent_embedding_size, 1)
        # a single projection matrix does the job
        proj_matrix = proj_matrix.view(-1, rel_embedding_size, ent_embedding_size)
        return torch.matmul(proj_matrix, original).view(-1, rel_embedding_size)
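To sanity-check the shapes, here is a standalone toy run of the function above (repeated so the snippet runs on its own; the sizes are arbitrary):

```python
import torch

# Batch of 4, 2-dim entities, 3-dim relation space.
def projection_transR_pytorch(original, proj_matrix):
    ent_embedding_size = original.shape[1]
    rel_embedding_size = proj_matrix.shape[1] // ent_embedding_size
    original = original.view(-1, ent_embedding_size, 1)
    proj_matrix = proj_matrix.view(-1, rel_embedding_size, ent_embedding_size)
    return torch.matmul(proj_matrix, original).view(-1, rel_embedding_size)

original = torch.randn(4, 2)         # [batch, ent_dim]
proj_matrix = torch.randn(4, 3 * 2)  # [batch, rel_dim * ent_dim]
print(projection_transR_pytorch(original, proj_matrix).shape)  # torch.Size([4, 3])
```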


    RippleNet

    • Paper: RippleNet: Propagating User Preferences on the Knowledge Graph for Recommender Systems
    • Link: https://arxiv.org/abs/1803.03467
    • You can also get it by replying "2020" in the backend of our official account.

    When different techniques can be fused more deeply, better information naturally follows. Ripple Network simulates how a user's interests propagate over the knowledge graph, much like ripples spreading on water: as in the figure above, propagation starts from the entity Forrest Gump and spreads through hop 1, then hop 2, with decaying weights. The model is shown in the figure below. For a given user u and item v, how do we simulate the propagation of the user's interests over the KG? The authors' approach is to score every triple (h, r, t) in the knowledge graph by similarity against the items in the user's history:

    v is the item vector, r the relation, and h the head entity; their similarity p_i = softmax(vᵀR_i h_i) gives the green squares after Rh in the figure. Weighting the tail entities t_i of those triples by p_i then yields the first hop of the diffusion, o¹ = Σ_i p_i t_i.

    The final user feature is the sum over all hops. Note that Ripple Network never represents the user directly with a learned vector; instead, the set of vectors of items the user has clicked serves as the user's feature (in the code you can also use only the last o).

    The summation can be viewed as the response of v among u's one-hop relevant entities. Repeating the process over u's two-hop and three-hop entities lets v diffuse outward over the knowledge graph layer by layer. Finally, the user feature is matched against the item to produce the prediction ŷ = σ(uᵀv).
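A single hop of this propagation can be sketched in plain numpy (all shapes and names here are illustrative, not the authors' code):

```python
import numpy as np

# One RippleNet-style hop: attention p_i = softmax(v . (R_i h_i)) over the
# user's memory triples, then o is the attention-weighted sum of tail vectors.
def one_hop(v, heads, relations, tails):
    # heads/tails: [n_memory, d]; relations: [n_memory, d, d]; v: [d]
    Rh = np.einsum('mij,mj->mi', relations, heads)  # R_i h_i per triple
    logits = Rh @ v
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                            # softmax weights
    return probs @ tails                            # o = sum_i p_i t_i

rng = np.random.default_rng(0)
d, m = 4, 5
o = one_hop(rng.normal(size=d), rng.normal(size=(m, d)),
            rng.normal(size=(m, d, d)), rng.normal(size=(m, d)))
print(o.shape)  # (4,)
```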

    Now let's look at the model class. The code breaks down into: data input, embedding lookup, computing and updating each hop's result in turn (following the formulas above), and prediction. Last come the loss function (three parts) and the train/eval functions.

    class RippleNet(object):
        def __init__(self, args, n_entity, n_relation):
            self._parse_args(args, n_entity, n_relation)
            self._build_inputs()
            self._build_embeddings()
            self._build_model()
            self._build_loss()
            self._build_train()

        def _parse_args(self, args, n_entity, n_relation):
            self.n_entity = n_entity
            self.n_relation = n_relation
            self.dim = args.dim
            self.n_hop = args.n_hop
            self.kge_weight = args.kge_weight
            self.l2_weight = args.l2_weight
            self.lr = args.lr
            self.n_memory = args.n_memory
            self.item_update_mode = args.item_update_mode
            self.using_all_hops = args.using_all_hops

        def _build_inputs(self):
            # inputs: item ids, labels, and each hop's ripple set for the user
            self.items = tf.placeholder(dtype=tf.int32, shape=[None], name="items")
            self.labels = tf.placeholder(dtype=tf.float64, shape=[None], name="labels")
            self.memories_h = []
            self.memories_r = []
            self.memories_t = []

            for hop in range(self.n_hop):  # placeholders for each hop's triples
                self.memories_h.append(
                    tf.placeholder(dtype=tf.int32, shape=[None, self.n_memory], name="memories_h_" + str(hop)))
                self.memories_r.append(
                    tf.placeholder(dtype=tf.int32, shape=[None, self.n_memory], name="memories_r_" + str(hop)))
                self.memories_t.append(
                    tf.placeholder(dtype=tf.int32, shape=[None, self.n_memory], name="memories_t_" + str(hop)))

        def _build_embeddings(self):  # build the embedding tables
            self.entity_emb_matrix = tf.get_variable(name="entity_emb_matrix", dtype=tf.float64,
                                                     shape=[self.n_entity, self.dim],
                                                     initializer=tf.contrib.layers.xavier_initializer())
            # a relation maps head to tail, so its embedding is a dim x dim matrix
            self.relation_emb_matrix = tf.get_variable(name="relation_emb_matrix", dtype=tf.float64,
                                                       shape=[self.n_relation, self.dim, self.dim],
                                                       initializer=tf.contrib.layers.xavier_initializer())

        def _build_model(self):
            # transformation matrix for updating item embeddings at the end of each hop
            # not strictly necessary: a plain replace or plus update also works
            self.transform_matrix = tf.get_variable(name="transform_matrix", shape=[self.dim, self.dim], dtype=tf.float64,
                                                    initializer=tf.contrib.layers.xavier_initializer())

            # [batch size, dim], look up the item embeddings
            self.item_embeddings = tf.nn.embedding_lookup(self.entity_emb_matrix, self.items)

            self.h_emb_list = []
            self.r_emb_list = []
            self.t_emb_list = []
            for i in range(self.n_hop):  # per-hop entity and relation embedding lists
                # [batch size, n_memory, dim]
                self.h_emb_list.append(tf.nn.embedding_lookup(self.entity_emb_matrix, self.memories_h[i]))

                # [batch size, n_memory, dim, dim]
                self.r_emb_list.append(tf.nn.embedding_lookup(self.relation_emb_matrix, self.memories_r[i]))

                # [batch size, n_memory, dim]
                self.t_emb_list.append(tf.nn.embedding_lookup(self.entity_emb_matrix, self.memories_t[i]))

            # compute each hop's output following the formulas
            o_list = self._key_addressing()

            # final scores
            self.scores = tf.squeeze(self.predict(self.item_embeddings, o_list))
            self.scores_normalized = tf.sigmoid(self.scores)

        def _key_addressing(self):  # compute o_list
            o_list = []
            for hop in range(self.n_hop):  # hop by hop
                # [batch_size, n_memory, dim, 1]
                h_expanded = tf.expand_dims(self.h_emb_list[hop], axis=3)

                # [batch_size, n_memory, dim], compute Rh with matmul
                Rh = tf.squeeze(tf.matmul(self.r_emb_list[hop], h_expanded), axis=3)

                # [batch_size, dim, 1]
                v = tf.expand_dims(self.item_embeddings, axis=2)

                # [batch_size, n_memory], inner product with v gives similarities
                probs = tf.squeeze(tf.matmul(Rh, v), axis=2)

                # [batch_size, n_memory], softmax-normalized scores
                probs_normalized = tf.nn.softmax(probs)

                # [batch_size, n_memory, 1]
                probs_expanded = tf.expand_dims(probs_normalized, axis=2)

                # [batch_size, dim], weight the tail entities by the scores to get o
                o = tf.reduce_sum(self.t_emb_list[hop] * probs_expanded, axis=1)

                # update the item embedding and keep o
                self.item_embeddings = self.update_item_embedding(self.item_embeddings, o)
                o_list.append(o)
            return o_list

        def update_item_embedding(self, item_embeddings, o):
            # after each hop the item embedding can be updated with one of several strategies
            if self.item_update_mode == "replace":  # replace directly
                item_embeddings = o
            elif self.item_update_mode == "plus":  # add o to the current embedding
                item_embeddings = item_embeddings + o
            elif self.item_update_mode == "replace_transform":  # apply the transformation matrix
                item_embeddings = tf.matmul(o, self.transform_matrix)
            elif self.item_update_mode == "plus_transform":  # add, then transform
                item_embeddings = tf.matmul(item_embeddings + o, self.transform_matrix)
            else:
                raise Exception("Unknown item updating mode: " + self.item_update_mode)
            return item_embeddings

        def predict(self, item_embeddings, o_list):
            y = o_list[-1]  # 1. use only the last vector of o_list
            if self.using_all_hops:  # 2. or sum all hop vectors to represent the user
                for i in range(self.n_hop - 1):
                    y += o_list[i]

            # [batch_size], inner product of user and item gives the prediction
            scores = tf.reduce_sum(item_embeddings * y, axis=1)
            return scores

        def _build_loss(self):  # the loss has three parts
            # 1. log loss for the recommendation task
            self.base_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=self.labels, logits=self.scores))

            # 2. loss for the knowledge graph embedding
            self.kge_loss = 0
            for hop in range(self.n_hop):
                h_expanded = tf.expand_dims(self.h_emb_list[hop], axis=2)
                t_expanded = tf.expand_dims(self.t_emb_list[hop], axis=3)
                hRt = tf.squeeze(tf.matmul(tf.matmul(h_expanded, self.r_emb_list[hop]), t_expanded))
                self.kge_loss += tf.reduce_mean(tf.sigmoid(hRt))  # how plausible the representation makes each (h, R, t) triple
            self.kge_loss = -self.kge_weight * self.kge_loss

            # 3. L2 regularization
            self.l2_loss = 0
            for hop in range(self.n_hop):
                self.l2_loss += tf.reduce_mean(tf.reduce_sum(self.h_emb_list[hop] * self.h_emb_list[hop]))
                self.l2_loss += tf.reduce_mean(tf.reduce_sum(self.t_emb_list[hop] * self.t_emb_list[hop]))
                self.l2_loss += tf.reduce_mean(tf.reduce_sum(self.r_emb_list[hop] * self.r_emb_list[hop]))
                if self.item_update_mode == "replace_transform" or self.item_update_mode == "plus_transform":  # regularize the transform matrix when it is used
                    self.l2_loss += tf.nn.l2_loss(self.transform_matrix)
            self.l2_loss = self.l2_weight * self.l2_loss

            self.loss = self.base_loss + self.kge_loss + self.l2_loss  # sum of the three

        def _build_train(self):  # optimize with Adam
            self.optimizer = tf.train.AdamOptimizer(self.lr).minimize(self.loss)
            '''
            optimizer = tf.train.AdamOptimizer(self.lr)
            gradients, variables = zip(*optimizer.compute_gradients(self.loss))
            gradients = [None if gradient is None else tf.clip_by_norm(gradient, clip_norm=5)
                         for gradient in gradients]
            self.optimizer = optimizer.apply_gradients(zip(gradients, variables))
            '''


        def train(self, sess, feed_dict):  # one training step
            return sess.run([self.optimizer, self.loss], feed_dict)

        def eval(self, sess, feed_dict):  # evaluation
            labels, scores = sess.run([self.labels, self.scores_normalized], feed_dict)
            # compute AUC and accuracy
            auc = roc_auc_score(y_true=labels, y_score=scores)
            predictions = [1 if i >= 0.5 else 0 for i in scores]
            acc = np.mean(np.equal(predictions, labels))
            return auc, acc

    The complete line-by-line annotated notes (in Chinese) are at: https://github.com/nakaizura/Source-Code-Notebook/tree/master/RippleNet

    On implementing multi-hop

    While reading the paper, I never quite understood how multi-hop was implemented, so let's see how the code does it:

    # for multi-hop rippling: the result set of triples at each hop
    def get_ripple_set(args, kg, user_history_dict):
        print('constructing ripple set ...')

        # user -> [(hop_0_heads, hop_0_relations, hop_0_tails), (hop_1_heads, hop_1_relations, hop_1_tails), ...]
        ripple_set = collections.defaultdict(list)

        for user in user_history_dict:  # for each user
            for h in range(args.n_hop):  # propagate the user's interests over n_hop hops in the KG
                memories_h = []
                memories_r = []
                memories_t = []

                if h == 0:  # at hop 0, the previous hop's tails are simply the user's click history
                    tails_of_last_hop = user_history_dict[user]
                else:  # otherwise take the tail entities of the previous hop
                    tails_of_last_hop = ripple_set[user][-1][2]

                # expand the previous hop's tails into new (h, r, t) triples
                for entity in tails_of_last_hop:
                    for tail_and_relation in kg[entity]:
                        memories_h.append(entity)
                        memories_r.append(tail_and_relation[1])
                        memories_t.append(tail_and_relation[0])

                # if the current ripple set of the given user is empty, we simply copy the ripple set of the last hop here
                # this won't happen for h = 0, because only the items that appear in the KG have been selected
                # this only happens on 154 users in Book-Crossing dataset (since both BX dataset and the KG are sparse)
                if len(memories_h) == 0:
                    ripple_set[user].append(ripple_set[user][-1])
                else:
                    # sample a fixed-size neighborhood for each user
                    replace = len(memories_h) < args.n_memory
                    indices = np.random.choice(len(memories_h), size=args.n_memory, replace=replace)
                    memories_h = [memories_h[i] for i in indices]
                    memories_r = [memories_r[i] for i in indices]
                    memories_t = [memories_t[i] for i in indices]
                    ripple_set[user].append((memories_h, memories_r, memories_t))

        return ripple_set

    This builds a ripple_set, which effectively contains all the nodes the multi-hop propagation should visit; at every hop a fixed-size neighborhood is sampled for each user and stored in the set. The model part can then simply iterate over the hops.


    - END -
