
Graph Attention Auto Encoder


【Author】 XIE Cheng-xin; HOU Ji-chao; CHEN Wei; WEN Xiu-mei; Hebei Institute of Architecture and Civil Engineering; Big Data Technology Innovation Center of Zhangjiakou

【Corresponding Author】 WEN Xiu-mei

【Institution】 Hebei Institute of Architecture and Civil Engineering; Big Data Technology Innovation Center of Zhangjiakou

【Abstract】 Autoencoders have emerged as a successful framework for unsupervised learning, but traditional autoencoders cannot exploit the relationships present in graph-structured data, and existing graph autoencoders in turn neglect to reconstruct the graph structure together with the node features. To address these problems, Amin Salehi et al. [9] added an attention mechanism to the graph autoencoder, reconstructing the input graph structure and node features by stacking encoder and decoder layers built on self-attention. Each encoder layer attends over the features of a node's neighbors to generate the node's embedding, and the decoder then reverses the encoding process to reconstruct the node features. On the Cora dataset, hyperparameter tuning raises the classification accuracy of the original inductive graph attention autoencoder from 82.5% to 83.4%, an improvement of 0.9 percentage points. Because the model is trained inductively, it can be applied to other, previously unseen graph structures.
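The encoder/decoder description in the abstract can be made concrete with a short sketch. Below is a minimal NumPy illustration of a single GAT-style attention layer, used first to encode node features into embeddings and then, with separate parameters, to reverse the process and reconstruct the features. All names, layer sizes, and the toy graph are illustrative assumptions rather than details from the paper, and the model's structure-reconstruction term is omitted for brevity.

```python
# Minimal sketch of a graph attention auto-encoder layer (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def softmax_rows(m):
    m = m - m.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(m)
    return e / e.sum(axis=1, keepdims=True)

def attn_layer(X, A, W, a_src, a_dst, act=np.tanh):
    """One GAT-style layer: each node attends over its neighbors
    (self-loops included) and aggregates their projected features."""
    H = X @ W                                        # linear projection
    e = (H @ a_src)[:, None] + (H @ a_dst)[None, :]  # attention logits e_ij
    e = np.where(e > 0, e, 0.2 * e)                  # LeakyReLU
    e = np.where(A > 0, e, -1e9)                     # mask non-edges
    alpha = softmax_rows(e)                          # attention weights
    return act(alpha @ H)                            # aggregate neighbors

# Toy graph: 4 nodes, 3 input features, 2-dimensional embeddings.
N, F, D = 4, 3, 2
X = rng.normal(size=(N, F))
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)           # adjacency with self-loops

W_enc, W_dec = rng.normal(size=(F, D)), rng.normal(size=(D, F))
# Encoder: node features -> embeddings.
Z = attn_layer(X, A, W_enc, rng.normal(size=D), rng.normal(size=D))
# Decoder: same functional form with its own parameters, reversing the
# encoding to reconstruct node features (linear output for reconstruction).
X_hat = attn_layer(Z, A, W_dec, rng.normal(size=F), rng.normal(size=F),
                   act=lambda h: h)
loss = np.mean((X - X_hat) ** 2)                    # feature-reconstruction loss
print(Z.shape, X_hat.shape, round(loss, 3))
```

Because the attention weights are computed from node features rather than tied to a fixed set of nodes, the same trained parameters can be applied to graphs not seen during training, which is what the abstract's inductive-learning claim refers to.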

【Fund】 2021 Basic Scientific Research Funds Project for Universities in Hebei Province (2021QNJS04)
  • 【Source】 Journal of Hebei Institute of Architecture and Civil Engineering, No. 04, 2022
  • 【CLC Number】 TP183
  • 【Downloads】 29