Fall detection algorithm based on spatial-temporal adaptive graph convolution network

【Authors】 Liu Pengfei; Li Weitong

【Corresponding Author】 Li Weitong

【Affiliation】 School of Information Engineering, Guangdong University of Technology

【Abstract】 To address the problems that existing graph convolutional networks (GCNs) require a manually pre-defined human skeleton topology and tend to have large models, a fall detection algorithm based on a spatial-temporal adaptive graph convolutional network (ST-AGCN) is proposed. The network consists of three parts. First, the HRNet pose estimation algorithm extracts human skeleton keypoint sequences from video and preprocesses them into a four-dimensional tensor. Second, a normalized embedded Gaussian function learns the human body topology (without manual pre-definition), and spatial adaptive graph convolution captures the correlation features among body joints. Third, multi-scale convolution extracts temporal motion features, improving the model's ability to capture dynamic information. Simulations on a public dataset and a self-built dataset achieve accuracies of 95.45% and 99.55%, respectively. The results show that the proposed algorithm outperforms current GCN-based methods while using only a quarter of their parameters, or even fewer. The algorithm also generalizes to different datasets.
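For readers unfamiliar with adaptive graph convolutions, the sketch below illustrates in PyTorch the two core ideas named in the abstract: a normalized embedded Gaussian (a softmax over embedded joint-to-joint similarities) that learns the skeleton topology instead of using a hand-defined graph, and a multi-scale convolution along the frame axis for temporal motion features. This is an illustrative reconstruction, not the authors' published code; the module names, embedding width, temporal kernel sizes, channel counts, and the choice of 17 COCO keypoints are assumptions.

# Minimal PyTorch sketch of a spatial-temporal adaptive graph convolution block.
# Illustrative reconstruction only, NOT the authors' implementation:
# layer sizes, kernel choices, and module names are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialAdaptiveGraphConv(nn.Module):
    """Learns the skeleton topology with a normalized embedded Gaussian
    (softmax over embedded dot-products) instead of a hand-defined graph."""

    def __init__(self, in_channels, out_channels, embed_channels=16):
        super().__init__()
        self.theta = nn.Conv2d(in_channels, embed_channels, kernel_size=1)  # joint embedding
        self.phi = nn.Conv2d(in_channels, embed_channels, kernel_size=1)    # joint embedding
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=1)     # channel projection

    def forward(self, x):
        # x: (N, C, T, V) -- batch, coordinate channels, frames, skeleton joints
        n, c, t, v = x.shape
        # Embed and pool over time so joints can be compared: (N, V, E) and (N, E, V)
        q = self.theta(x).mean(dim=2).permute(0, 2, 1)
        k = self.phi(x).mean(dim=2)
        # Normalized embedded Gaussian: softmax of joint-to-joint similarities
        adj = F.softmax(torch.bmm(q, k), dim=-1)              # (N, V, V), learned topology
        # Aggregate joint features along the learned graph, then project channels
        x = torch.einsum('nctv,nvw->nctw', x, adj)
        return self.conv(x)


class MultiScaleTemporalConv(nn.Module):
    """Parallel temporal convolutions with different kernel sizes to capture
    motion at several time scales (the exact scales are an assumption)."""

    def __init__(self, channels, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels // len(kernel_sizes),
                      kernel_size=(k, 1), padding=(k // 2, 0))
            for k in kernel_sizes
        )

    def forward(self, x):
        # x: (N, C, T, V); each branch convolves only along the time axis
        return torch.cat([branch(x) for branch in self.branches], dim=1)


class STAGCNBlock(nn.Module):
    """One ST-AGCN-style block: adaptive spatial graph convolution followed by
    multi-scale temporal convolution (the fall / no-fall classifier head is omitted)."""

    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.spatial = SpatialAdaptiveGraphConv(in_channels, out_channels)
        self.temporal = MultiScaleTemporalConv(out_channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.temporal(self.relu(self.spatial(x))))


if __name__ == "__main__":
    # Toy input: 2 clips, 3 coordinate channels (x, y, confidence from a pose
    # estimator such as HRNet), 64 frames, 17 keypoints -- the
    # "four-dimensional tensor" described in the abstract.
    x = torch.randn(2, 3, 64, 17)
    out = STAGCNBlock(3, 66)(x)   # 66 channels split evenly across the 3 temporal branches
    print(out.shape)              # torch.Size([2, 66, 64, 17])

Because the adjacency matrix is produced by a softmax over learned embeddings rather than stored as a fixed, joint-count-specific graph, the same block can in principle be applied to skeletons with different numbers of keypoints, which is one plausible reading of the abstract's claim that the algorithm transfers across datasets.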

【Funding】 Supported by the Science and Technology Program of Guangdong Province (2017A010101016)
  • 【Source】 Electronic Measurement Technology (电子测量技术), No. 3, 2023
  • 【CLC Number】 TP391.41
  • 【Downloads】 60