
Lipreading Based on Multiple Visual Attention (基于多重视觉注意力的唇语识别)


【Author】 XIE Yincen; XUE Feng; CAO Mingwei

【Corresponding Author】 XUE Feng

【Affiliations】 School of Computer Science and Information Engineering, Hefei University of Technology; School of Software, Hefei University of Technology; School of Computer Science and Technology, Anhui University

【Abstract】 Lipreading is a technology that translates a silent video of a single speaker's lip motion into text. Because lip movements have a small amplitude, existing lipreading methods suffer from weak feature discrimination and poor generalization. To address this issue, the purification of lipreading visual features is studied along three dimensions, namely time, space and channel, and a lipreading method based on a multiple visual attention network (LipMVA) is proposed. First, channel attention adaptively calibrates channel-level features to mitigate interference from meaningless channels. Then, two spatio-temporal attention modules of different granularities suppress the influence of unimportant pixels or frames. Experiments on the CMLR and GRID datasets show that LipMVA reduces the recognition error rate, verifying the effectiveness of the method.
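
The attention design outlined in the abstract can be illustrated with a brief PyTorch sketch: a squeeze-and-excitation-style channel gate, followed by a coarse per-frame gate and a fine per-pixel gate over the video feature tensor. The module names, tensor shapes and gating layers below are illustrative assumptions, not the authors' released LipMVA implementation.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style gate that reweights feature channels."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, frames, height, width)
        w = self.gate(x.mean(dim=(2, 3, 4)))    # one weight per channel
        return x * w[:, :, None, None, None]


class SpatioTemporalAttention(nn.Module):
    """Two gates of different granularity: a coarse per-frame weight
    and a fine per-pixel (per-frame, per-location) weight."""

    def __init__(self, channels: int):
        super().__init__()
        self.frame_gate = nn.Sequential(nn.Linear(channels, 1), nn.Sigmoid())
        self.pixel_gate = nn.Sequential(nn.Conv3d(channels, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Coarse: one weight per frame, from spatially pooled features.
        f = self.frame_gate(x.mean(dim=(3, 4)).transpose(1, 2)).squeeze(-1)  # (batch, frames)
        x = x * f[:, None, :, None, None]
        # Fine: one weight per frame-and-pixel position.
        return x * self.pixel_gate(x)


# Toy usage on a hypothetical lip-region feature tensor.
feats = torch.randn(2, 64, 25, 22, 22)      # (batch, channels, frames, H, W)
feats = ChannelAttention(64)(feats)
feats = SpatioTemporalAttention(64)(feats)
print(feats.shape)                          # torch.Size([2, 64, 25, 22, 22])
```

In this sketch the three gates are applied sequentially and each only rescales the input, so the feature tensor keeps its shape and the modules can be dropped into an existing visual front-end.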

【Fund】 National Natural Science Foundation of China (No. 62272143); Major Science and Technology Project of Anhui Province (No. 202203a05020025); Collaborative Innovation Project of Anhui Universities (No. GXXT-2022-054); the 7th Special Support Plan for Innovation and Entrepreneurship Talents of Anhui Province
  • 【Source】 Pattern Recognition and Artificial Intelligence (模式识别与人工智能), No. 01, 2024
  • 【CLC Number】 TP391.41
  • 【Downloads】 60