融合语言模型的端到端中文语音识别算法
An End-to-End Chinese Speech Recognition Algorithm Integrating Language Model
【Abstract】 To address the poor robustness of speech recognition models on Chinese speech and their lack of language-modeling ability, which prevents them from effectively distinguishing homophones and near-homophones, this paper proposes an end-to-end Chinese speech recognition algorithm that integrates a language model. First, an acoustic model from speech to Pinyin is established based on a Deep Fully Convolutional Neural Network (DFCNN) with Connectionist Temporal Classification (CTC). Then, a language model from Pinyin to Chinese characters is constructed using the Transformer encoder. Finally, a speech frame decomposition model is designed to connect the output of the acoustic model to the input of the language model, overcoming the difficulty that the error gradient of the language model cannot be propagated back to the acoustic model, and thereby enabling joint end-to-end training of the acoustic and language models. The method is validated on real data sets. Experimental results show that introducing the language model reduces the word error rate (WER) of the algorithm by 21%, and that the end-to-end joint training plays a key role, accounting for a 43% effect on the algorithm's performance. Compared with five mainstream algorithms, the proposed method achieves a clearly lower error rate, reducing the WER by 28% relative to DeepSpeech2, the best-performing baseline.
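The CTC component mentioned in the abstract maps frame-level acoustic predictions to a shorter label sequence by collapsing consecutive repeats and removing blank symbols. The following is a minimal sketch of standard CTC greedy decoding, not the paper's code; the pinyin frame labels are hypothetical illustration.

```python
def ctc_greedy_decode(frame_labels, blank="-"):
    """Standard CTC decoding: collapse consecutive repeats, then drop blanks."""
    out = []
    prev = None
    for lab in frame_labels:
        if lab != prev:       # collapse consecutive repeated labels
            if lab != blank:  # discard the CTC blank symbol
                out.append(lab)
        prev = lab
    return out

# Hypothetical frame-level output for the pinyin sequence "ni hao"
frames = ["-", "ni", "ni", "-", "-", "hao", "hao", "hao", "-"]
print(ctc_greedy_decode(frames))  # → ['ni', 'hao']
```

In the paper's pipeline, the decoded Pinyin sequence would then be fed to the Transformer-encoder language model to resolve homophones into Chinese characters.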
【Key words】 speech recognition; CTC; language model; acoustic model; speech frame decomposition
- 【Source】 Acta Electronica Sinica (电子学报), 2021, Issue 11
- 【CLC Number】 TN912.34
- 【Cited by】 9
- 【Downloads】 815