Research Progress in Evaluation Techniques for Large Language Models
【Abstract】 With the widespread application of large language models (LLMs), their evaluation has become crucial. Beyond assessing performance on downstream tasks, potential risks also require evaluation, for example the possibility that LLMs may violate human values or be induced by malicious inputs to cause security issues. By analyzing the commonalities and differences among traditional software, deep learning models, and large language models, and drawing on the metric systems of traditional software testing and deep learning model evaluation, this paper summarizes existing work along the dimensions of functional evaluation, performance evaluation, alignment evaluation, and security evaluation of LLMs, and introduces evaluation benchmarks for large models. Finally, based on existing research and potential opportunities and challenges, the directions and development prospects of LLM evaluation techniques are discussed.
【Key words】 large language models; functional evaluation; performance evaluation; alignment evaluation; security evaluation;
- 【Source】 数据采集与处理 (Journal of Data Acquisition and Processing), 2024, No. 03
- 【CLC Number】 TP391.1
- 【Downloads】 213