

Research Progress in Evaluation Techniques for Large Language Models



【Author】 ZHAO Ruizhuo; QU Zichang; CHEN Guoying; WANG Kunlong; XU Zhewei; KE Wenjun; WANG Peng

【Corresponding Author】 XU Zhewei

【Affiliation】 Beijing Computer Technology and Applied Research Institute; School of Computer Science and Engineering, Southeast University

【Abstract】 With the widespread application of large language models (LLMs), their evaluation has become crucial. Beyond measuring performance on downstream tasks, potential risks also need to be assessed, such as the possibility that an LLM violates human values or is induced by malicious inputs to cause security issues. By analyzing the commonalities and differences among traditional software, deep learning models, and large models, and drawing on the metric systems established for traditional software testing and deep learning model evaluation, this paper summarizes existing work along four dimensions: functional evaluation, performance evaluation, alignment evaluation, and security evaluation of LLMs. It also introduces the evaluation benchmarks for large models. Finally, based on existing research and on potential opportunities and challenges, it discusses the technical directions and development prospects of LLM evaluation.

【Fund】 National Natural Science Foundation of China (62376057); Southeast University Start-up Research Fund (RF1028623234)
  • 【Source】 Journal of Data Acquisition and Processing (数据采集与处理), No. 3, 2024
  • 【CLC Number】 TP391.1
  • 【Downloads】 213