Collaborative Robotic Arm Object-Grasping System Based on Depth-Vision Gesture Recognition
【Abstract】 To improve the accuracy of gesture recognition and to control the gripping direction of the end-effector during grasping, a gesture recognition module based on depth vision is developed to enable flexible control of the robotic arm. The deep learning model YOLOv4 is used to study gesture recognition. To address the problem that the standard YOLOv4 model produces only a single type of output and cannot control the gripper's clamping direction during grasping, the YOLOv4 model is improved: the outputs of the three feature layers are enlarged from (32, 24, 75), (16, 12, 75), (8, 6, 75) to (32, 24, 87), (16, 12, 87), (8, 6, 87), adding the output of two gesture keypoints (x, y). The positions of these two keypoints are used to judge the horizontal direction of the gesture, so that the orientation of the gripper can be controlled during grasping and the robotic arm can be controlled flexibly. Experiments show that the improved YOLOv4 algorithm achieves accurate recognition of real-time gesture images and flexible control of the robotic arm.
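As a rough illustration of the head modification described in the abstract (not the authors' implementation): with 3 anchors per scale, a standard YOLO head predicts 3 × (4 box offsets + 1 objectness + 20 classes) = 75 channels, and appending two keypoint coordinates (x, y) per anchor adds 3 × 4 = 12 channels, giving 87. The PyTorch sketch below uses hypothetical names (`KeypointHead`, `gripper_yaw`) and only shows the channel arithmetic plus one plausible way to derive a gripper yaw angle from the two predicted keypoints.

```python
import math
import torch
import torch.nn as nn

# Channel arithmetic for the widened YOLOv4 head (illustrative only):
#   original: 3 anchors * (4 box + 1 obj + 20 classes)              = 75
#   modified: 3 anchors * (4 box + 1 obj + 20 classes + 2 * (x, y)) = 87
NUM_ANCHORS = 3
NUM_CLASSES = 20
NUM_KEYPOINTS = 2  # two gesture keypoints, each contributing (x, y)

def head_channels(num_keypoints: int = 0) -> int:
    return NUM_ANCHORS * (4 + 1 + NUM_CLASSES + 2 * num_keypoints)

assert head_channels(0) == 75
assert head_channels(NUM_KEYPOINTS) == 87

class KeypointHead(nn.Module):
    """Hypothetical 1x1 conv head producing the widened 87-channel output."""
    def __init__(self, in_channels: int):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, head_channels(NUM_KEYPOINTS), kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv(x)

def gripper_yaw(p1, p2) -> float:
    """Horizontal orientation (radians) of the hand, estimated from the
    line through the two predicted gesture keypoints."""
    (x1, y1), (x2, y2) = p1, p2
    return math.atan2(y2 - y1, x2 - x1)

if __name__ == "__main__":
    head = KeypointHead(in_channels=256)              # assumed backbone feature width
    feat = torch.randn(1, 256, 24, 32)                # e.g. the 32x24 output scale
    print(head(feat).shape)                           # torch.Size([1, 87, 24, 32])
    print(math.degrees(gripper_yaw((120, 80), (200, 80))))  # horizontal hand -> 0.0 deg
```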
- 【Source】 Intelligent Computer and Applications (智能计算机与应用), 2023, No. 11
- 【CLC Number】 TP241; TP391.41
- 【Downloads】 52