
采用轮廓特征匹配的红外-可见光视频自动配准

Infrared-visible video automatic registration with contour feature matching


【Author】 SUN Xing-long; HAN Guang-liang; GUO Li-hong; LIU Pei-xun; XU Ting-fa

【Corresponding author】 HAN Guang-liang

【Affiliations】 Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences; University of Chinese Academy of Sciences; School of Optoelectronics, Beijing Institute of Technology

【Abstract】 To precisely register infrared-visible video sequences in almost-planar scenes, this paper proposes an automatic registration method based on contour feature matching, which addresses the difficulty of extracting and matching registration features in multimodal images by iteratively matching the contour features of moving targets. First, moving-target detection is used to obtain target contours, and contour feature points are extracted with the Curvature Scale Space (CSS) corner detection algorithm. Then, a global shape context descriptor and a local edge-orientation histogram descriptor are built for each feature point to enable reliable feature matching. Matched point pairs from different times are stored in a feature reservoir governed by a Gaussian distance criterion. Finally, to overcome the influence of target depth variation in almost-planar scenes, the loss function of the registration matrix is computed with a random foreground-sampling strategy, and the global registration matrix is updated accordingly. The method was validated on the LITIV dataset, and the results show that it outperforms state-of-the-art comparison methods: its average overlap error over nine test sequences is only 0.194, an 18.5% reduction relative to the compared methods. The method essentially meets the precision requirement for infrared-visible video registration in almost-planar scenes and is fairly robust.
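The "global shape context descriptor" named in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, bin counts, radial range, and the chi-squared matching cost below are illustrative assumptions based on the standard shape-context formulation (Belongie et al.), which the abstract's wording appears to reference.

```python
import numpy as np

def shape_context(points, idx, n_r=5, n_theta=12, r_min=0.125, r_max=2.0):
    """Log-polar histogram of where the other contour points lie
    relative to point `idx` (standard shape-context formulation)."""
    pts = np.asarray(points, dtype=float)
    rel = np.delete(pts, idx, axis=0) - pts[idx]   # vectors to the other points
    dist = np.linalg.norm(rel, axis=1)
    dist = dist / dist.mean()                      # normalize for scale invariance
    theta = np.arctan2(rel[:, 1], rel[:, 0]) % (2 * np.pi)
    # log-spaced radial bin edges, uniform angular bins
    r_edges = np.logspace(np.log10(r_min), np.log10(r_max), n_r + 1)
    r_bin = np.searchsorted(r_edges, dist) - 1
    t_bin = (theta / (2 * np.pi) * n_theta).astype(int) % n_theta
    hist = np.zeros((n_r, n_theta))
    for rb, tb in zip(r_bin, t_bin):
        if 0 <= rb < n_r:                          # drop points outside the radial range
            hist[rb, tb] += 1
    return hist / max(hist.sum(), 1.0)             # normalized histogram

def chi2_cost(h1, h2, eps=1e-10):
    """Chi-squared distance, the usual cost for comparing shape-context
    histograms during feature matching."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))
```

In a matching pipeline of the kind the abstract describes, each contour feature point in the infrared image would be compared against candidates in the visible image by this cost (combined, per the abstract, with a local edge-orientation histogram term).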
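For the reported evaluation figure (average overlap error of 0.194), one plausible reading of "overlap error" is one minus the intersection-over-union of the foreground masks after warping one view into the other. The exact definition is in the paper itself, so the helper below is only an assumed sketch of such a metric:

```python
import numpy as np

def overlap_error(mask_a, mask_b):
    """1 - IoU of two binary foreground masks (e.g. the infrared
    foreground warped by the registration matrix vs. the visible
    foreground). Lower is better; 0.0 means perfect overlap."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 0.0                      # both masks empty: nothing to misalign
    inter = np.logical_and(a, b).sum()
    return 1.0 - inter / union
```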

【Fund】 National Natural Science Foundation of China (No. 61602432)
  • 【Source】 光学精密工程 (Optics and Precision Engineering), 2020, No. 05
  • 【CLC number】 TP391.41
  • 【Cited by】 9
  • 【Downloads】 343