Pretraining Method for Medical Image Models Based on Self-Supervised Contrastive Learning
【Author】 Liu Shifeng
【Supervisor】 Wang Xin
【Author Information】 Jilin University, Computer Application Technology, 2022, Master's thesis
【Abstract (translated)】 Since the emergence of deep learning methods based on convolutional neural networks, automated medical image processing has made great progress. Under the mainstream supervised learning paradigm, training a neural network relies on large-scale labeled data. Because labeling requires substantial labor and domain expertise, and is constrained by the many specific application scenarios in medicine, annotating medical images is especially costly. It is therefore difficult to improve a model's performance simply by increasing the amount of labeled data. At the same time, many medical image datasets contain large numbers of unlabeled raw images that supervised models cannot use. Self-supervised learning can learn feature representations from unlabeled datasets and thereby improve the accuracy of downstream tasks; it thus has great potential for this problem, improving medical image models without additional labeled data while making full use of the redundant unlabeled images. This thesis studies self-supervised pretraining based on contrastive learning and its application to network models. The main research contents and contributions are as follows:

1) A self-supervised pretraining method based on contrastive learning is proposed to improve the performance of medical image models. A convolutional feature extractor is combined with the contrastive learning training paradigm to form a self-supervised pretraining method that learns the internal features of the image data. In the pretraining stage, latent feature representations are learned from unlabeled medical images, so that the model learns the relationships between data mappings in a high-dimensional vector space. A contrastive loss computed over dense features is proposed, which lets the model learn more of the internal structure of the data and improves the effectiveness of self-supervised learning. In the downstream fine-tuning task, the network loads the pretrained parameters and is trained again on labeled data. Experimental results show that the method improves model performance without increasing the amount of existing data.

2) The self-supervised pretraining method based on contrastive learning is applied to medical image segmentation: the proposed pretraining method is combined with the network model to verify its benefit for the segmentation task. In addition, a medical image segmentation network based on an attention structure is proposed. The attention module derives attention maps along the spatial and channel dimensions separately, multiplies the attention maps with the input feature maps, and performs adaptive feature refinement to highlight salient features useful for the segmentation target. Experimental results show that the attention structure improves the network's feature extraction ability and segmentation accuracy, and that performance improves further after self-supervised contrastive pretraining.
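The abstract mentions a contrastive loss computed over dense features but does not give its exact form. A minimal sketch, assuming an InfoNCE-style formulation in which spatially aligned locations of two augmented views form positive pairs and all other locations act as negatives (the function name, shapes, and temperature are illustrative assumptions, not the thesis's actual definition):

```python
import numpy as np

def dense_info_nce(feat_a, feat_b, temperature=0.1):
    """InfoNCE-style contrastive loss over dense feature maps.

    feat_a, feat_b: arrays of shape (H*W, D) -- per-location features from
    two augmented views of the same image, assumed spatially aligned so that
    row i of feat_a and row i of feat_b form a positive pair; all other
    rows serve as negatives.
    """
    # L2-normalize each location's feature vector
    a = feat_a / np.linalg.norm(feat_a, axis=1, keepdims=True)
    b = feat_b / np.linalg.norm(feat_b, axis=1, keepdims=True)
    # (H*W, H*W) similarity matrix between all location pairs
    logits = a @ b.T / temperature
    # cross-entropy with the diagonal (aligned locations) as targets
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

Compared with an image-level contrastive loss, computing the loss per spatial location is what lets the model pick up local structure, which is the property the abstract attributes to the dense formulation.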
【Abstract】 Automated medical image processing has made significant progress since the emergence of deep learning based on convolutional neural networks. Under mainstream supervised learning, neural network training relies on large-scale labeled data. Because labeling requires a large amount of labor and professional knowledge, and is limited by the various specific application scenarios in medicine, the cost of labeling medical images is especially high. It is therefore difficult to improve the performance of a network model by increasing the amount of labeled data. At the same time, many medical image datasets contain numerous unlabeled original images that the network model cannot use. Self-supervised learning learns feature representations from unlabeled datasets in a pretraining stage to improve the accuracy of downstream tasks. It therefore has great potential to improve the performance of medical image models without increasing the amount of labeled data, while making full use of redundant unlabeled data. This paper focuses on a self-supervised pretraining method based on contrastive learning and its application in network models. The main research contents and contributions are as follows:

1) A self-supervised pretraining method based on contrastive learning is proposed to improve the performance of medical image models. A feature extraction method based on convolutional neural networks is combined with the contrastive learning training paradigm to form a self-supervised pretraining method that learns prior knowledge of the image data. In the pretraining stage, latent feature representations are learned from unlabeled medical images, and the contrastive loss function learns the relationships between data mappings in a high-dimensional vector space. The method relies on two neural networks, called the online network and the target network, which receive differently augmented views of the same image; the online network is trained to predict the output of the target network, while the weight parameters of the online network are used to slowly update the target network. In the downstream fine-tuning task, the network model loads the parameters of the online network and is trained again using labeled data. Experimental results show that the proposed method improves the performance of the model without increasing the amount of existing data, verifying the effectiveness of self-supervised pretraining.

2) The self-supervised pretraining method based on contrastive learning is applied to medical image segmentation. A medical image segmentation network based on an attention structure is proposed: the network derives attention maps in the spatial and channel dimensions through the attention block, then multiplies the attention maps with the input feature maps, implementing adaptive feature refinement that highlights the salient features useful for the segmentation target. The proposed self-supervised learning method is combined with this network model. Experimental results show that the attention structure improves the feature extraction ability of the network and the accuracy of image segmentation, and that after self-supervised contrastive pretraining the performance of the network model improves further.
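The attention block described in the abstract (channel and spatial attention maps multiplied with the input feature map) can be sketched as follows. This is a CBAM-style approximation, with the learned MLP and convolution layers replaced by simple pooling plus a sigmoid gate for brevity, so every detail here is an illustrative assumption rather than the thesis's exact architecture:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x):
    # x: (C, H, W). Squeeze the spatial dimensions by average- and
    # max-pooling, then gate each channel with a value in (0, 1).
    avg = x.mean(axis=(1, 2))
    mx = x.max(axis=(1, 2))
    gate = sigmoid(avg + mx)              # shape (C,)
    return x * gate[:, None, None]

def spatial_attention(x):
    # x: (C, H, W). Squeeze the channel dimension, then gate each
    # spatial location with a value in (0, 1).
    avg = x.mean(axis=0)
    mx = x.max(axis=0)
    gate = sigmoid(avg + mx)              # shape (H, W)
    return x * gate[None, :, :]

def attention_block(x):
    # Refine along the channel dimension first, then spatially:
    # the output is the input feature map rescaled elementwise,
    # suppressing less informative channels and locations.
    return spatial_attention(channel_attention(x))
```

Because both gates lie in (0, 1), the block only rescales the input feature map; the network's feature extractor remains responsible for producing the features themselves.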
【Key words】 Contrastive learning; Self-supervised learning; Pre-training method; Attention mechanism; Medical image processing
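The English abstract states that the online network's weights are used to slowly update the target network. This corresponds to an exponential moving average of the parameters; a minimal sketch (the momentum value 0.99 is an assumption, chosen because BYOL-style methods typically use a momentum close to 1 so that the target changes slowly):

```python
import numpy as np

def ema_update(target_params, online_params, momentum=0.99):
    """Slowly pull the target network's weights toward the online network's.

    Each parameter tensor is updated as:
        target = momentum * target + (1 - momentum) * online
    The target network is never updated by gradient descent directly.
    """
    return [momentum * t + (1.0 - momentum) * o
            for t, o in zip(target_params, online_params)]
```

In the downstream fine-tuning task the target network is discarded; only the online network's parameters are loaded, as described in the abstract.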