αℓ1-βℓ2 Sparse Regularization for Nonlinear Inverse Problems
【Author】 赵辉;
【Supervisor】 丁亮;
【Author Information】 Northeast Forestry University, Applied Mathematics, 2020, Master's
【Abstract】 In recent years, sparse regularization has attracted increasing attention because of its wide range of applications. Since the classical ℓ1 sparse regularization method generally does not yield the sparsest solution, the theory of non-convex sparse regularization has become a focus of research. For linear ill-posed inverse problems, non-convex sparse regularization has been studied to some extent, both with respect to regularization theory and to numerical methods; for nonlinear ill-posed inverse problems, however, few results are available, and many questions in the theoretical analysis and the numerical treatment remain open. This thesis proposes a non-convex sparse regularization method with a penalty of the form αℓ1-βℓ2, α > β ≥ 0, for nonlinear ill-posed problems, and applies it to nonlinear compressed sensing. The coercivity, weak lower semi-continuity and Radon-Riesz property of the αℓ1-βℓ2 penalty are analyzed, and on this basis the well-posedness of the non-convex sparse regularization is discussed, that is, the existence, stability and convergence of the regularized solutions. Under suitable source and nonlinearity conditions, a convergence rate of order O(δ^{1/2}) is obtained for the regularized solutions. Because the operator equation is nonlinear and the penalty is non-convex and non-smooth, standard gradient methods cannot be applied directly to the αℓ1-βℓ2 regularization functional. The generalized conditional gradient method is therefore used to rewrite the non-convex sparse regularization functional in the form F(x)+Φ(x); the convexity and smoothness of the fitting term F(x) and of the penalty term Φ(x) are analyzed, an αℓ1-βℓ2-type soft-thresholding iteration suited to non-convex sparse regularization (the ST-(αℓ1-βℓ2) algorithm) is constructed, and its convergence is proved. The algorithm has a simple structure, is easy to implement, and resembles the classical soft-thresholding algorithm in form. Finally, the algorithm is applied to nonlinear compressed sensing and compared with the classical ℓ1 sparse regularization method, which verifies its effectiveness.
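The display below is a sketch of the setting described in the abstract, reconstructed here for readability; the exact functional, norms and constants used in the thesis may differ, and the symbol A for the nonlinear forward operator is chosen here (the thesis reserves F(x) for the smooth part of the generalized conditional gradient splitting). Noisy data y^δ with noise level ‖y − y^δ‖ ≤ δ are assumed, and the αℓ1-βℓ2 regularized solution is a minimizer of a Tikhonov-type functional,

\[
  \min_{x}\; \tfrac{1}{2}\,\bigl\|A(x) - y^{\delta}\bigr\|^{2}
  \;+\; \alpha \sum_{i} |x_i| \;-\; \beta \Bigl(\sum_{i} x_i^{2}\Bigr)^{1/2},
  \qquad \alpha > \beta \ge 0 .
\]

The soft-thresholding (shrinkage) operator that gives the ST-type iteration its name acts componentwise as

\[
  \bigl(\mathbb{S}_{\lambda}(x)\bigr)_{i}
  \;=\; \operatorname{sgn}(x_i)\,\max\bigl\{|x_i| - \lambda,\ 0\bigr\}.
\]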
【Key words】 Nonlinear ill-posed problems; Non-convex; Non-smooth; αℓ1-βℓ2 penalty term; Sparse regularization;
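As a purely illustrative companion to the abstract, the sketch below shows one plausible soft-thresholding iteration for an αℓ1-βℓ2 penalty on a toy nonlinear compressed sensing problem. It is not the thesis' ST-(αℓ1-βℓ2) algorithm: the forward operator A(x) = Ax + 0.5·sin(Ax), the linearization of the concave −β‖x‖2 term, and all parameter values are assumptions made here so that the example is self-contained and runnable.

import numpy as np

rng = np.random.default_rng(0)

m, n, s = 64, 256, 5                         # measurements, signal length, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)

x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)

def forward(x):                              # toy nonlinear forward operator (assumed here)
    z = A @ x
    return z + 0.5 * np.sin(z)

def jacobian(x):                             # Jacobian of the toy forward operator
    z = A @ x
    return (1.0 + 0.5 * np.cos(z))[:, None] * A

delta = 1e-3                                 # noise level
y_delta = forward(x_true) + delta * rng.standard_normal(m)

def soft(v, t):                              # classical componentwise soft-thresholding
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def st_l1_minus_l2(y, alpha=0.05, beta=0.025, iters=2000):
    """Illustrative soft-thresholding iteration for the penalty
    alpha*||x||_1 - beta*||x||_2: the concave -beta*||x||_2 part is
    linearized at the current iterate, the smooth misfit
    0.5*||forward(x) - y||^2 is handled by a gradient step, and the
    remaining alpha*||x||_1 term by the shrinkage operator."""
    step = 1.0 / (1.5 * np.linalg.norm(A, 2)) ** 2   # crude Lipschitz-based step size
    x = np.zeros(n)
    for _ in range(iters):
        grad = jacobian(x).T @ (forward(x) - y)      # gradient of the misfit
        nx = np.linalg.norm(x)
        if nx > 0:
            grad -= beta * x / nx                    # gradient of the linearized -beta*||x||_2
        x = soft(x - step * grad, step * alpha)
    return x

x_rec = st_l1_minus_l2(y_delta)
print("relative error        :", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
print("recovered support size:", int(np.sum(np.abs(x_rec) > 1e-3)))

With the concave term linearized, each iteration reduces to the same gradient-plus-shrinkage step as classical iterative soft-thresholding, which is consistent with the abstract's remark that the ST-(αℓ1-βℓ2) algorithm is similar in form to the classical soft-thresholding algorithm.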