A Neural Network Algorithm for Computing Matrix Eigenvalues and Eigenvectors
【Abstract】 When the continuous-time fully connected neural network described by the Oja learning rule (Oja-N) is used to compute the eigenvalues and eigenvectors of a real symmetric matrix, the initial vector must lie on the unit hypersphere in R^n; otherwise the network may produce limit-time overflow, which is inconvenient in applications. To overcome this defect, a new neural network algorithm (lyNN) is proposed. From the analytic solution of the lyNN differential equation, the following results are obtained: if the initial vector belongs to the subspace spanned by the eigenvectors of some eigenvalue, the lyNN equilibrium vector also belongs to that subspace; the conditions on the initial vector under which lyNN converges to an eigenvector of the largest eigenvalue are analyzed; the maximum initial-vector space for which the lyNN equilibrium vector falls into the eigensubspace of a given eigenvalue is determined; if the initial vector is perpendicular to a known eigenvector, so is the equilibrium vector; and the equilibrium vector is shown to lie on the hypersphere determined by the nonzero initial vector. Based on these results, a concrete algorithm for computing the eigenvalues and eigenvectors of a real symmetric matrix with lyNN is designed, and its validity is demonstrated by two worked examples. lyNN does not produce limit-time overflow, whereas the Oja-N-based method necessarily does when the matrix is negative definite and the initial vector lies outside the unit hypersphere, causing that algorithm to fail. Compared with optimization-based algorithms, lyNN is easier to implement and its computational cost is lower.
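The record gives no equations for lyNN itself, but the Oja-N dynamics it contrasts against are standard: the flow dx/dt = Ax − (xᵀAx)x, whose stable equilibria for a symmetric A are unit eigenvectors of the largest eigenvalue. A minimal NumPy sketch of that baseline flow (the function name, step size, and iteration count are illustrative choices, not taken from the paper):

```python
import numpy as np

def oja_flow_eigenvector(A, x0, dt=0.01, steps=20000):
    """Integrate the Oja-type flow dx/dt = A x - (x^T A x) x by forward Euler.

    For a real symmetric A with a simple largest eigenvalue, and an initial
    vector on the unit sphere with a component along the dominant eigenvector,
    x(t) converges to a unit eigenvector of that eigenvalue.
    """
    x = np.asarray(x0, dtype=float)
    x = x / np.linalg.norm(x)  # Oja-N assumes the start lies on the unit hypersphere
    for _ in range(steps):
        Ax = A @ x
        x = x + dt * (Ax - (x @ Ax) * x)
    lam = x @ A @ x  # Rayleigh quotient: approximates the largest eigenvalue
    return lam, x
```

For A = [[2, 1], [1, 2]] (eigenvalues 3 and 1) and x0 = [1, 0], the flow converges to λ ≈ 3 with eigenvector proportional to [1, 1]. The abstract's point is that this normalization requirement is exactly what lyNN removes: started off the unit sphere with a negative definite A, the Oja-N flow can blow up in finite time.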
【Key words】 neural network; symmetric matrix; eigenvalue; eigenvector; limit-time overflow
- 【Source】 软件学报 (Journal of Software), 2005, No. 06
- 【CLC Number】 TP183
- 【Cited by】 8
- 【Downloads】 1204