Research on Data Mining Methods Based on Rough Set
【Author】 Cui Guangcai (崔广才)
【Supervisor】 Liu Dayou (刘大有)
【Author Information】 Jilin University, Computer Application Technology, 2004, Ph.D.
【Abstract (translated from the Chinese)】 In recent years, data mining has attracted great attention in the information industry, chiefly because large amounts of data are available and there is an urgent need to turn those data into useful information and knowledge. Rough set theory is of great importance to artificial intelligence and cognitive science. From the moment it was proposed it was valued and highly appraised by Zadeh, the founder of fuzzy mathematics, who listed it among the foundational theories of the soft computing paradigm he advocated. Applying rough sets to data mining improves the ability to analyze and learn from the incomplete data in large databases, and has broad application prospects and practical value.

Attribute reduction is an important topic in rough set theory. Large databases often contain many attributes that are redundant or unnecessary for rule discovery; researchers have found that removing them greatly improves the clarity of the system's latent knowledge, lowers the time complexity of rule discovery, and raises discovery efficiency. For the massive data in large databases, what is needed is to update the mining results incrementally rather than to re-mine the database after every update. Such algorithms update knowledge gradually, revising and reinforcing previously discovered knowledge. Incremental algorithms are among the important means of improving learning efficiency: applied to data mining, they not only have lower complexity but also allow the existing rule set to be revised as new instances are added.

To address these problems, this dissertation studies data mining methods based on rough sets and genetic algorithms. The main work includes:

1. The principles and current state of data mining are studied. Current data mining methods synthesize research results from databases, artificial intelligence, statistics, pattern recognition, machine learning, data analysis, and many other fields. Starting from data mining and knowledge classification, this dissertation discusses the relevant concepts, working steps, and key technologies of data mining. Data mining (DM) is the nontrivial process of discovering implicit, previously unknown, and useful knowledge in large amounts of raw data; put simply, it is the process from data to knowledge. In a data mining system the database is divided into two parts, a training set and a testing set; a learning process runs on the training set and yields the corresponding knowledge model. The main working steps are data preparation, the actual mining, and rule expression. On this basis, data mining is compared with knowledge discovery, online analytical processing, and so on, showing that data mining is the process of mining interesting knowledge from the large volumes of data stored in databases, data warehouses, or other information repositories. Viewed as data analysis, OLAP sits at a shallower level while DM lies deeper. The main data mining methods include decision trees, neural networks, fuzzy theory, genetic algorithms, Bayesian networks, and rough sets. By summarizing these methods, a model of a data mining system is derived.

2. The basic theory of rough sets, the basic methods and algorithms of attribute reduction, and the basic theory of genetic algorithms are analyzed in depth. Rough set theory is a new mathematical tool for handling vague and uncertain knowledge; its main idea is to derive decision or classification rules through knowledge reduction while keeping the classification ability unchanged. The core problems of rough sets are knowledge reduction and acquisition, which rest on a series of supporting algorithms: computing equivalence relations, computing upper and lower approximations, judging attribute significance, computing the core, attribute reduction, and so on. Among these, attribute reduction is the principal means by which rough sets are applied to data analysis, and the design and implementation of reduction algorithms is one of the important topics of rough set research. The relationships between knowledge reduction and knowledge dependency and between knowledge representation systems and decision tables are discussed; the implementation principles of several attribute reduction methods are examined and compared, and the strengths and weaknesses of each are given. Finding all reducts is in essence a search over combinations of attributes, and guiding that search with heuristic rules greatly reduces the algorithm's complexity. The heuristic reduction algorithm based on attribute significance and frequency must compute the core of the decision table, but when the table has many attributes it may have no core, depriving the algorithm of a good starting point. Introducing a user preference set lets attributes that a user subjectively considers important be forced into the reduct before the algorithm runs, which gives the algorithm good accuracy and scalability; however, reducts obtained this way often contain many attributes, making the antecedents of the resulting rules very long, and the final result may still contain superfluous attributes and thus fail to satisfy the definition of a reduct. The common issues in implementing genetic algorithms are studied in depth, and the design and implementation of the various operators are given. This dissertation proposes an improvement to the selection operator: the individuals of the population are first grouped into classes of identical individuals and the fitness of each class is computed; if several classes tie for the maximum fitness, the numbers of individuals in those classes are adjusted to be roughly equal; the individuals of the population are then selected with the roulette-wheel algorithm.

3. The application of information theory to attribute reduction in decision information systems is studied and combined with genetic algorithms, improving the performance of the original reduction algorithm. Attribute significance reflects the increase in mutual information when an attribute is added to the core; this dissertation introduces the information-theoretic significance measure into a genetic algorithm as heuristic information, yielding a heuristic genetic algorithm for finding minimal reducts. A repair operator repairs the population so that every individual is a candidate reduct, keeping the search within the feasible solution space, and increases individual fitness as far as possible while preserving candidacy. The weighted average of the information-entropy significance measure and the rough-set attribute dependency degree serves as the basis of the repair operation; because every repaired individual is a candidate reduct, the genetic operators act on the space of feasible candidate solutions, saving computational resources and effectively speeding up convergence. A further step is added to the selection operator: when the worst individual of the next generation is less fit than the best individual of the previous generation, the former is replaced by the latter, and the proportion of superior individuals is adjusted appropriately to preserve population diversity. The termination condition is improved so that it is determined by the stability of the population, ensuring that the genetic…
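As a concrete illustration of the information-theoretic significance measure used in item 3 above, the following minimal Python sketch computes the significance of an attribute a relative to a set B as the mutual-information gain I(B ∪ {a}; d) − I(B; d), which equals the drop in conditional entropy H(d | B) − H(d | B ∪ {a}). This is a reconstruction under stated assumptions, not the dissertation's implementation: the decision table is assumed to be a list of dicts, and all names are illustrative.

```python
from collections import Counter, defaultdict
from math import log2

def partition(rows, attrs):
    """Equivalence classes (lists of row indices) of the
    indiscernibility relation IND(attrs)."""
    blocks = defaultdict(list)
    for i, row in enumerate(rows):
        blocks[tuple(row[a] for a in attrs)].append(i)
    return list(blocks.values())

def cond_entropy(rows, attrs, dec):
    """H(dec | attrs): conditional entropy of the decision attribute
    given the partition induced by `attrs` (plain H(dec) if attrs is empty)."""
    n, h = len(rows), 0.0
    for block in partition(rows, attrs):
        counts = Counter(rows[i][dec] for i in block)
        h -= (len(block) / n) * sum(
            (c / len(block)) * log2(c / len(block)) for c in counts.values())
    return h

def significance(rows, base, a, dec):
    """Mutual-information gain of adding attribute `a` to `base`:
    I(base + [a]; dec) - I(base; dec) = H(dec|base) - H(dec|base + [a])."""
    return cond_entropy(rows, base, dec) - cond_entropy(rows, base + [a], dec)

# Toy decision table: `temp` alone determines `flu`, so its
# significance relative to the empty set is a full bit.
rows = [
    {"headache": "yes", "temp": "high",   "flu": "yes"},
    {"headache": "yes", "temp": "normal", "flu": "no"},
    {"headache": "no",  "temp": "high",   "flu": "yes"},
    {"headache": "no",  "temp": "normal", "flu": "no"},
]
print(significance(rows, [], "temp", "flu"))      # 1.0
print(significance(rows, [], "headache", "flu"))  # 0.0
```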
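The modified selection operator and the elitist replacement step described in items 2 and 3 can be sketched as follows. This is a hedged reconstruction from the abstract alone: individuals are assumed hashable (e.g. bit-tuples over condition attributes), fitness values are assumed positive, and the exact count-adjustment rule is an assumption.

```python
import random

def select_next_generation(population, fitness, rng=random):
    """Selection following the scheme in the abstract: group identical
    individuals into classes, equalise the counts of the classes tied
    for the best fitness, then roulette-wheel sample by fitness."""
    # 1. Group identical individuals; `classes` maps individual -> count.
    classes = {}
    for ind in population:
        classes[ind] = classes.get(ind, 0) + 1
    # 2. Make the classes sharing the maximum fitness roughly equal in size
    #    (the exact adjustment rule is an assumption).
    best = max(fitness(ind) for ind in classes)
    tied = [ind for ind in classes if fitness(ind) == best]
    if len(tied) > 1:
        share = max(1, sum(classes[ind] for ind in tied) // len(tied))
        for ind in tied:
            classes[ind] = share
    # 3. Roulette wheel: weight = fitness * multiplicity (fitness > 0 assumed).
    pool = list(classes)
    weights = [fitness(ind) * classes[ind] for ind in pool]
    return rng.choices(pool, weights=weights, k=len(population))

def elitist_replace(prev_gen, next_gen, fitness):
    """The added selection step: if the worst individual of the new
    generation is less fit than the best of the previous one, replace it."""
    best_prev = max(prev_gen, key=fitness)
    worst = min(range(len(next_gen)), key=lambda i: fitness(next_gen[i]))
    if fitness(next_gen[worst]) < fitness(best_prev):
        next_gen = list(next_gen)
        next_gen[worst] = best_prev
    return next_gen
```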
【Abstract】 Data mining attracts great attention in the information industry. The major reason is that a large amount of data is available and it is urgently necessary to convert these data into useful information and knowledge.

Rough set theory is very important for artificial intelligence and cognitive science. It has been emphasized and highly appraised by Zadeh, the founder of fuzzy mathematics, ever since it appeared, and he listed it among the basic theories of the soft computing he newly advocated. Applying rough set theory in the data mining field can improve the ability to analyze and learn from the incomplete data of large databases, and it has extensive application prospects and practical value. Attribute reduction is a significant topic of rough set theory. A large database usually involves many attributes that are redundant or unnecessary for discovering rules; researchers found that eliminating the redundant attributes improves the clarity of the system's potential knowledge, lowers the time complexity of discovering the rules, and raises discovery efficiency.

For the mass data in a large database, it is advisable to update the data mining results incrementally rather than mine the updated database over again each time. An incremental algorithm works in step with database updates, so it is not necessary to mine all the data over again: it updates knowledge incrementally, modifying and strengthening the discovered knowledge. The incremental algorithm is one of the major algorithms for raising learning efficiency; applying it in data mining reduces complexity and allows the existing rules to be revised with new instances. Data mining based on rough sets and genetic algorithms is investigated in this paper to solve the problems mentioned above:

1. It studies the principles and current state of data mining, which integrates research achievements from the fields of databases, artificial intelligence, statistics, pattern recognition, machine learning, data analysis, etc. This paper discusses the corresponding concepts, working steps, and key technologies of data mining from the point of view of data mining and knowledge classification. Data mining (DM) is a nontrivial process of identifying hidden, undiscovered, and potentially useful knowledge in mass original data; in brief, it is a process transforming data into knowledge. In a data mining system the database is divided into two parts, a training set and a testing set; a learning process is run on the training set and the corresponding knowledge model is obtained. The major working steps include data preparation, the practical mining, and rule description. On this basis data mining is compared with knowledge discovery and online analysis, indicating that data mining is a process of mining interesting knowledge from the mass data in databases, data warehouses, and other information repositories. From the point of view of data analysis, OLAP is located in the shallower layer and DM in the deeper. The main methods of data mining include decision trees, neural networks, fuzzy theory, genetic algorithms, Bayesian networks, and rough sets. A data mining system model is obtained by summing up these methods.

2. It analyzes and investigates thoroughly the essential theory of rough sets and genetic algorithms and the basic methods and algorithms of attribute reduction. Rough set theory is a new mathematical tool for processing vague and uncertain knowledge.
Its main idea is to derive the decision or classification rules of a problem through knowledge reduction while keeping the classification ability unchanged. The core problems of rough sets are knowledge reduction and acquisition, which are supported by a series of algorithms, such as computing equivalence relations, computing upper and lower approximations, estimating attribute importance, computing the core, and attribute reduction. Among these algorithms attribute reduction is the main means of data analysis with rough sets, so the design and realization of reduction algorithms is one of the most important topics in rough set research. This paper discusses…
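To make the operations enumerated above concrete (equivalence relations, upper/lower approximations, and the positive region underlying attribute dependency), here is a minimal Python sketch under assumed names and a list-of-dicts table representation; it is not the dissertation's code. A condition subset B preserves classification ability exactly when its dependency degree γ(B, d) equals that of the full condition set, which is the candidate-reduct test the repair operator relies on.

```python
from collections import defaultdict

def partition(rows, attrs):
    """Equivalence classes (sets of row indices) of IND(attrs)."""
    blocks = defaultdict(set)
    for i, row in enumerate(rows):
        blocks[tuple(row[a] for a in attrs)].add(i)
    return list(blocks.values())

def approximations(rows, attrs, target):
    """Lower and upper approximations of a target set of row indices
    with respect to the indiscernibility relation IND(attrs)."""
    lower, upper = set(), set()
    for block in partition(rows, attrs):
        if block <= target:   # block lies entirely inside the target
            lower |= block
        if block & target:    # block overlaps the target
            upper |= block
    return lower, upper

def dependency(rows, attrs, dec):
    """gamma(attrs, dec): the fraction of objects in the positive region,
    i.e. objects whose attrs-class fixes the decision uniquely."""
    pos = set()
    for dec_block in partition(rows, [dec]):
        lower, _ = approximations(rows, attrs, dec_block)
        pos |= lower
    return len(pos) / len(rows)

# Candidate-reduct test: a subset B preserves classification ability iff
# dependency(rows, B, dec) == dependency(rows, C, dec) for the full set C.
```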