A Summary of the Strengths and Weaknesses of Data Mining Classification Algorithms


I have recently been studying classification algorithms in data mining, and along the way I put together the strengths and weaknesses of each.


Decision Trees

A heuristic algorithm: at each node of the tree, a criterion such as information gain is applied to select a feature, and the tree is constructed recursively from there.


Advantages:

1. Low computational complexity; the model is easy to understand and interpret, since the meaning expressed by the tree can be read off directly;

2. Data preprocessing is relatively simple, and missing data can be handled;

3. Handles numerical and categorical attributes at the same time, and can build trees on data sets with many attributes, whereas other techniques often require attributes of a single type;

4. A white-box model: given an observed instance, the corresponding logical expression is easily derived from the resulting tree;

5. Produces feasible, well-performing classifications on large data sets in a relatively short time.


Disadvantages:

1. For data whose classes contain unequal numbers of samples, the information gain criterion is biased toward attributes with more distinct values;

2. Fairly sensitive to noisy data;

3. Prone to overfitting;

4. Ignores correlations between attributes in the data set.


Example data set it can handle: the Soybean data set

diaporthe-stem-canker,6,0,2,1,0,1,1,1,0,0,1,1,0,2,2,0,0,0,1,1,3,1,1,1,0,0,0,0,4,0,0,0,0,0,0
diaporthe-stem-canker,4,0,2,1,0,2,0,2,1,1,1,1,0,2,2,0,0,0,1,0,3,1,1,1,0,0,0,0,4,0,0,0,0,0,0
diaporthe-stem-canker,3,0,2,1,0,1,0,2,1,2,1,1,0,2,2,0,0,0,1,0,3,0,1,1,0,0,0,0,4,0,0,0,0,0,0
diaporthe-stem-canker,4,0,2,1,0,2,0,2,0,2,1,1,0,2,2,0,0,0,1,0,3,1,1,1,0,0,0,0,4,0,0,0,0,0,0
charcoal-rot,6,0,0,2,0,1,3,1,1,0,1,1,0,2,2,0,0,0,1,0,0,3,0,0,0,2,1,0,4,0,0,0,0,0,0
charcoal-rot,4,0,0,1,1,1,3,1,1,1,1,1,0,2,2,0,0,0,1,1,0,3,0,0,0,2,1,0,4,0,0,0,0,0,0
charcoal-rot,3,0,0,1,0,1,2,1,0,0,1,1,0,2,2,0,0,0,1,0,0,3,0,0,0,2,1,0,4,0,0,0,0,0,0
charcoal-rot,5,0,0,2,1,2,2,1,0,2,1,1,0,2,2,0,0,0,1,0,0,3,0,0,0,2,1,0,4,0,0,0,0,0,0
rhizoctonia-root-rot,1,1,2,0,0,2,1,2,0,2,1,0,0,2,2,0,0,0,1,0,1,1,0,1,1,0,0,3,4,0,0,0,0,0,0
rhizoctonia-root-rot,1,1,2,0,0,1,1,2,0,1,1,0,0,2,2,0,0,0,1,0,1,1,0,1,0,0,0,3,4,0,0,0,0,0,0
rhizoctonia-root-rot,2,1,2,0,0,2,1,1,0,1,1,0,0,2,2,0,0,0,1,0,1,1,0,1,0,0,0,3,4,0,0,0,0,0,0
rhizoctonia-root-rot,2,1,2,0,0,1,1,2,0,2,1,0,0,2,2,0,0,0,1,0,1,1,0,1,0,0,0,3,4,0,0,0,0,0,0
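
Below is a minimal sketch of fitting a decision tree to rows like the Soybean sample above with scikit-learn; the file name soybean.csv, and the assumption that the first column is the class label and the remaining columns are integer-coded attributes, are mine for illustration.

# Minimal decision-tree sketch for Soybean-style rows.
# Assumes a local file "soybean.csv": class label first, then
# integer-coded attributes (an illustrative assumption).
import csv

from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

with open("soybean.csv") as f:
    rows = [r for r in csv.reader(f) if r]

y = [r[0] for r in rows]                     # class label, e.g. "charcoal-rot"
X = [[int(v) for v in r[1:]] for r in rows]  # integer-coded attributes

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(criterion="entropy")  # "entropy" selects features by information gain
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))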


The KNN Algorithm

A lazy classification method: find the k training objects closest to the test object, determine the dominant class among those k neighbors, and assign that class to the test object.


Advantages:

1. Simple and effective; easy to understand and implement;

2. Low retraining cost when the class taxonomy or the training set changes;

3. Computation time and space are linear in the size of the training set;

4. Its error rate converges asymptotically to the Bayes error rate, so it can serve as an approximation to the Bayes-optimal classifier;

5. Well suited to multi-modal and multi-label classification problems;

6. Works well on sample sets whose class regions intersect or overlap heavily.


Disadvantages:

1. As a lazy-learning method it is slower than eager-learning algorithms;

2. Computationally expensive; the sample points need to be edited (pruned) to keep the cost down;

3. Performs poorly on class-imbalanced data sets, which weighted voting can mitigate;

4. The choice of k strongly affects accuracy: a small k is sensitive to noise, so the best k has to be estimated.


Example data set it can handle: the Iris data set


4.6,3.2,1.4,0.2,Iris-setosa
5.3,3.7,1.5,0.2,Iris-setosa
5.0,3.3,1.4,0.2,Iris-setosa
7.0,3.2,4.7,1.4,Iris-versicolor
6.4,3.2,4.5,1.5,Iris-versicolor
6.9,3.1,4.9,1.5,Iris-versicolor
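
Below is a minimal sketch, assuming the Iris rows above are saved locally as iris.csv (an illustrative file name); it uses scikit-learn's KNeighborsClassifier and estimates the best k by cross-validation, as point 4 above suggests.

# Minimal KNN sketch for Iris-style rows: four numeric features, then a label.
import csv

from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

with open("iris.csv") as f:
    rows = [r for r in csv.reader(f) if r]

X = [[float(v) for v in r[:4]] for r in rows]
y = [r[4] for r in rows]

# Estimate the best k by cross-validated accuracy.
scores = {k: cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y, cv=5).mean()
          for k in range(1, 16, 2)}
best_k = max(scores, key=scores.get)
print("best k:", best_k, "accuracy:", scores[best_k])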


The Naive Bayes Algorithm

The Bayesian classifier works from the prior probability of each class: using Bayes' theorem together with the conditional-independence assumption, it computes the posterior probability of the object, i.e., the probability that the object belongs to each class, and assigns the class with the largest posterior. Concretely, P(Y|X) is proportional to P(Y) multiplied by the product of the per-attribute likelihoods P(Xi|Y), and the predicted class is the Y that maximizes this product.


Advantages:

1. Solid mathematical foundation, stable classification performance, and easy to interpret;

2. Very few parameters to estimate; not very sensitive to missing data;

3. No complex iterative solver is needed, so it suits very large data sets.

(Reason: data sets usually go through a feature-selection step first, which improves the independence between attributes; moreover, naive Bayes can produce fairly complex nonlinear decision surfaces.)


Disadvantages:

1. The independence assumption between attributes often fails in practice (one remedy is to first group highly correlated attributes with a clustering algorithm);

2. The prior probabilities must be known, and the classification decision carries an inherent error rate.


Example data set it can handle: the Breast Cancer data set


858477,B,8.618,11.79,54.34,224.5,0.09752,0.05272,0.02061,0.007799,0.1683,0.07187,0.1559,0.5796,1.046,8.322,0.01011,0.01055,0.01981,0.005742,0.0209,0.002788,9.507,15.4,59.9,274.9,0.1733,0.1239,0.1168,0.04419,0.322,0.09026
858970,B,10.17,14.88,64.55,311.9,0.1134,0.08061,0.01084,0.0129,0.2743,0.0696,0.5158,1.441,3.312,34.62,0.007514,0.01099,0.007665,0.008193,0.04183,0.005953,11.02,17.45,69.86,368.6,0.1275,0.09866,0.02168,0.02579,0.3557,0.0802
858981,B,8.598,20.98,54.66,221.8,0.1243,0.08963,0.03,0.009259,0.1828,0.06757,0.3582,2.067,2.493,18.39,0.01193,0.03162,0.03,0.009259,0.03357,0.003048,9.565,27.04,62.06,273.9,0.1639,0.1698,0.09001,0.02778,0.2972,0.07712
858986,M,14.25,22.15,96.42,645.7,0.1049,0.2008,0.2135,0.08653,0.1949,0.07292,0.7036,1.268,5.373,60.78,0.009407,0.07056,0.06899,0.01848,0.017,0.006113,17.67,29.51,119.1,959.5,0.164,0.6247,0.6922,0.1785,0.2844,0.1132
859196,B,9.173,13.86,59.2,260.9,0.07721,0.08751,0.05988,0.0218,0.2341,0.06963,0.4098,2.265,2.608,23.52,0.008738,0.03938,0.04312,0.0156,0.04192,0.005822,10.01,19.23,65.59,310.1,0.09836,0.1678,0.1397,0.05087,0.3282,0.0849
85922302,M,12.68,23.84,82.69,499,0.1122,0.1262,0.1128,0.06873,0.1905,0.0659,0.4255,1.178,2.927,36.46,0.007781,0.02648,0.02973,0.0129,0.01635,0.003601,17.09,33.47,111.8,888.3,0.1851,0.4061,0.4024,0.1716,0.3383,0.1031
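
Below is a minimal sketch for the Breast Cancer rows above (an ID, a B/M diagnosis, then 30 real-valued features); the file name wdbc.csv is an illustrative assumption, and GaussianNB is just one of the naive Bayes variants scikit-learn offers.

# Minimal Gaussian naive Bayes sketch for WDBC-style rows:
# column 0 is an ID (dropped), column 1 is the label, the rest are features.
import csv

from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

with open("wdbc.csv") as f:
    rows = [r for r in csv.reader(f) if r]

y = [r[1] for r in rows]                       # "B" (benign) or "M" (malignant)
X = [[float(v) for v in r[2:]] for r in rows]  # 30 real-valued features

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
nb = GaussianNB().fit(X_train, y_train)
print("test accuracy:", nb.score(X_test, y_test))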


The SVM Algorithm

For a linearly separable two-class learning task, SVM finds the separating hyperplane with the maximum margin; maximizing the margin guarantees that this hyperplane generalizes best.


Advantages:

1. Can solve machine-learning problems with small sample sizes;

2. Improves generalization performance;

3. Handles high-dimensional problems and avoids the curse of dimensionality;

4. Can solve nonlinear problems;

5. Avoids the architecture-selection and local-minimum problems of neural networks.

How the choice of the parameters C and g affects classification performance:

C is the penalty coefficient: the larger C is, the higher the cross-validation accuracy on the training side, but the more easily the model overfits. g controls how quickly the kernel function decays to 0: the smaller g is, the smaller the kernel coefficient and the faster the function falls off, which likewise pushes cross-validation accuracy up and easily causes overfitting.


Disadvantages:

1. Sensitive to missing data;

2. No universal solution for nonlinear problems; the kernel function must be chosen with care.


Example data set it can handle: the SPECTF Heart data set

1,70,66,66,68,71,69,64,61,68,67,50,53,73,71,73,63,71,73,80,81,82,82,67,71,52,47,67,64,66,67,66,75,58,62,65,65,71,67,70,71,67,64,52
1,73,76,68,74,56,59,73,76,54,48,75,78,47,53,25,19,60,56,56,54,80,79,47,53,19,14,58,50,67,71,63,54,49,48,66,65,62,58,57,72,31,30,15
1,68,76,79,78,63,73,68,78,64,71,73,77,67,71,58,57,61,63,52,64,64,74,53,72,36,44,52,54,49,56,73,81,65,80,53,60,63,70,58,64,52,57,49
1,68,64,65,68,63,64,77,73,75,72,80,77,70,71,61,61,73,68,63,62,76,73,69,69,48,59,62,44,66,59,75,74,64,64,63,61,70,69,74,67,51,48,45
0,62,67,64,70,59,58,67,74,60,66,68,68,73,71,60,63,64,74,64,65,74,77,69,73,59,58,58,67,65,69,78,76,61,62,64,67,72,74,71,71,71,69,66
0,62,67,68,70,65,70,73,77,69,70,69,73,71,74,71,71,76,75,66,67,73,73,70,74,63,67,58,68,66,69,78,79,69,70,71,73,72,71,73,77,72,76,64
0,59,68,69,67,69,59,78,73,66,65,77,73,74,66,66,55,71,66,69,68,75,73,80,79,69,65,69,66,68,65,75,71,59,61,65,64,73,71,81,75,74,65,69
0,75,75,70,77,67,75,75,75,67,66,74,73,68,72,64,70,76,70,67,63,74,75,72,68,69,68,75,69,71,74,75,76,63,70,71,69,66,63,70,73,66,68,58
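
Below is a minimal sketch, assuming the SPECTF rows above sit in spectf.csv (label first, continuous features after; the file name is illustrative); it searches over C and gamma with scikit-learn's GridSearchCV to act out the C/g trade-off discussed above.

# Minimal SVM sketch for SPECTF-style rows: label first, features after.
import csv

from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

with open("spectf.csv") as f:
    rows = [r for r in csv.reader(f) if r]

y = [int(r[0]) for r in rows]
X = [[float(v) for v in r[1:]] for r in rows]

# Search C (penalty) and gamma (kernel decay rate) jointly, since both
# control the fit/overfit trade-off described in the text.
grid = GridSearchCV(SVC(kernel="rbf"),
                    {"C": [0.1, 1, 10, 100], "gamma": [1e-4, 1e-3, 1e-2, 1e-1]},
                    cv=5)
grid.fit(X, y)
print("best parameters:", grid.best_params_)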


The AdaBoost Algorithm

Boosting starts from a weak learning algorithm and learns repeatedly, obtaining a sequence of weak classifiers (base classifiers), which are then combined into a single strong classifier. Most boosting methods do this by changing the probability distribution over the training data (the weight distribution of the training samples) and calling the weak learning algorithm on each reweighted distribution in turn to learn the sequence of weak classifiers.

Advantages:

1. High classification accuracy;

2. Sub-classifiers can be built with all kinds of methods; what AdaBoost provides is the framework;

3. Simple, and no feature selection is needed;

4. Relatively resistant to overfitting.


Disadvantages:

1. Samples that keep being misclassified are reweighted again and again, so their weights grow too large, distorting the choice of subsequent classifiers and causing degradation (the weight-update rule needs to be improved);

2. Class imbalance leads to a sharp drop in classification accuracy;

3. Training is time-consuming, and the algorithm is hard to extend;

4. It can still overfit and is not especially robust.
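
Below is a minimal sketch of the boosting idea with scikit-learn's AdaBoostClassifier, using depth-1 decision trees (stumps) as the weak base classifiers; the synthetic data and all parameter values are illustrative.

# Minimal AdaBoost sketch: decision stumps as the weak classifiers,
# reweighted and combined into one strong classifier.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The keyword is "estimator" in recent scikit-learn (formerly "base_estimator").
ada = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=1),
                         n_estimators=100)
ada.fit(X_train, y_train)
print("test accuracy:", ada.score(X_test, y_test))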


The Logistic Regression Algorithm

The binomial logistic regression model is a classification model given by the conditional probability distribution P(Y|X), which takes the form of a parameterized logistic distribution. Here the random variable X is real-valued and Y takes the value 1 or 0; concretely, P(Y=1|x) = exp(w·x + b) / (1 + exp(w·x + b)). The model parameters can be estimated by supervised learning.


Advantages:

1. Computationally cheap; easy to understand and implement;

2. Works with both numerical and categorical data.


Disadvantages:

1. Prone to underfitting;

2. Classification accuracy may not be high.
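
Below is a minimal sketch of fitting the model above by (regularized) maximum likelihood with scikit-learn's LogisticRegression; the synthetic data is illustrative.

# Minimal logistic regression sketch: fit P(Y=1|x) = sigmoid(w.x + b).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=5, random_state=0)
lr = LogisticRegression().fit(X, y)
print("w:", lr.coef_[0], "b:", lr.intercept_[0])
print("P(Y=1|x) for the first sample:", lr.predict_proba(X[:1])[0, 1])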


Artificial Neural Networks

Advantages:

1. High classification accuracy, strong parallel distributed processing, and strong distributed storage and learning ability;

2. Robust and fault-tolerant in the presence of noisy data, able to approximate complex nonlinear relationships closely, and equipped with associative-memory capabilities.


Disadvantages:

1. Requires a large number of parameters, such as the network topology and the initial values of the weights and thresholds;

2. The intermediate learning process cannot be observed, and the outputs are hard to interpret, which undermines the credibility and acceptability of the results;

3. Training can take too long and may even fail to reach the learning objective.
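
Below is a minimal sketch with scikit-learn's MLPClassifier; the hidden-layer topology and iteration cap are illustrative choices of exactly the kind of parameters criticized above.

# Minimal multilayer-perceptron sketch; the topology and iteration
# budget are hand-picked parameters, as the text notes.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(32, 16),  # network topology
                    max_iter=500,                 # cap on training time
                    random_state=0)
mlp.fit(X, y)
print("training accuracy:", mlp.score(X, y))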


Genetic Algorithms

Advantages:

1. Fast, randomized search capability that is independent of the problem domain;

2. Search starts from a population, giving it latent parallelism: many individuals can be compared at once, and robustness is good;

3. Search is guided by an evaluation (fitness) function, so the process is simple;

4. Iteration uses probabilistic mechanisms and is therefore stochastic;

5. Extensible and easy to combine with other algorithms.


Disadvantages:

1. Implementation is relatively involved: the problem must first be encoded, and once the optimum is found it must be decoded back;

2. The three operators also involve many parameters, such as the crossover rate and mutation rate, whose choice strongly affects solution quality; at present these are mostly chosen by experience. The algorithm cannot exploit feedback information from the search in time, so it is relatively slow, and obtaining a fairly precise solution requires considerable training time;

3. The algorithm depends to some extent on the choice of the initial population; combining it with heuristic algorithms can improve this.
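
Below is a minimal, self-contained sketch of the encode/evolve/decode loop described above, maximizing f(x) = x·sin(x) on [0, 10]; the population size, crossover rate, and mutation rate are all illustrative choices.

# Minimal genetic-algorithm sketch: bitstring encoding, tournament
# selection, one-point crossover, bit-flip mutation.
import math
import random

BITS, POP, GENS = 16, 40, 60
CROSS_RATE, MUT_RATE = 0.8, 0.02

def decode(bits):
    # Decode a bitstring into a real number in [0, 10].
    return int("".join(map(str, bits)), 2) / (2**BITS - 1) * 10

def fitness(bits):
    x = decode(bits)
    return x * math.sin(x)  # the function being maximized

def tournament(pop):
    # Pick the fittest of three random individuals.
    return max(random.sample(pop, 3), key=fitness)

random.seed(0)
pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]
for _ in range(GENS):
    nxt = []
    while len(nxt) < POP:
        a, b = tournament(pop)[:], tournament(pop)[:]
        if random.random() < CROSS_RATE:   # one-point crossover
            cut = random.randrange(1, BITS)
            a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
        for child in (a, b):               # bit-flip mutation
            for i in range(BITS):
                if random.random() < MUT_RATE:
                    child[i] ^= 1
        nxt += [a, b]
    pop = nxt[:POP]

best = max(pop, key=fitness)
print("best x:", decode(best), "f(x):", fitness(best))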



If you spot any mistakes, please don't hesitate to point them out!


