Search resource list
classifiers
- Classification algorithms for data mining, implemented in Java.
Sentiment-Symposium-Tutorial--Classifiers
- Source code in Java for a naive Bayes classifier.
weka_code_analysis--ID3
- Aimed at users doing machine-learning research who want to understand the inner workings of induction algorithms and how to make full use of Weka's class hierarchy for their own needs. weka.classifiers.trees.Id3 is discussed as an example; it implements the ID3 decision-tree learner of Section 4.3. (Weka code analysis)
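The heart of the ID3 learner discussed above is the entropy/information-gain criterion used to pick the split attribute. A minimal sketch of that computation, with class and method names of our own (not Weka's):

```java
public class Id3Sketch {

    // Shannon entropy (in bits) of a class distribution given as counts.
    public static double entropy(int[] counts) {
        int total = 0;
        for (int c : counts) total += c;
        double h = 0.0;
        for (int c : counts) {
            if (c == 0) continue;
            double p = (double) c / total;
            h -= p * (Math.log(p) / Math.log(2));
        }
        return h;
    }

    // Information gain of a split: parent entropy minus the
    // count-weighted entropy of the child nodes.
    public static double infoGain(int[] parent, int[][] children) {
        int total = 0;
        for (int c : parent) total += c;
        double remainder = 0.0;
        for (int[] child : children) {
            int n = 0;
            for (int c : child) n += c;
            remainder += ((double) n / total) * entropy(child);
        }
        return entropy(parent) - remainder;
    }

    public static void main(String[] args) {
        // A 50/50 class split carries one full bit of entropy.
        System.out.println(entropy(new int[]{5, 5}));      // prints 1.0
        // A split that separates the classes perfectly gains that full bit.
        System.out.println(infoGain(new int[]{5, 5},
                new int[][]{{5, 0}, {0, 5}}));             // prints 1.0
    }
}
```

ID3 evaluates `infoGain` for every candidate attribute at a node and branches on the one with the highest gain.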
sspace-1.5-SNAPSHOT.tar
- All text classifiers.
jBNC
- jBNC is a Java toolkit for training, testing, and applying Bayesian Network Classifiers. The implemented classifiers have been shown to perform well in a variety of artificial intelligence, machine learning, and data mining applications.
Algorithms-of-the-intelligent-web
- English original of <Algorithms of the Intelligent Web>. Describes techniques for adding intelligence to web applications, covering five important classes of algorithms: search, recommendation, clustering, classification, and classifier fusion.
TriTrain
- Source code (Java version) of Tri-Training, proposed by Zhou Zhihua: a semi-supervised learning method that uses three classifiers.
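The core labelling rule in Tri-Training is: when two of the three classifiers agree on an unlabelled example, their agreed label becomes a pseudo-label for retraining the third. A hypothetical sketch of just that rule (classifiers are stood in for by precomputed prediction arrays; the real package retrains after each round):

```java
public class TriTrainingRule {

    // Pseudo-labels classifier `target` would receive from its two peers:
    // -1 where the peers disagree, so no label is added for that example.
    public static int[] pseudoLabels(int[][] preds, int target) {
        int a = (target + 1) % 3, b = (target + 2) % 3;
        int n = preds[0].length;
        int[] out = new int[n];
        for (int k = 0; k < n; k++) {
            out[k] = (preds[a][k] == preds[b][k]) ? preds[a][k] : -1;
        }
        return out;
    }

    public static void main(String[] args) {
        int[][] preds = {
            {1, 0, 1},   // classifier 0 on three unlabelled examples
            {1, 1, 0},   // classifier 1
            {1, 0, 0},   // classifier 2
        };
        // Classifiers 1 and 2 agree on examples 0 and 2 only,
        // so classifier 0 gets pseudo-labels 1, -1 (none), 0.
        int[] p = pseudoLabels(preds, 0);
        System.out.println(p[0] + " " + p[1] + " " + p[2]); // prints 1 -1 0
    }
}
```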
AdaBoost
- An AdaBoost classifier built on decision trees: multiple weak classifiers are trained and combined to form a strong classifier.
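The weak-to-strong combination described above happens one boosting round at a time: compute the weak learner's weighted error, derive its vote weight alpha, and re-weight the examples. A hypothetical sketch of a single round (labels and predictions are +/-1; names are illustrative, not from the package):

```java
public class AdaBoostRound {

    // alpha = 0.5 * ln((1 - eps) / eps): the weak classifier's vote weight.
    public static double alpha(double eps) {
        return 0.5 * Math.log((1.0 - eps) / eps);
    }

    // One boosting round: returns the new, normalised example weights.
    public static double[] reweight(double[] w, int[] y, int[] pred) {
        double eps = 0.0;
        for (int i = 0; i < w.length; i++) {
            if (y[i] != pred[i]) eps += w[i];   // weighted error
        }
        double a = alpha(eps);
        double[] out = new double[w.length];
        double z = 0.0;
        for (int i = 0; i < w.length; i++) {
            // misclassified examples are up-weighted, correct ones down-weighted
            out[i] = w[i] * Math.exp(-a * y[i] * pred[i]);
            z += out[i];
        }
        for (int i = 0; i < w.length; i++) out[i] /= z;  // normalise
        return out;
    }

    public static void main(String[] args) {
        double[] w = {0.25, 0.25, 0.25, 0.25};
        int[] y    = {+1, +1, -1, -1};
        int[] pred = {+1, +1, -1, +1};   // one mistake, so eps = 0.25
        double[] w2 = reweight(w, y, pred);
        // The single misclassified example now carries half the total weight,
        // forcing the next weak learner to concentrate on it.
        System.out.println(w2[3]);
    }
}
```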
POSTag
- This software recognizes isolated digits 0-9. Templates for the individual digits are stored in the folder "wav". These templates can be changed or extended, which is recommended especially in case the user is about…
classifiers
- A naive Bayes classifier based on Bayesian theory, for use in the Weka and Eclipse environments.
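For reference, the naive Bayes idea behind packages like the one above is small enough to sketch in full: estimate class priors and per-feature likelihoods with Laplace smoothing, then predict the class maximising the log-posterior. A hypothetical minimal version over binary features (not the Weka-based code itself):

```java
public class NaiveBayesSketch {
    private final double[] prior = new double[2];   // P(class)
    private final double[][][] like;                // P(feature = v | class)

    public NaiveBayesSketch(int[][] x, int[] y) {
        int nFeat = x[0].length;
        like = new double[2][nFeat][2];
        int[] classCount = new int[2];
        for (int c : y) classCount[c]++;
        for (int c = 0; c < 2; c++)
            prior[c] = (double) classCount[c] / y.length;
        for (int f = 0; f < nFeat; f++) {
            for (int c = 0; c < 2; c++) {
                int[] valCount = new int[2];
                for (int i = 0; i < y.length; i++)
                    if (y[i] == c) valCount[x[i][f]]++;
                for (int v = 0; v < 2; v++)   // Laplace (+1) smoothing
                    like[c][f][v] = (valCount[v] + 1.0) / (classCount[c] + 2.0);
            }
        }
    }

    // argmax over c of: log P(c) + sum over f of log P(x_f | c)
    public int predict(int[] x) {
        double best = Double.NEGATIVE_INFINITY;
        int arg = 0;
        for (int c = 0; c < 2; c++) {
            double s = Math.log(prior[c]);
            for (int f = 0; f < x.length; f++)
                s += Math.log(like[c][f][x[f]]);
            if (s > best) { best = s; arg = c; }
        }
        return arg;
    }

    public static void main(String[] args) {
        int[][] x = {{1, 1}, {1, 0}, {0, 1}, {0, 0}};
        int[] y   = { 1,      1,      0,      0    }; // class follows feature 0
        NaiveBayesSketch nb = new NaiveBayesSketch(x, y);
        System.out.println(nb.predict(new int[]{1, 1})); // prints 1
        System.out.println(nb.predict(new int[]{0, 0})); // prints 0
    }
}
```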
dlib-18.14.tar
- Machine-learning library. Its focus is on supervised classification with several classifiers available: SVMs (based on libsvm), k-NN, random forests, and decision trees; it can operate on arbitrary data. It also performs feature selection…
RandomForest
- Random forest is a classification/regression method built from many trees. The main idea comes from the Bagging algorithm: given a weak learner and a training set, the learner is trained for several rounds, where each round's training set is drawn at random with replacement from the original training set and is usually about the same size. The weak classifiers trained this way are then combined: majority vote is generally used for classification, simple averaging for regression. On top of bagging, random forest makes each weak classifier a decision tree and, while growing the tree, randomizes attribute selection: attributes are drawn with a certain probability, and the best attribute and split point are chosen among them; the traditional approach…
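The recipe in that description can be sketched end to end: bootstrap-sample the training set, fit one weak tree per sample with a randomly chosen attribute, and classify by majority vote. A hypothetical sketch (one-level decision stumps stand in for full trees to keep it short; all names are our own):

```java
import java.util.Random;

public class RandomForestSketch {
    private final int[] feature;   // feature tested by each stump
    private final int[] polarity;  // class predicted when that feature is 1

    public RandomForestSketch(int[][] x, int[] y, int nTrees, long seed) {
        Random rnd = new Random(seed);
        feature = new int[nTrees];
        polarity = new int[nTrees];
        int n = x.length, nFeat = x[0].length;
        for (int t = 0; t < nTrees; t++) {
            // bagging: a bootstrap sample of n draws with replacement
            int[] idx = new int[n];
            for (int i = 0; i < n; i++) idx[i] = rnd.nextInt(n);
            // random attribute selection: this stump tests one random feature
            int f = rnd.nextInt(nFeat);
            // pick the polarity that best fits the bootstrap sample
            int agree = 0;
            for (int i : idx) if (x[i][f] == y[i]) agree++;
            feature[t] = f;
            polarity[t] = (agree * 2 >= n) ? 1 : 0;
        }
    }

    // classification: majority vote over all stumps
    public int predict(int[] x) {
        int votes = 0;
        for (int t = 0; t < feature.length; t++) {
            votes += (x[feature[t]] == 1) ? polarity[t] : 1 - polarity[t];
        }
        return (votes * 2 >= feature.length) ? 1 : 0;
    }

    public static void main(String[] args) {
        // Both features equal the class here, so every stump is correct and
        // the vote is deterministic regardless of the bootstrap draws.
        int[][] x = {{1, 1}, {1, 1}, {0, 0}, {0, 0}};
        int[] y   = { 1,      1,      0,      0    };
        RandomForestSketch rf = new RandomForestSketch(x, y, 25, 42L);
        System.out.println(rf.predict(new int[]{1, 1})); // prints 1
        System.out.println(rf.predict(new int[]{0, 0})); // prints 0
    }
}
```

A real random forest would grow full trees and draw a fresh attribute subset at every node rather than one per tree; the sampling-and-voting skeleton is the same.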