Search results: resource list
AdaBoost
- Adaboost is an iterative algorithm. Its core idea is to train different classifiers (weak classifiers) on the same training set and then combine these weak classifiers into a stronger final classifier (a strong classifier).
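A minimal sketch of that idea in Python with scikit-learn (my own example, not this package's code): the default weak learner of `AdaBoostClassifier` is a depth-1 decision stump, and boosting combines 50 of them into a stronger classifier on a synthetic dataset.

```python
# Combine many weak classifiers (decision stumps) into a strong one with AdaBoost.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# AdaBoost reweights the training set after every round; the default base
# estimator is a depth-1 decision tree (a "stump"), i.e. a weak classifier.
clf = AdaBoostClassifier(n_estimators=50)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```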
MachineLearning
- Hands-on machine learning training exercises that help you get into a learning groove quickly.
KNNDemo
- A Java implementation of the KNN algorithm with a console interface; both a labelled training sample set and test samples are included.
perception
- R code for machine learning covering simple classification and perceptron classification.
PCA
- Source code for a MATLAB-based neural network training algorithm; hopefully useful to everyone.
mvrvm_c
- A relevance vector machine (RVM) for prediction and classification, with normalization, training, and testing; supports multivariate output and handles the Hessian matrix computation.
PCA-AND-PNN
- Applies principal component analysis to reduce the dimensionality of the data, then uses the reduced data to train a probabilistic neural network for pattern recognition. For a new set of data, first compute its principal component scores, then feed them into the trained probabilistic neural network to obtain the recognition result, i.e. which class the data belongs to.
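A rough Python sketch of the pipeline described here (my own names and data, not the uploaded MATLAB code): PCA is fitted on the training data, and the "PNN" is a basic Parzen-window classifier that scores each class by averaging Gaussian kernels centred on its training points in PCA score space.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pca = PCA(n_components=2).fit(X_tr)                     # fit PCA on training data only
Z_tr, Z_te = pca.transform(X_tr), pca.transform(X_te)   # principal-component scores

def pnn_predict(z, Z, labels, sigma=0.3):
    # Gaussian kernel responses of one query point against all training points.
    k = np.exp(-np.sum((Z - z) ** 2, axis=1) / (2 * sigma ** 2))
    # Average the responses class by class and return the highest-scoring class.
    return max(np.unique(labels), key=lambda c: k[labels == c].mean())

pred = np.array([pnn_predict(z, Z_tr, y_tr) for z in Z_te])
print("accuracy:", (pred == y_te).mean())
```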
NaiveBayes-master
- Classifies, trains on, and learns from text data, implemented with the Naive Bayes algorithm.
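A minimal Python stand-in for the described workflow (not the repository's implementation): bag-of-words features plus a multinomial Naive Bayes classifier on a few toy texts.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train_texts = ["cheap pills buy now", "meeting at noon", "win money now", "lunch tomorrow?"]
train_labels = ["spam", "ham", "spam", "ham"]

vec = CountVectorizer()
X_train = vec.fit_transform(train_texts)      # learn the vocabulary on training data only

clf = MultinomialNB().fit(X_train, train_labels)
print(clf.predict(vec.transform(["buy cheap pills"])))   # -> ['spam']
```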
DeepLearning-master
- The concept of deep learning originates from research on artificial neural networks; a multilayer perceptron with several hidden layers is one kind of deep learning architecture. Deep learning combines low-level features to form more abstract high-level representations of attribute categories or features, in order to discover distributed feature representations of the data. The concept was proposed by Hinton et al. in 2006. Based on the deep belief network (DBN), they proposed an unsupervised greedy layer-wise training algorithm, which offered hope for solving the optimization problems of deep architectures, and was followed by deep architectures built from stacked autoencoders. In addition, the convolutional neural network proposed by LeCun et al. was the first truly multi-layer structured learning algorithm; it exploits spatial relationships to reduce the number of parameters and improve training performance.
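As a small illustration of the parameter-reduction point about convolutional networks (my own arithmetic, not from the repository): a fully connected layer over a 32x32 image needs vastly more weights than a convolutional layer with shared 3x3 filters.

```python
# Parameter count of a fully connected layer vs. a convolutional layer
# on the same 32x32 single-channel image.
h, w = 32, 32
dense_params = (h * w) * (h * w)       # every input pixel connects to every output unit
conv_params = 3 * 3 * 1 * 16 + 16      # 16 filters of size 3x3 (+ biases), shared over the image
print(dense_params, conv_params)       # 1048576 vs 160
```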
multiverso-master
- Multiverso is a parameter-server-based framework for training machine learning models on big data across a number of machines. It is currently a standard C++ library and provides a series of friendly, easy-to-use programming interfaces.
KNN
- A complete project for learning the KNN algorithm, written in C# with VS2010, including training data and test data.
KNN
- A KNN nearest-neighbour classification program, including training data and test data.
bayes
- R code that first splits the data into a training set and a test set, builds a Bayesian network model from the training set, and finally uses the fitted model for prediction or classification.
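A Python sketch of the same split / fit / predict workflow (my own example; a Gaussian Naive Bayes model stands in for the Bayesian network built by the R code):

```python
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_wine(return_X_y=True)
# 1) split the data into a training set and a test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)
# 2) build the model on the training set
model = GaussianNB().fit(X_train, y_train)
# 3) use the fitted model to classify the held-out data
print("test accuracy:", model.score(X_test, y_test))
```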
BayesClassify
- Requirements: 1) the user-written function fileOpen to open files (also uploaded); 2) the function strsplit1 (also uploaded); 3) training data (also uploaded).
NLPLibSVM
- A Java version of a libsvm word-segmentation training set, including libsvm.jar and the training set samples.
plot_cv_predict
- An illustration of isotonic regression on generated data. Isotonic regression finds a non-decreasing approximation of a function while minimizing the mean squared error on the training data.
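A minimal Python sketch in the spirit of that scikit-learn example (synthetic data, my own code): fit a non-decreasing function that minimizes squared error on the training points.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.RandomState(0)
x = np.arange(50)
y = 0.2 * x + rng.normal(size=50)     # noisy but increasing trend

iso = IsotonicRegression()            # constrains the fitted values to be non-decreasing
y_fit = iso.fit_transform(x, y)
print(y_fit[:5])
```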
adaboost
- Trains on anomalous data with the adaboost algorithm for data mining; written in MATLAB.
k_nn
- The idea of kNN: compute the distance between the data point to be classified and every sample point in the training set, take the k nearest samples, count the class labels among these k samples, and assign the most frequent class to the query sample by majority vote. The distance metric can be Euclidean distance, Manhattan distance, or cosine distance.
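A direct Python sketch of this procedure (Euclidean distance, majority vote among the k nearest training points; toy data of my own):

```python
import numpy as np
from collections import Counter

def knn_predict(x, X_train, y_train, k=3):
    d = np.linalg.norm(X_train - x, axis=1)          # distance to every training sample
    nearest = y_train[np.argsort(d)[:k]]             # labels of the k closest samples
    return Counter(nearest).most_common(1)[0][0]     # majority vote

X_train = np.array([[0, 0], [0, 1], [5, 5], [6, 5]])
y_train = np.array([0, 0, 1, 1])
print(knn_predict(np.array([5, 4]), X_train, y_train))   # -> 1
```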
code
- (Neural network) Trains a multilayer perceptron with several hidden layers on training data to obtain the network, then uses test data to measure the designed perceptron's average recognition accuracy.
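A Python sketch of the same idea with scikit-learn's `MLPClassifier` (my own example, not the uploaded code): two hidden layers, trained on training data and scored on held-out test data.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Multilayer perceptron with two hidden layers of 64 and 32 units.
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
mlp.fit(X_tr, y_tr)
print("average recognition accuracy on test data:", mlp.score(X_te, y_te))
```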
code_BPMF
- How to make it work:
  1. Create a separate directory and download all of these files into that same directory.
  2. Download the 7 files:
     * demo: the main demo file for PMF and Bayesian PMF
     * PMF.m: trains the PMF model
     * bayespmf.m: implements the Bayesian PMF model with a Gibbs sampler
     * moviedata.mat: sample data containing (user_id, movie_id, rating) triplets
     * makematrix.m: helper function that converts the triplets into a large matrix
     * PRED.m: helper function that makes predictions on the validation set
  3. In Matlab, simply run …
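A rough numpy sketch of the PMF objective that PMF.m optimizes (my own code, not the repository's): factor the rating matrix as U·Vᵀ by stochastic gradient descent over the observed (user_id, movie_id, rating) triplets, with L2 regularization.

```python
import numpy as np

rng = np.random.RandomState(0)
n_users, n_movies, rank = 20, 15, 5
# Synthetic (user_id, movie_id, rating) triplets, standing in for moviedata.mat.
triplets = [(rng.randint(n_users), rng.randint(n_movies), rng.randint(1, 6)) for _ in range(200)]

U = 0.1 * rng.randn(n_users, rank)     # user latent factors
V = 0.1 * rng.randn(n_movies, rank)    # movie latent factors
lr, lam = 0.01, 0.05                   # learning rate and L2 regularization strength

for epoch in range(100):
    for u, m, r in triplets:
        err = r - U[u] @ V[m]                       # prediction error for one observed rating
        U[u] += lr * (err * V[m] - lam * U[u])      # gradient step on the user factors
        V[m] += lr * (err * U[u] - lam * V[m])      # gradient step on the movie factors

print("training RMSE:", np.sqrt(np.mean([(r - U[u] @ V[m]) ** 2 for u, m, r in triplets])))
```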