Search Resource List
N5
- Training data contained in a dataset.
kwiener
- The following code implements a kernel Wiener filter algorithm in MATLAB. The algorithm depends on an eigenvalue decomposition, so only training datasets of up to a few thousand samples are practical so far.
winerfilter
- The following code implements a kernel Wiener filter algorithm in MATLAB. The algorithm depends on an eigenvalue decomposition, so only training datasets of up to a few thousand samples are practical so far; a sketch of the limiting step follows below.
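A minimal sketch (not the packaged code) of why the sample count is limited: the kernel Gram matrix is n x n, so its eigendecomposition scales cubically with the number of training samples. The Gaussian kernel, sizes, and variable names below are assumptions for illustration only.

```matlab
% Build a Gaussian-kernel Gram matrix and take its eigendecomposition,
% the step that restricts kernel Wiener filtering to a few thousand samples.
n = 500; d = 8; sigma = 1.0;                        % hypothetical sizes
X = randn(n, d);                                    % placeholder training inputs
D2 = sum(X.^2, 2) + sum(X.^2, 2)' - 2*(X*X');       % pairwise squared distances
K = exp(-D2 / (2*sigma^2));                         % Gram matrix, n x n
[V, L] = eig((K + K')/2);                           % eigenvectors / eigenvalues
lambda = diag(L);                                   % memory grows as O(n^2), time as O(n^3)
```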
neuralnetworkforirisdataset
- Training and testing a neural network on the iris dataset (a minimal sketch follows below).
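A minimal sketch, not the uploaded code, assuming MATLAB's built-in Fisher iris data and the Deep Learning Toolbox `patternnet`:

```matlab
% Train and test a small feed-forward classifier on the iris data.
load fisheriris                          % meas (150x4 features), species (labels)
X = meas';                               % patternnet expects samples as columns
T = full(ind2vec(grp2idx(species)'));    % one-hot targets, 3 classes
net = patternnet(10);                    % one hidden layer with 10 units
net = train(net, X, T);                  % default random train/val/test split
Y = net(X);
acc = mean(vec2ind(Y) == vec2ind(T));    % overall accuracy on all samples
```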
20064817924orl_faces_112x92
- ORL face image database: 40 subjects with 10 images each. The first 5 images of each subject are used as training samples and the last 5 as test samples, and the correct classification rate is reported. The classification rule is nearest neighbor. The actual image size is 112x92, and each image, stacked as a column vector, corresponds to one column of the face matrix. A sketch of the protocol follows below.
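A minimal sketch of the evaluation protocol described above, assuming a 10304 x 400 matrix `faces` (hypothetical variable name) whose columns are stacked 112x92 images ordered 10 per subject:

```matlab
% First 5 images of each subject train, last 5 test, 1-NN classification.
nSubj = 40; perSubj = 10; nTrainPer = 5;
labels = repelem(1:nSubj, perSubj);                 % subject id per column
trainIdx = mod(0:nSubj*perSubj-1, perSubj) < nTrainPer;
Xtr = faces(:, trainIdx);  ytr = labels(trainIdx);
Xte = faces(:, ~trainIdx); yte = labels(~trainIdx);
pred = zeros(size(yte));
for i = 1:numel(yte)                                % 1-NN with Euclidean distance
    d = sum((Xtr - Xte(:, i)).^2, 1);
    [~, j] = min(d);
    pred(i) = ytr(j);
end
accuracy = mean(pred == yte);                       % correct classification rate
```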
SVM_FACE
- Implementation of a training-set enhancement algorithm for SVM-based face detection. For boundary-based (geometric approach) classifiers such as the support vector machine (SVM), samples near the class boundary usually carry more classification information than other samples. Starting from this idea, and taking face detection as the example problem, the work studies how to enhance the boundary of a given training sample set, and proposes a boundary-sample enhancement algorithm based on the SVM and an improved nonlinear reduced set algorithm, IRS (improved reduced set), used to enlarge the training set and improve …
train-images-idx3-ubyte
- Image data file from the MNIST dataset: the 60000-image training set (a reading sketch follows below).
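A minimal sketch for reading `train-images-idx3-ubyte` in MATLAB, based on the published IDX format (big-endian int32 header followed by one unsigned byte per pixel); not part of the upload itself:

```matlab
% Read the MNIST training-image file into a 28 x 28 x 60000 array.
fid = fopen('train-images-idx3-ubyte', 'rb');
magic = fread(fid, 1, 'int32', 0, 'ieee-be');       % should be 2051
nImg  = fread(fid, 1, 'int32', 0, 'ieee-be');       % 60000 for the training set
nRows = fread(fid, 1, 'int32', 0, 'ieee-be');       % 28
nCols = fread(fid, 1, 'int32', 0, 'ieee-be');       % 28
pixels = fread(fid, nImg*nRows*nCols, 'uint8');
fclose(fid);
images = reshape(pixels, nCols, nRows, nImg);
images = permute(images, [2 1 3]);                  % row-major file to column-major MATLAB
```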
pvm_code
- PVM is a fast supervised learning algorithm that combines dimension reduction and neural network training. The code (including six algorithms, among them KPVM, ELM/SVD, BP/SVD, and BPVM, plus one dataset, "Face") is packaged in a single zip file.
KNN
- 1. Implement the k-nearest-neighbor algorithm yourself instead of using available software. 2. Use K-fold cross-validation to generate training and testing datasets; try different K values (3-8) to see how they affect your results. 3. … (A sketch of items 1-2 follows below.)
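A minimal sketch of items 1-2, not a full submission, assuming `X` (n x d features) and a numeric label vector `y` (n x 1) already exist:

```matlab
% Hand-written k-NN evaluated with K-fold cross-validation.
K = 5; k = 3;                                       % fold count and neighbor count
n = size(X, 1);
folds = mod(randperm(n), K) + 1;                    % random, roughly balanced folds
acc = zeros(K, 1);
for f = 1:K
    te = (folds == f); tr = ~te;
    Xtr = X(tr, :); ytr = y(tr);
    Xte = X(te, :); yte = y(te);
    pred = zeros(nnz(te), 1);
    for i = 1:size(Xte, 1)
        d = sum((Xtr - Xte(i, :)).^2, 2);           % Euclidean distances to training set
        [~, idx] = sort(d);
        pred(i) = mode(ytr(idx(1:k)));              % majority vote of the k nearest labels
    end
    acc(f) = mean(pred == yte);
end
meanAccuracy = mean(acc);
```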
svmTrain
- [model] = SVMTRAIN(X, Y, C, kernelFunction, tol, max_passes) trains an SVM classifier and returns the trained model. X is the matrix of training examples: each row is a training example and the j-th column holds the j-th feature. Y is a column vector of labels. (A hypothetical usage sketch follows below.)
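A hypothetical usage sketch of the interface described above; the linear-kernel handle, parameter values, and the assumption that `X` (m x n features) and `Y` (m x 1 labels) already exist are illustrative, not part of the upload:

```matlab
% Call svmTrain with an anonymous linear kernel on two column vectors.
linearKernel = @(x1, x2) x1' * x2;      % assumed kernel signature
C = 1; tol = 1e-3; max_passes = 20;     % assumed hyperparameters
model = svmTrain(X, Y, C, linearKernel, tol, max_passes);
```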
DeepNeuralNetwork20131115
- It provides deep learning tools for deep belief networks (DBNs) of stacked restricted Boltzmann machines (RBMs). Run testDNN to try it; each function includes a description. Please check it!
深度神经网络
- It provides deep learning tools for deep belief networks (DBNs) of stacked restricted Boltzmann machines (RBMs). Run testDNN to try it; each function includes a description. Please check it!
NB_for_text_classification
- Text classification: a Naive Bayes classifier example using the multi-variate Bernoulli event model. One file is for training and one for testing, using the 20newsgroups dataset. (A minimal model sketch follows below.)
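A minimal sketch of the multi-variate Bernoulli event model, not the uploaded program, assuming a binary document-term matrix `Xtr` (nDoc x nWord, 1 if a word occurs in a document), a class-label vector `ytr`, and one binary test vector `xte` (1 x nWord):

```matlab
% Train a Bernoulli Naive Bayes model with Laplace smoothing and classify one document.
[nDoc, nWord] = size(Xtr);
classes = unique(ytr); C = numel(classes);
phi   = zeros(C, nWord);                            % P(word present | class)
prior = zeros(C, 1);                                % P(class)
for c = 1:C
    idx = (ytr == classes(c));
    prior(c) = mean(idx);
    phi(c, :) = (sum(Xtr(idx, :), 1) + 1) ./ (nnz(idx) + 2);
end
logpost = log(prior') + xte*log(phi') + (1 - xte)*log(1 - phi');
[~, cHat] = max(logpost);                           % most probable class
predicted = classes(cHat);
```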
cross-validation
- Cross-validation is mainly used in modeling applications, for example PCR and PLS regression modeling. From the given modeling samples, most are used to build the model while a small part is held out and predicted with the model just built; the prediction errors of the held-out samples are recorded as sums of squares. The process repeats until every sample has been predicted exactly once and only once. The sum of the squared prediction errors over all samples is called PRESS (Predicted Error Sum of Squares). (A sketch follows below.)
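A minimal sketch of the PRESS computation described above, using plain leave-one-out least squares as a stand-in for the PCR/PLS model; `X` (n x p predictors) and `y` (n x 1 response) are assumed to exist:

```matlab
% Leave one sample out, refit, predict it, and accumulate squared errors (PRESS).
n = size(X, 1);
press = 0;
for i = 1:n
    idx = true(n, 1); idx(i) = false;               % leave sample i out
    b = [ones(n-1, 1) X(idx, :)] \ y(idx);          % fit on the remaining samples
    yhat = [1 X(i, :)] * b;                         % predict the held-out sample
    press = press + (y(i) - yhat)^2;                % each sample predicted exactly once
end
```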
diversity
- In QSAR studies, diversity analysis can be performed on the dataset to make sure that the structures in the training and test sets are representative of the whole dataset.
image-dog
- Image-processing code for dog images from Stanford's deep-learning materials (Stanford Dogs Dataset). For more information about the dataset, please visit the dataset website: http://vision.stanford.edu/aditya86/ImageNetDogs/ If you use this dataset in a publication, …
mlclass-ex3
- Multi-class classification and neural networks, machine-learning related, computed in MATLAB. ex3.m: Octave script that steps you through part 1. ex3_nn.m: Octave script that steps you through part 2. ex3data1.mat: training set of hand-written digits. ex3weights.mat: initial weights for the neural network part.
pos
- One of the world's leading person (human-figure) datasets; the results obtained by training on it are widely recognized. (Person dataset used in classification training.)
charSamples
- Training character library for license plate recognition, including 10 digits and 24 English letters.
GeoDetector_2015_Example(Toy Dataset)D3
- A simple way to handle spatial heterogeneity; the stratification needs to be prepared beforehand. (Toy dataset example for GeoDetector.)