Modifier and Type | Class and Description |
---|---|
class | **DDAG**: Decision Directed Acyclic Graph (DDAG) classifier. |
class | **OneVSAll**: This meta-classifier turns any classifier, in particular binary classifiers, into a multi-class classifier. (A sketch of the reduction follows this table.) |
class | **OneVSOne**: A One-vs-One classifier extends binary decision classifiers into multi-class decision classifiers. |
class | **RegressorToClassifier**: This meta-algorithm wraps a Regressor to perform binary classification. |
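
The reduction OneVSAll performs is easy to sketch. The snippet below is a minimal, self-contained illustration of the one-vs-all idea, not JSAT's actual API: the `BinaryScorer` interface and all other names are hypothetical stand-ins for a binary learner that returns a real-valued score.

```java
// Hypothetical stand-in for a trained binary model: higher score = more "in class".
interface BinaryScorer {
    double score(double[] x);
}

final class OneVsAllSketch {
    private final BinaryScorer[] perClass; // one binary model per class, trained class-vs-rest

    OneVsAllSketch(BinaryScorer[] perClass) {
        this.perClass = perClass;
    }

    /** Predict by querying every class's binary model and taking the highest score. */
    int predict(double[] x) {
        int best = 0;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (int k = 0; k < perClass.length; k++) {
            double s = perClass[k].score(x);
            if (s > bestScore) {
                bestScore = s;
                best = k;
            }
        }
        return best;
    }
}
```
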
Modifier and Type | Class and Description |
---|---|
class | **BestClassDistribution**: A generic class for performing classification by fitting a MultivariateDistribution to each class. |
class | **MultinomialNaiveBayes**: An implementation of the Multinomial Naive Bayes (MNB) model. |
class | **MultivariateNormals**: This classifier can be seen as an extension of NaiveBayes. |
class | **NaiveBayes**: An implementation of the Naive Bayes classifier that assumes numeric features come from some continuous probability distribution. (A scoring sketch follows this table.) |
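
As a sketch of what NaiveBayes does with numeric features: fit one univariate distribution per class and per feature, then classify by the highest log-posterior. The snippet below assumes Gaussian class-conditionals for concreteness (JSAT's class supports other continuous distributions); the names are hypothetical, not JSAT's API.

```java
final class GaussianNBSketch {
    // means[k][j] and vars[k][j]: Gaussian parameters for class k, feature j.
    private final double[][] means, vars;
    private final double[] priors; // P(class = k)

    GaussianNBSketch(double[][] means, double[][] vars, double[] priors) {
        this.means = means;
        this.vars = vars;
        this.priors = priors;
    }

    /** argmax over k of: log P(k) + sum_j log N(x_j; means[k][j], vars[k][j]) */
    int predict(double[] x) {
        int best = 0;
        double bestLp = Double.NEGATIVE_INFINITY;
        for (int k = 0; k < priors.length; k++) {
            double lp = Math.log(priors[k]);
            for (int j = 0; j < x.length; j++) {
                double d = x[j] - means[k][j];
                double v = vars[k][j];
                lp += -0.5 * Math.log(2 * Math.PI * v) - d * d / (2 * v);
            }
            if (lp > bestLp) {
                bestLp = lp;
                best = k;
            }
        }
        return best;
    }
}
```
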
Modifier and Type | Class and Description |
---|---|
class | **AdaBoostM1**: Implementation of "Experiments with a New Boosting Algorithm" by Yoav Freund & Robert E. Schapire. |
class | **AdaBoostM1PL**: An extension of the original AdaBoostM1 algorithm for parallel training. |
class | **ArcX4**: Arc-x4 is an ensemble classifier that re-weights each data point based on the total number of errors that have occurred for that point. |
class | **Bagging**: An implementation of Bootstrap Aggregating, as described by Leo Breiman in "Bagging Predictors". |
class | **EmphasisBoost**: Emphasis Boost is a generalization of the Real AdaBoost algorithm that expands the update term and provides a λ term to control the trade-off. |
class | **LogitBoost**: An implementation of the original two-class LogitBoost algorithm. |
class | **LogitBoostPL**: An extension of the original LogitBoost algorithm for parallel training. |
class | **ModestAdaBoost**: Modest AdaBoost is a generalization of Discrete AdaBoost that attempts to reduce generalization error and avoid over-fitting. |
class | **SAMME**: An implementation of the multi-class AdaBoost method SAMME (Stagewise Additive Modeling using a Multi-class Exponential loss function), presented in "Multi-class AdaBoost" by Ji Zhu, Saharon Rosset, Hui Zou, & Trevor Hastie. This algorithm reduces to AdaBoostM1 for binary classification problems; a weight-update sketch follows this table. |
class | **Wagging**: Wagging is a meta-classifier related to Bagging. |
class | **WaggingNormal**: Wagging using the Normal distribution. |
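
To make the SAMME entry concrete: given a weak learner's weighted error ε on a K-class problem, SAMME assigns it the coefficient α = ln((1−ε)/ε) + ln(K−1) and multiplies the weight of every misclassified point by e^α. With K = 2 the ln(K−1) term vanishes, which is the sense in which it reduces to AdaBoostM1. A minimal sketch of one such step (all names hypothetical, not JSAT's API; assumes 0 < ε < 1):

```java
final class SammeStepSketch {
    /**
     * One SAMME boosting step: compute the learner coefficient from the weighted
     * error, then re-weight and re-normalize the training points in place.
     * Returns alpha, the weak learner's weight in the final vote.
     *
     * @param weights    current normalized sample weights, updated in place
     * @param miss       miss[i] is true if the weak learner got point i wrong
     * @param numClasses K, the number of classes
     */
    static double sammeStep(double[] weights, boolean[] miss, int numClasses) {
        double eps = 0;
        for (int i = 0; i < weights.length; i++)
            if (miss[i]) eps += weights[i];
        // alpha = ln((1-eps)/eps) + ln(K-1); the second term is 0 when K = 2
        double alpha = Math.log((1 - eps) / eps) + Math.log(numClasses - 1.0);
        double sum = 0;
        for (int i = 0; i < weights.length; i++) {
            if (miss[i]) weights[i] *= Math.exp(alpha); // up-weight the mistakes
            sum += weights[i];
        }
        for (int i = 0; i < weights.length; i++)
            weights[i] /= sum; // re-normalize to a distribution
        return alpha;
    }
}
```
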
Modifier and Type | Class and Description |
---|---|
class | **BinaryCalibration**: This abstract class provides the framework for algorithms that perform probability calibration from the outputs of a base learner on binary classification problems. |
class | **IsotonicCalibration**: Isotonic Calibration is non-parametric, assuming only that the mapping from scores to the probability of the positive class is a non-decreasing function. |
class | **PlattCalibration**: Platt Calibration essentially performs logistic regression on a model's output scores against their class labels. (A sketch of the calibrated transform follows this table.) |
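
Once Platt Calibration has fit its two parameters A and B on a model's raw scores, the calibrated probability is the logistic transform P(y = 1 | s) = 1 / (1 + exp(A·s + B)). A minimal sketch of applying that transform (the fitting of A and B, which Platt does with a regularized Newton procedure, is omitted; names are hypothetical):

```java
final class PlattTransformSketch {
    private final double a, b; // learned by logistic regression on (score, label) pairs

    PlattTransformSketch(double a, double b) {
        this.a = a;
        this.b = b;
    }

    /** Map a raw classifier score to a calibrated probability of the positive class. */
    double probability(double score) {
        return 1.0 / (1.0 + Math.exp(a * score + b));
    }
}
```
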
Modifier and Type | Class and Description |
---|---|
class | **DANN**: DANN is an implementation of Discriminant Adaptive Nearest Neighbor. |
class | **LWL**: Locally Weighted Learner (LWL) is the combined, generalized implementation of Locally Weighted Regression (LWR) and Locally Weighted Naive Bayes (LWNB). |
class | **NearestNeighbour**: An implementation of the Nearest Neighbor algorithm, but with a British spelling! How fancy. |
Modifier and Type | Class and Description |
---|---|
class | **AROW**: An implementation of Adaptive Regularization of Weight Vectors (AROW), which uses second-order information to learn a large-margin binary classifier. |
class | **BBR**: An implementation of Bayesian Binary Regression for L1- and L2-regularized logistic regression. |
class | **LinearBatch**: LinearBatch learns either a classification or a regression problem depending on the loss function ℓ(w, x) used. |
class | **LinearL1SCD**: An iterative, single-threaded form of fast Stochastic Coordinate Descent for optimizing L1-regularized linear regression problems. |
class | **LinearSGD**: LinearSGD learns either a classification or a regression problem depending on the loss function ℓ(w, x) used. |
class | **LogisticRegressionDCD**: An implementation of regularized logistic regression using Dual Coordinate Descent. |
class | **NewGLMNET**: NewGLMNET is a batch method for solving Elastic Net regularized logistic regression problems of the form 0.5 (1−α) ‖w‖₂² + α ‖w‖₁ + C ∑ᵢ₌₁ᴺ ℓ(wᵀxᵢ + b, yᵢ). |
class | **NHERD**: Implementation of the Normal Herd (NHERD) algorithm for learning a linear binary classifier. |
class | **PassiveAggressive**: An implementation of the three variants of the Passive Aggressive algorithm for binary classification and regression. (An update sketch follows this table.) |
class | **SCD**: Implementation of Stochastic Coordinate Descent for L1-regularized classification and regression. |
class | **SCW**: An implementation of Confidence-Weighted (CW) learning and Soft Confidence-Weighted (SCW) learning, both binary linear classifiers inspired by PassiveAggressive. |
class | **SMIDAS**: An iterative, single-threaded stochastic solver for L1-regularized linear regression problems, SMIDAS (Stochastic Mirror Descent Algorithm mAde Sparse). |
class | **SPA**: Support-class Passive Aggressive (SPA) is a multi-class generalization of PassiveAggressive. |
class | **STGD**: An implementation of Sparse Truncated Gradient Descent for L1-regularized linear classification and regression on sparse data sets. |
class | **StochasticMultinomialLogisticRegression**: A stochastic implementation of Multinomial Logistic Regression. |
class | **StochasticSTLinearL1**: A base class providing shared functionality and variables used by two different training algorithms for L1-regularized linear models. |
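
Several of the online learners above share the same shape of per-example update. PassiveAggressive's is particularly compact: on example (x, y) with y ∈ {−1, +1}, compute the hinge loss ℓ = max(0, 1 − y·(w·x)) and step w ← w + τ·y·x with τ = ℓ/‖x‖²; the PA-I variant additionally caps τ at an aggressiveness parameter C. A minimal sketch (hypothetical names, not JSAT's API):

```java
final class PassiveAggressiveSketch {
    /** One PA-I update on example (x, y), y in {-1, +1}, with aggressiveness C. */
    static void update(double[] w, double[] x, double y, double C) {
        double dot = 0, norm2 = 0;
        for (int j = 0; j < w.length; j++) {
            dot += w[j] * x[j];
            norm2 += x[j] * x[j];
        }
        double loss = Math.max(0, 1 - y * dot); // hinge loss on the margin
        if (loss == 0 || norm2 == 0)
            return;                             // correct with margin: stay passive
        double tau = Math.min(C, loss / norm2); // PA-I step size, capped at C
        for (int j = 0; j < w.length; j++)
            w[j] += tau * y * x[j];             // aggressive correction
    }
}
```
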
Modifier and Type | Class and Description |
---|---|
class | **ALMA2K**: Provides a kernelized version of the ALMA2 algorithm. |
class | **BOGD**: Bounded Online Gradient Descent (BOGD) is a kernel learning algorithm that uses a bounded number of support vectors. |
class | **CSKLR**: An implementation of Conservative Stochastic Kernel Logistic Regression. |
class | **CSKLRBatch**: A batch-trained implementation of Conservative Stochastic Kernel Logistic Regression. |
class | **DUOL**: Provides an implementation of the Double Update Online Learning (DUOL) algorithm. |
class | **Forgetron**: Implementation of the first two Forgetron algorithms. |
class | **KernelSGD**: Kernel SGD is the kernelized counterpart to LinearSGD, and learns nonlinear functions via the kernel trick. |
class | **OSKL**: Online Sparse Kernel Learning by Sampling and Smooth Losses (OSKL) is an online algorithm for learning sparse kernelized solutions to binary classification problems. |
class | **Projectron**: An implementation of the Projectron and Projectron++ algorithms. |
Modifier and Type | Class and Description |
---|---|
class | **BackPropagationNet**: An implementation of a feed-forward neural network trained by back-propagation. |
class | **DReDNetSimple**: This class provides a neural network based on Geoffrey Hinton's deep rectified dropout nets. |
class | **LVQ**: Learning Vector Quantization (LVQ) is an algorithm that extends SOM to take advantage of label information to perform classification. |
class | **LVQLLC**: LVQ with Locally Learned Classifier (LVQ-LLC) is an adaptation of the LVQ algorithm that I have come up with. |
class | **RBFNet**: A highly configurable implementation of a Radial Basis Function neural network. |
class | **SOM**: An implementation of a Self-Organizing Map, also called a Kohonen Map. |
Modifier and Type | Class and Description |
---|---|
class | **DCD**: Implements Dual Coordinate Descent (DCD) training algorithms for a linear L1 or L2 Support Vector Machine for binary classification and regression. |
class | **DCDs**: Implements Dual Coordinate Descent with shrinking (DCDs) training algorithms for a linear L1 or L2 Support Vector Machine for binary classification and regression. |
class | **DCSVM**: An implementation of the Divide-and-Conquer Support Vector Machine (DC-SVM). |
class | **LSSVM**: The Least Squares Support Vector Machine (LS-SVM) is an alternative to the standard SVM classifier for regression and binary classification problems. |
class | **Pegasos**: Implements the linear-kernel, mini-batch version of the Pegasos SVM classifier. (A single-example update sketch follows this table.) |
class | **PegasosK**: Implements the kernelized version of the Pegasos algorithm for SVMs. |
class | **PlattSMO**: An implementation of SVMs using Platt's Sequential Minimal Optimization (SMO) for both classification and regression problems. |
class | **SBP**: Implementation of the Stochastic Batch Perceptron (SBP) algorithm. |
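
The Pegasos primal step is compact enough to sketch: at step t with regularization λ, use the step size η = 1/(λt), shrink w by (1 − ηλ), and add η·y·x whenever the margin y·(w·x) is below 1. A minimal single-example version follows (hypothetical names, not JSAT's API; JSAT's class is the mini-batch variant, and the optional projection step is omitted):

```java
final class PegasosStepSketch {
    /** One Pegasos step on example (x, y) with y in {-1, +1}, at time step t >= 1. */
    static void step(double[] w, double[] x, double y, double lambda, int t) {
        double eta = 1.0 / (lambda * t);
        double dot = 0;
        for (int j = 0; j < w.length; j++)
            dot += w[j] * x[j];                      // margin computed before the update
        boolean violated = y * dot < 1;
        for (int j = 0; j < w.length; j++) {
            w[j] *= (1 - eta * lambda);              // shrink toward zero (regularization)
            if (violated)
                w[j] += eta * y * x[j];              // hinge-loss subgradient correction
        }
    }
}
```
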
Modifier and Type | Class and Description |
---|---|
class | **AMM**: This is the batch variant of the Adaptive Multi-Hyperplane Machine (AMM) algorithm. |
class | **CPM**: This class implements the Convex Polytope Machine (CPM), an extension of the linear SVM. |
class | **OnlineAMM**: This is the online variant of the Adaptive Multi-Hyperplane Machine (AMM) algorithm. |
Modifier and Type | Class and Description |
---|---|
class | **DecisionStump**: This class is a 1-rule: a one-level decision tree. |
class | **DecisionTree**: Creates a decision tree from DecisionStumps. |
class | **ERTrees**: Extra Randomized Trees (ERTrees) is an ensemble method built on top of ExtraTree. |
class | **ExtraTree**: The ExtraTree is an Extremely Randomized Tree. |
class | **RandomDecisionTree**: An extension of Decision Trees that ignores the given set of features to use, and instead selects a new random subset of features at each node. |
class | **RandomForest**: Random Forest is an extension of Bagging that is applied only to DecisionTrees. |
Modifier and Type | Class and Description |
---|---|
class | **FLAME**: Provides an implementation of the FLAME clustering algorithm.  |
class | **GapStatistic**: This class implements the Gap Statistic, a method for estimating the number of clusters in a data set. |
class | **HDBSCAN**: HDBSCAN is a density-based clustering algorithm that is an improvement over DBSCAN. |
class | **LSDBC**: A parallel implementation of Locally Scaled Density Based Clustering. |
class | **OPTICS**: An implementation of the OPTICS algorithm, a generalization of DBSCAN. |
Modifier and Type | Class and Description |
---|---|
class | **ElkanKernelKMeans**: An efficient kernelized K-Means implementation based on Elkan's algorithm. |
class | **ElkanKMeans**: An efficient implementation of the K-Means algorithm using Elkan's triangle-inequality acceleration. |
class | **GMeans**: Provides a method of performing KMeans clustering when the value of K is not known. |
class | **HamerlyKMeans**: An efficient implementation of the K-Means algorithm using Hamerly's acceleration. |
class | **KernelKMeans**: Base class for various kernel K-Means implementations. |
class | **KMeans**: Base class for the numerous implementations of k-means that exist. |
class | **KMeansPDN**: Provides a method of performing KMeans clustering when the value of K is not known. |
class | **LloydKernelKMeans**: An implementation of the naive algorithm for performing kernel k-means. |
class | **NaiveKMeans**: An implementation of Lloyd's K-Means clustering algorithm in its naive form. (A sketch of one Lloyd iteration follows this table.) |
class | **XMeans**: Provides a method of performing KMeans clustering when the value of K is not known. |
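
The naive algorithm that NaiveKMeans and LloydKernelKMeans refer to is Lloyd's iteration: assign every point to its nearest mean, recompute each mean, and repeat until assignments stop changing. One Euclidean iteration, sketched minimally (hypothetical names, not JSAT's API):

```java
import java.util.Arrays;

final class LloydIterationSketch {
    /** Assign each point to its nearest mean, then recompute the means in place. */
    static int[] iterate(double[][] points, double[][] means) {
        int[] assign = new int[points.length];
        for (int i = 0; i < points.length; i++) {      // assignment step
            double best = Double.POSITIVE_INFINITY;
            for (int k = 0; k < means.length; k++) {
                double d = 0;
                for (int j = 0; j < points[i].length; j++) {
                    double diff = points[i][j] - means[k][j];
                    d += diff * diff;
                }
                if (d < best) {
                    best = d;
                    assign[i] = k;
                }
            }
        }
        int[] counts = new int[means.length];          // update step
        for (double[] m : means)
            Arrays.fill(m, 0);
        for (int i = 0; i < points.length; i++) {
            counts[assign[i]]++;
            for (int j = 0; j < points[i].length; j++)
                means[assign[i]][j] += points[i][j];
        }
        for (int k = 0; k < means.length; k++)
            if (counts[k] > 0)
                for (int j = 0; j < means[k].length; j++)
                    means[k][j] /= counts[k];          // empty clusters keep a zero mean
        return assign;
    }
}
```
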
Modifier and Type | Class and Description |
---|---|
class | **DataModelPipeline**: A Data Model Pipeline combines several data transforms and a base Classifier or Regressor into a unified object for performing classification and regression. |
class | **DataTransformBase**: This abstract class implements the Parameterized interface to ease the development of simple data transforms. |
class | **DataTransformProcess**: Performing a transform on the whole data set before training a classifier can add bias to the results; a DataTransformProcess avoids this by learning its chain of transforms from the training data only and then applying them consistently. |
class | **JLTransform**: The Johnson-Lindenstrauss (JL) Transform is a type of random projection down to a lower-dimensional space. (A sketch follows this table.) |
class | **WhitenedPCA**: An extension of PCA that attempts to capture the variance and make the variables in the output space independent of each other. |
class | **WhitenedZCA**: An extension of WhitenedPCA: the Whitened Zero Component Analysis. |
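
A JL-style random projection can be sketched in a few lines: draw a k×d matrix R with i.i.d. Gaussian entries and map x to (1/√k)·R·x, which approximately preserves pairwise distances with high probability. This is one common choice of R and is illustrative only; JSAT's JLTransform offers several modes, and the names below are hypothetical.

```java
import java.util.Random;

final class JlProjectionSketch {
    private final double[][] r; // k x d projection matrix with N(0, 1) entries
    private final double scale; // 1 / sqrt(k) keeps expected squared norms unchanged

    JlProjectionSketch(int k, int d, long seed) {
        Random rng = new Random(seed);
        r = new double[k][d];
        for (double[] row : r)
            for (int j = 0; j < d; j++)
                row[j] = rng.nextGaussian();
        scale = 1.0 / Math.sqrt(k);
    }

    /** Project a d-dimensional point down to k dimensions. */
    double[] project(double[] x) {
        double[] y = new double[r.length];
        for (int i = 0; i < r.length; i++) {
            double s = 0;
            for (int j = 0; j < x.length; j++)
                s += r[i][j] * x[j];
            y[i] = scale * s;
        }
        return y;
    }
}
```
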
Modifier and Type | Class and Description |
---|---|
class | **KernelPCA**: A kernelized implementation of PCA. |
class | **Nystrom**: An implementation of the Nystrom approximation for any kernel trick. |
class | **RFF_RBF**: An implementation of Random Fourier Features for the RBFKernel. (A sketch of the feature map follows this table.) |
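
The idea behind Random Fourier Features for the RBF kernel: sample frequencies wᵢ ~ N(0, σ⁻²I) and offsets bᵢ ~ U[0, 2π), and map x to z(x) = √(2/D)·[cos(wᵢᵀx + bᵢ)] for i = 1..D; then z(x)·z(y) approximates the RBF kernel value, so linear methods on z behave like kernel methods. A minimal sketch of that feature map (hypothetical names, not JSAT's API):

```java
import java.util.Random;

final class RffRbfSketch {
    private final double[][] w; // D x d frequencies, sampled from N(0, sigma^-2 I)
    private final double[] b;   // D phase offsets, sampled from U[0, 2*pi)
    private final double scale; // sqrt(2 / D)

    RffRbfSketch(int D, int d, double sigma, long seed) {
        Random rng = new Random(seed);
        w = new double[D][d];
        b = new double[D];
        for (int i = 0; i < D; i++) {
            for (int j = 0; j < d; j++)
                w[i][j] = rng.nextGaussian() / sigma;
            b[i] = rng.nextDouble() * 2 * Math.PI;
        }
        scale = Math.sqrt(2.0 / D);
    }

    /** z(x) such that z(x) . z(y) approximates exp(-||x - y||^2 / (2 sigma^2)). */
    double[] features(double[] x) {
        double[] z = new double[w.length];
        for (int i = 0; i < w.length; i++) {
            double dot = b[i];
            for (int j = 0; j < x.length; j++)
                dot += w[i][j] * x[j];
            z[i] = scale * Math.cos(dot);
        }
        return z;
    }
}
```
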
Modifier and Type | Interface and Description |
---|---|
interface | **KernelTrick**: The KernelTrick is a method that can be used to alter an algorithm to do its calculations in a projected feature space, without explicitly forming the features. |
Modifier and Type | Class and Description |
---|---|
class | **BaseKernelTrick**: A simple base implementation for the cache-related methods in KernelTrick. |
class | **BaseL2Kernel**: Many kernels can be described in terms of the L2 norm with some operations performed on it. |
class | **DistanceMetricBasedKernel**: This abstract class provides the means of implementing a kernel based on some DistanceMetric. |
class | **GeneralRBFKernel**: A generalization of the RBFKernel to arbitrary distance metrics, of the form k(x, y) = exp(−d(x, y)² / (2σ²)). |
class | **LinearKernel**: Provides a linear kernel function, which computes the normal dot product. |
class | **NormalizedKernel**: A wrapper kernel that produces a normalized kernel trick from any input kernel trick. |
class | **PolynomialKernel**: Provides a polynomial kernel of the form k(x, y) = (α ⟨x, y⟩ + c)^d. |
class | **PukKernel**: The PUK kernel is an alternative to the RBF kernel. |
class | **RationalQuadraticKernel**: Provides an implementation of the Rational Quadratic Kernel, of the form k(x, y) = 1 − ‖x − y‖² / (‖x − y‖² + c). |
class | **RBFKernel**: Provides a kernel for the Radial Basis Function, of the form k(x, y) = exp(−‖x − y‖² / (2σ²)). (A sketch of this formula, and of the normalization above, follows this table.) |
class | **SigmoidKernel**: Provides an implementation of the Sigmoid (Hyperbolic Tangent) Kernel, of the form k(x, y) = tanh(α ⟨x, y⟩ + c). Technically, this kernel is not positive definite. |
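
Two of the formulas above are easy to make concrete. The sketch below evaluates the RBFKernel formula exp(−‖x − y‖²/(2σ²)) and the standard normalization that NormalizedKernel describes, k'(x, y) = k(x, y) / √(k(x, x)·k(y, y)). It is an illustration, not JSAT's implementation, and the names are hypothetical.

```java
import java.util.function.ToDoubleBiFunction;

final class KernelSketch {
    /** RBF kernel: k(x, y) = exp(-||x - y||^2 / (2 sigma^2)). */
    static double rbf(double[] x, double[] y, double sigma) {
        double d2 = 0;
        for (int j = 0; j < x.length; j++) {
            double d = x[j] - y[j];
            d2 += d * d;
        }
        return Math.exp(-d2 / (2 * sigma * sigma));
    }

    /** Normalized kernel: k'(x, y) = k(x, y) / sqrt(k(x, x) * k(y, y)). */
    static double normalized(ToDoubleBiFunction<double[], double[]> k,
                             double[] x, double[] y) {
        return k.applyAsDouble(x, y)
               / Math.sqrt(k.applyAsDouble(x, x) * k.applyAsDouble(y, y));
    }
}
```

For example, `KernelSketch.normalized((a, b) -> KernelSketch.rbf(a, b, 1.0), x, y)` evaluates the normalized RBF kernel; since k(x, x) = 1 for the RBF kernel, it is already normalized, and the wrapper matters for kernels like the polynomial kernel.
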
Modifier and Type | Class and Description |
---|---|
class | **MetricKDE**: MetricKDE is a generalization of the KernelDensityEstimator to the multivariate case. |
Modifier and Type | Class and Description |
---|---|
class | **ExponetialDecay**: The exponential decay requires the maximum time step to be known explicitly ahead of time. (Illustrative formulas for all four schedules follow this table.) |
class | **InverseDecay**: Decays an input by the inverse of the amount of time that has elapsed; the maximum time is irrelevant. |
class | **LinearDecay**: The linear decay requires the maximum time step to be known explicitly ahead of time. |
class | **PowerDecay**: Decays an input by a power of the amount of time that has elapsed; the maximum time is irrelevant. |
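
To illustrate the distinction the table draws (schedules that need the maximum time step versus those that do not), the sketch below gives one common parameterization of each family. These are assumptions for illustration; JSAT's exact formulas and parameter names may differ.

```java
final class DecaySketch {
    /** eta0 * rate^(t / maxT): exponential decay, needs the maximum time maxT. */
    static double exponential(double eta0, double rate, double t, double maxT) {
        return eta0 * Math.pow(rate, t / maxT);
    }

    /** eta0 * (1 - t / maxT): linear decay, needs maxT; reaches zero at t = maxT. */
    static double linear(double eta0, double t, double maxT) {
        return eta0 * (1 - t / maxT);
    }

    /** eta0 / (1 + alpha * t): inverse decay, no maximum time needed. */
    static double inverse(double eta0, double alpha, double t) {
        return eta0 / (1 + alpha * t);
    }

    /** eta0 * (1 + alpha * t)^(-p): power decay, no maximum time needed. */
    static double power(double eta0, double alpha, double p, double t) {
        return eta0 * Math.pow(1 + alpha * t, -p);
    }
}
```
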
Modifier and Type | Class and Description |
---|---|
class | **KernelRidgeRegression**: A kernelized implementation of Ridge Regression. |
class | **KernelRLS**: Provides an implementation of the Kernel Recursive Least Squares algorithm. |
class | **NadarayaWatson**: The Nadaraya-Watson regressor uses a Kernel Density Estimator to perform regression on a data set. |
class | **OrdinaryKriging**: An implementation of Ordinary Kriging with support for a uniform error measurement. |
class | **RANSAC**: RANSAC is a randomized meta-algorithm useful for fitting a model to a data set with many outliers that do not represent the true distribution well. |
class | **RidgeRegression**: An implementation of Ridge Regression that finds the exact solution. |
class | **StochasticGradientBoosting**: An implementation of Stochastic Gradient Boosting (SGB) for the squared-error loss. |
class | **StochasticRidgeRegression**: A stochastic implementation of Ridge Regression. (An update sketch follows this table.) |
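
The difference between RidgeRegression and StochasticRidgeRegression is the solver, not the objective: the exact version solves the linear system (XᵀX + λI)w = Xᵀy directly, while the stochastic version descends the same objective one example at a time. A minimal sketch of that stochastic step (hypothetical names, not JSAT's API):

```java
final class StochasticRidgeSketch {
    /**
     * One SGD step on the ridge objective 0.5 (w.x - y)^2 + 0.5 lambda ||w||^2:
     * w <- w - eta * ((w.x - y) x + lambda w).
     */
    static void step(double[] w, double[] x, double y, double eta, double lambda) {
        double residual = -y;
        for (int j = 0; j < w.length; j++)
            residual += w[j] * x[j];            // residual = w.x - y
        for (int j = 0; j < w.length; j++)
            w[j] -= eta * (residual * x[j] + lambda * w[j]);
    }
}
```
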
Modifier and Type | Class and Description |
---|---|
class | **OnlineLDAsvi**: This class provides an implementation of Latent Dirichlet Allocation for learning a topic model from a set of documents. |