Modifier and Type | Interface and Description |
---|---|
interface | UpdateableClassifier: An interface for one type of online learner. |
interface | WarmClassifier: This interface is meant for models that support efficient warm starting from the solution of a previous model. |
Modifier and Type | Class and Description |
---|---|
class | BaseUpdateableClassifier: A base implementation of the UpdateableClassifier interface. |
class | DDAG: Decision Directed Acyclic Graph (DDAG) classifier. |
class | MajorityVote: A simple ensemble classifier that takes the majority vote of its members. |
class | MultinomialLogisticRegression: An extension of LogisticRegression for classification when there are more than two target classes. |
class | OneVSAll: Turns any classifier, specifically binary classifiers, into a multi-class classifier. |
class | OneVSOne: Extends binary decision classifiers into multi-class decision classifiers. |
class | PriorClassifier: A naive classifier that simply returns the prior probabilities as the classification decision. |
class | RegressorToClassifier: A meta-algorithm that wraps a Regressor to perform binary classification. |
class | Rocchio |
Modifier and Type | Field and Description |
---|---|
protected Classifier | OneVSOne.baseClassifier: The main binary classifier. |
protected Classifier[][] | OneVSOne.oneVone: Upper-diagonal matrix of classifiers, sans the first index, since a classifier versus itself is useless. |
Modifier and Type | Method and Description |
---|---|
Classifier | PriorClassifier.clone() |
Classifier | MajorityVote.clone() |
Classifier | Classifier.clone() |
Classifier | ClassificationModelEvaluation.getClassifier(): Returns the classifier that was originally given for evaluation. |
Classifier[] | ClassificationModelEvaluation.getKeptModels(): Returns the models that were kept after the last evaluation. |
Modifier and Type | Method and Description |
---|---|
void | ClassificationModelEvaluation.setWarmModels(Classifier... warmModels): Sets the models that will be used for warm starting training. |
void | WarmClassifier.trainC(ClassificationDataSet dataSet, Classifier warmSolution): Trains the classifier and constructs a model for classification using the given data set. |
void | WarmClassifier.trainC(ClassificationDataSet dataSet, Classifier warmSolution, ExecutorService threadPool): Trains the classifier and constructs a model for classification using the given data set. |
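
The two trainC overloads above define the warm-start contract: a compatible, previously trained model is passed as warmSolution so training can begin near a known-good solution instead of from scratch. A minimal sketch, assuming the DCDs class (shown further down this page with these same overloads) and assuming DCDs exposes a no-argument constructor:

```java
import jsat.classifiers.ClassificationDataSet;
import jsat.classifiers.svm.DCDs;

// Minimal warm-start sketch: train one model cold, then use its solution
// as the starting point for a second model via the two-argument trainC.
public class WarmStartSketch
{
    public static DCDs warmTrained(ClassificationDataSet dataSet)
    {
        DCDs cold = new DCDs();     // assumed no-arg constructor
        cold.trainC(dataSet);       // ordinary training from scratch

        DCDs warm = new DCDs();
        warm.trainC(dataSet, cold); // warm start from the cold solution
        return warm;
    }
}
```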
Constructor and Description |
---|
ClassificationModelEvaluation(Classifier classifier, ClassificationDataSet dataSet): Constructs a new object that can perform evaluations on the model. |
ClassificationModelEvaluation(Classifier classifier, ClassificationDataSet dataSet, ExecutorService threadpool): Constructs a new object that can perform evaluations on the model. |
DDAG(Classifier baseClassifier): Creates a new DDAG classifier to extend a binary classifier to handle multi-class problems. |
DDAG(Classifier baseClassifier, boolean concurrentTrain): Creates a new DDAG classifier to extend a binary classifier to handle multi-class problems. |
MajorityVote(Classifier... voters): Creates a new Majority Vote classifier using the given voters. |
OneVSAll(Classifier baseClassifier): Creates a new One VS All classifier. |
OneVSAll(Classifier baseClassifier, boolean concurrentTraining): Creates a new One VS All classifier. |
OneVSOne(Classifier baseClassifier): Creates a new One-vs-One classifier. |
OneVSOne(Classifier baseClassifier, boolean concurrentTrain): Creates a new One-vs-One classifier. |
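
Taken together, these constructors suggest a simple workflow: wrap a binary learner in OneVSAll (or OneVSOne, or DDAG) and hand the result to ClassificationModelEvaluation. A minimal sketch, assuming LogisticRegressionDCD (listed later on this page) as the binary base learner, and assuming ClassificationModelEvaluation provides evaluateCrossValidation and getErrorRate methods:

```java
import jsat.classifiers.ClassificationDataSet;
import jsat.classifiers.ClassificationModelEvaluation;
import jsat.classifiers.OneVSAll;
import jsat.classifiers.linear.LogisticRegressionDCD;

// Sketch: extend a binary classifier to multi-class and cross-validate it.
public class OneVsAllSketch
{
    public static double errorRate(ClassificationDataSet dataSet)
    {
        // Any binary classifier can serve as the base learner
        OneVSAll ova = new OneVSAll(new LogisticRegressionDCD());

        ClassificationModelEvaluation eval =
                new ClassificationModelEvaluation(ova, dataSet);
        eval.evaluateCrossValidation(10); // assumed method: 10-fold CV
        return eval.getErrorRate();       // fraction misclassified
    }
}
```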
Constructor and Description |
---|
MajorityVote(List<Classifier> voters): Creates a new Majority Vote classifier using the given voters. |
Modifier and Type | Class and Description |
---|---|
class | AODE: Averaged One-Dependence Estimators (AODE) is an extension of Naive Bayes that attempts to be more accurate by reducing the independence assumption. |
class | BestClassDistribution: A generic class for performing classification by fitting a MultivariateDistribution to each class. |
class | ConditionalProbabilityTable: The conditional probability table (CPT) is a classifier for categorical attributes. |
class | MultinomialNaiveBayes: An implementation of the Multinomial Naive Bayes model (MNB). |
class | MultivariateNormals: This classifier can be seen as an extension of NaiveBayes. |
class | NaiveBayes: Provides an implementation of the Naive Bayes classifier that assumes numeric features come from some continuous probability distribution. |
class | NaiveBayesUpdateable: An implementation of Gaussian Naive Bayes that can be updated in an online fashion. |
class | ODE: One-Dependence Estimators (ODE) is an extension of Naive Bayes that, instead of assuming all features are independent, assumes all features are dependent on one other feature besides the target class. |
Modifier and Type | Method and Description |
---|---|
Classifier | NaiveBayes.clone() |
Classifier | ConditionalProbabilityTable.clone() |
Classifier | BestClassDistribution.clone() |
Modifier and Type | Class and Description |
---|---|
class | DiscreteBayesNetwork: A class for representing a Bayesian Network (BN) for discrete variables. |
class | K2NetworkLearner: An implementation of the K2 algorithm for learning the structure of a Bayesian Network. |
Modifier and Type | Method and Description |
---|---|
Classifier | DiscreteBayesNetwork.clone() |
Modifier and Type | Class and Description |
---|---|
class | AdaBoostM1: Implementation of Experiments with a New Boosting Algorithm, by Yoav Freund & Robert E. Schapire. |
class | AdaBoostM1PL: An extension to the original AdaBoostM1 algorithm for parallel training. |
class | ArcX4: Arc-x4 is an ensemble classifier that re-weights the data points based on the total number of errors that have occurred for each data point. |
class | Bagging: An implementation of Bootstrap Aggregating, as described by Leo Breiman in "Bagging Predictors". |
class | EmphasisBoost: A generalization of the Real AdaBoost algorithm that expands the update term and provides a λ term to control the trade-off. |
class | LogitBoost: An implementation of the original two-class LogitBoost algorithm. |
class | LogitBoostPL: An extension to the original LogitBoost algorithm for parallel training. |
class | ModestAdaBoost: Modest AdaBoost is a generalization of Discrete AdaBoost that attempts to reduce the generalization error and avoid over-fitting. |
class | SAMME: An implementation of the multi-class AdaBoost method SAMME (Stagewise Additive Modeling using a Multi-class Exponential loss function), presented in Multi-class AdaBoost by Ji Zhu, Saharon Rosset, Hui Zou, & Trevor Hastie. This algorithm reduces to AdaBoostM1 for binary classification problems. |
class | Stacking: This provides an implementation of the Stacking ensemble method. |
class | UpdatableStacking: This provides an implementation of the Stacking ensemble method meant for updatable models. |
class | Wagging: Wagging is a meta-classifier that is related to Bagging. |
class | WaggingNormal: Wagging using the Normal distribution. |
Modifier and Type | Field and Description |
---|---|
protected List<Classifier> | ModestAdaBoost.hypoths: The list of weak hypotheses. |
protected List<Classifier> | EmphasisBoost.hypoths: The list of weak hypotheses. |
protected List<Classifier> | AdaBoostM1.hypoths: The list of weak hypotheses. |
Modifier and Type | Method and Description |
---|---|
Classifier | Wagging.getWeakClassifier(): Returns the weak learner used for classification. |
Classifier | ModestAdaBoost.getWeakLearner(): Returns the weak learner currently being used by this method. |
Classifier | EmphasisBoost.getWeakLearner(): Returns the weak learner currently being used by this method. |
Classifier | ArcX4.getWeakLearner(): Returns the weak learner used. |
Classifier | AdaBoostM1.getWeakLearner(): Returns the weak learner currently being used by this method. |
Modifier and Type | Method and Description |
---|---|
List<Classifier> | SAMME.getModels() |
List<Classifier> | ModestAdaBoost.getModels() |
List<Classifier> | EmphasisBoost.getModels() |
List<Classifier> | AdaBoostM1.getModels() |
Modifier and Type | Method and Description |
---|---|
void | Wagging.setWeakLearner(Classifier weakL): Sets the weak learner used for classification. |
void | ModestAdaBoost.setWeakLearner(Classifier weakLearner): Sets the weak learner used during training. |
void | EmphasisBoost.setWeakLearner(Classifier weakLearner): Sets the weak learner used during training. |
void | ArcX4.setWeakLearner(Classifier weakLearner): Sets the weak learner used at each iteration of learning. |
void | AdaBoostM1.setWeakLearner(Classifier weakLearner): Sets the weak learner used during training. |
Constructor and Description |
---|
AdaBoostM1(Classifier weakLearner, int maxIterations) |
AdaBoostM1PL(Classifier weakLearner, int maxIterations) |
ArcX4(Classifier weakLearner, int iterations): Creates a new Arc-X4 classifier. |
Bagging(Classifier baseClassifier): Creates a new Bagger for classification. |
Bagging(Classifier baseClassifier, int extraSamples, boolean simultaniousTraining): Creates a new Bagger for classification. |
Bagging(Classifier baseClassifier, int extraSamples, boolean simultaniousTraining, int rounds, Random random): Creates a new Bagger for classification. |
EmphasisBoost(Classifier weakLearner, int maxIterations, double lambda): Creates a new EmphasisBoost learner. |
ModestAdaBoost(Classifier weakLearner, int maxIterations): Creates a new ModestAdaBoost learner. |
SAMME(Classifier weakLearner, int maxIterations) |
Stacking(Classifier aggregatingClassifier, Classifier... baseClassifiers): Creates a new Stacking classifier that uses 3 folds of cross validation. |
Stacking(Classifier aggregatingClassifier, List<Classifier> baseClassifiers): Creates a new Stacking classifier that uses 3 folds of cross validation. |
Stacking(int folds, Classifier aggregatingClassifier, Classifier... baseClassifiers): Creates a new Stacking classifier. |
Stacking(int folds, Classifier aggregatingClassifier, List<Classifier> baseClassifiers): Creates a new Stacking classifier. |
Wagging(ContinuousDistribution dist, Classifier weakL, int iterations): Creates a new Wagging classifier. |
WaggingNormal(Classifier weakLearner, int interations): Creates a new Wagging classifier. |
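
As a usage note, the boosting and bagging constructors above share one pattern: a weak base Classifier plus an iteration or round count. A minimal sketch, assuming DecisionStump and DecisionTree (both listed in the trees section below) expose no-argument constructors:

```java
import jsat.classifiers.ClassificationDataSet;
import jsat.classifiers.boosting.AdaBoostM1;
import jsat.classifiers.boosting.Bagging;
import jsat.classifiers.trees.DecisionStump;
import jsat.classifiers.trees.DecisionTree;

// Sketch: two common ensembles built from the constructors listed above.
public class EnsembleSketch
{
    public static void train(ClassificationDataSet dataSet)
    {
        // Boost a 1-rule stump for up to 100 rounds
        AdaBoostM1 boosted = new AdaBoostM1(new DecisionStump(), 100);
        boosted.trainC(dataSet);

        // Bag a full decision tree with default sampling settings
        Bagging bagged = new Bagging(new DecisionTree());
        bagged.trainC(dataSet);
    }
}
```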
Constructor and Description |
---|
Stacking(Classifier aggregatingClassifier, List<Classifier> baseClassifiers): Creates a new Stacking classifier that uses 3 folds of cross validation. |
Stacking(int folds, Classifier aggregatingClassifier, List<Classifier> baseClassifiers): Creates a new Stacking classifier. |
Modifier and Type | Interface and Description |
---|---|
interface | BinaryScoreClassifier: Many algorithms learn a binary separation between two classes A and B by representing the target labels with -1 and 1. |
Modifier and Type | Class and Description |
---|---|
class | BinaryCalibration: This abstract class provides the framework for an algorithm to perform probability calibration based on the outputs of a base learning algorithm for binary classification problems. |
class | IsotonicCalibration: Isotonic Calibration is non-parametric, and only assumes that the underlying distribution from negative to positive examples is strictly a non-decreasing function. |
class | PlattCalibration: Platt Calibration essentially performs logistic regression on the output scores of a model against their class labels. |
Modifier and Type | Class and Description |
---|---|
class | DANN: An implementation of Discriminant Adaptive Nearest Neighbor. |
class | LWL: Locally Weighted Learner (LWL) is the combined generalized implementation of Locally Weighted Regression (LWR) and Locally Weighted Naive Bayes (LWNB). |
class | NearestNeighbour: An implementation of the Nearest Neighbor algorithm, but with a British spelling! How fancy. |
Modifier and Type | Method and Description |
---|---|
Classifier | DANN.clone() |
Constructor and Description |
---|
LWL(Classifier classifier, int k, DistanceMetric dm): Creates a new LWL classifier. |
LWL(Classifier classifier, int k, DistanceMetric dm, KernelFunction kf): Creates a new LWL classifier. |
LWL(Classifier classifier, int k, DistanceMetric dm, KernelFunction kf, VectorCollectionFactory<VecPaired<Vec,Double>> vcf): Creates a new LWL classifier. |
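
The three LWL constructors differ only in how much of the weighting machinery is specified explicitly. A minimal sketch using the shortest form, assuming EuclideanDistance from jsat.linear.distancemetrics and NaiveBayes (listed earlier on this page) as the local model:

```java
import jsat.classifiers.ClassificationDataSet;
import jsat.classifiers.bayesian.NaiveBayes;
import jsat.classifiers.knn.LWL;
import jsat.linear.distancemetrics.EuclideanDistance;

// Sketch: a locally weighted learner that fits a NaiveBayes model to the
// k = 15 nearest neighbors of each query point.
public class LwlSketch
{
    public static LWL build(ClassificationDataSet dataSet)
    {
        LWL lwl = new LWL(new NaiveBayes(), 15, new EuclideanDistance());
        lwl.trainC(dataSet);
        return lwl;
    }
}
```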
Modifier and Type | Class and Description |
---|---|
class | ALMA2: Provides a linear implementation of the ALMAp algorithm for p = 2, which is considerably more efficient to compute. |
class | AROW: An implementation of Adaptive Regularization of Weight Vectors (AROW), which uses second order information to learn a large margin binary classifier. |
class | BBR: An implementation of Bayesian Binary Regression for L1 and L2 regularized logistic regression. |
class | LinearBatch: LinearBatch learns either a classification or regression problem depending on the loss function ℓ(w,x) used. |
class | LinearL1SCD: Implements an iterative and single-threaded form of fast Stochastic Coordinate Descent for optimizing L1 regularized linear regression problems. |
class | LinearSGD: LinearSGD learns either a classification or regression problem depending on the loss function ℓ(w,x) used. |
class | LogisticRegressionDCD: This provides an implementation of regularized logistic regression using Dual Coordinate Descent. |
class | NewGLMNET: NewGLMNET is a batch method for solving Elastic Net regularized Logistic Regression problems of the form $0.5\,(1-\alpha)\|w\|_2^2 + \alpha\|w\|_1 + C\sum_{i=1}^{N} \ell(w^T x_i + b,\, y_i)$. |
class | NHERD: Implementation of the Normal Herd (NHERD) algorithm for learning a linear binary classifier. |
class | PassiveAggressive: An implementation of the three versions of the Passive Aggressive algorithm for binary classification and regression. |
class | ROMMA: Provides an implementation of the linear Relaxed Online Maximum Margin Algorithm (ROMMA), which finds a solution similar to that of SVMs. |
class | SCD: Implementation of Stochastic Coordinate Descent for L1 regularized classification and regression. |
class | SCW: Provides an implementation of Confidence-Weighted (CW) learning and Soft Confidence-Weighted (SCW) learning, both of which are binary linear classifiers inspired by PassiveAggressive. |
class | SMIDAS: Implements the iterative and single-threaded stochastic solver SMIDAS (Stochastic Mirror Descent Algorithm mAde Sparse) for L1 regularized linear regression problems. |
class | SPA: Support-class Passive Aggressive (SPA) is a multi-class generalization of PassiveAggressive. |
class | STGD: This provides an implementation of Sparse Truncated Gradient Descent for L1 regularized linear classification and regression on sparse data sets. |
class | StochasticMultinomialLogisticRegression: This is a stochastic implementation of Multinomial Logistic Regression. |
class | StochasticSTLinearL1: This base class provides shared functionality and variables used by two different training algorithms for L1 regularized linear models. |
Modifier and Type | Method and Description |
---|---|
Classifier | StochasticMultinomialLogisticRegression.clone() |
Classifier | LogisticRegressionDCD.clone() |
Modifier and Type | Method and Description |
---|---|
void | NewGLMNET.trainC(ClassificationDataSet dataSet, Classifier warmSolution) |
void | LinearBatch.trainC(ClassificationDataSet dataSet, Classifier warmSolution) |
void | NewGLMNET.trainC(ClassificationDataSet dataSet, Classifier warmSolution, ExecutorService threadPool) |
void | LinearBatch.trainC(ClassificationDataSet D, Classifier warmSolution, ExecutorService threadPool) |
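
These overloads mirror the WarmClassifier contract from the top of this page. One natural use is sweeping a regularization parameter and warm starting each fit from the previous one. A minimal sketch, assuming NewGLMNET exposes a no-argument constructor and a single-double constructor for the regularization strength; both constructors are assumptions, not confirmed by this page:

```java
import jsat.classifiers.ClassificationDataSet;
import jsat.classifiers.linear.NewGLMNET;

// Sketch: sweep the regularization strength, warm starting each fit from
// the previous solution instead of retraining from scratch.
public class RegularizationPathSketch
{
    public static void sweep(ClassificationDataSet dataSet)
    {
        NewGLMNET previous = new NewGLMNET(); // assumed no-arg constructor
        previous.trainC(dataSet);

        for (double c : new double[] {0.01, 0.1, 1, 10})
        {
            NewGLMNET next = new NewGLMNET(c); // assumed (double C) constructor
            next.trainC(dataSet, previous);    // warm start from the last fit
            previous = next;
        }
    }
}
```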
Modifier and Type | Class and Description |
---|---|
class | ALMA2K: Provides a kernelized version of the ALMA2 algorithm. |
class | BOGD: Bounded Online Gradient Descent (BOGD) is a kernel learning algorithm that uses a bounded number of support vectors. |
class | CSKLR: An implementation of Conservative Stochastic Kernel Logistic Regression. |
class | CSKLRBatch: An implementation of Conservative Stochastic Kernel Logistic Regression. |
class | DUOL: Provides an implementation of the Double Update Online Learning (DUOL) algorithm. |
class | Forgetron: Implementation of the first two Forgetron algorithms. |
class | KernelSGD: Kernel SGD is the kernelized counterpart to LinearSGD, and learns nonlinear functions via the kernel trick. |
class | OSKL: Online Sparse Kernel Learning by Sampling and Smooth Losses (OSKL) is an online algorithm for learning sparse kernelized solutions to binary classification problems. |
class | Projectron: An implementation of the Projectron and Projectron++ algorithms. |
Modifier and Type | Class and Description |
---|---|
class | BackPropagationNet: An implementation of a Feed Forward Neural Network (NN) trained by Back Propagation. |
class | DReDNetSimple: This class provides a neural network based on Geoffrey Hinton's Deep Rectified Dropout Nets. |
class | LVQ: Learning Vector Quantization (LVQ) is an algorithm that extends SOM to take advantage of label information to perform classification. |
class | LVQLLC: LVQ with Locally Learned Classifier (LVQ-LLC) is an adaptation of the LVQ algorithm I have come up with. |
class | Perceptron: The perceptron is a simple algorithm that attempts to find a hyperplane that separates two classes. |
class | RBFNet: This provides a highly configurable implementation of a Radial Basis Function Neural Network. |
class | SOM: An implementation of a Self Organizing Map, also called a Kohonen Map. |
Modifier and Type | Method and Description |
---|---|
Classifier | LVQLLC.getLocalClassifier(): Returns the classifier used for each prototype. |
Modifier and Type | Method and Description |
---|---|
void | LVQLLC.setLocalClassifier(Classifier localClassifier): Each prototype will create a classifier that is local to itself, and trained on the points that belong to the prototype and those near the border of the prototype. |
Constructor and Description |
---|
LVQLLC(DistanceMetric dm, int iterations, Classifier localClasifier): Creates a new LVQ-LLC instance. |
LVQLLC(DistanceMetric dm, int iterations, Classifier localClasifier, double learningRate, int representativesPerClass): Creates a new LVQ-LLC instance. |
LVQLLC(DistanceMetric dm, int iterations, Classifier localClasifier, double learningRate, int representativesPerClass, LVQ.LVQVersion lvqVersion, DecayRate learningDecay): Creates a new LVQ-LLC instance. |
RBFNet(int numCentroids, RBFNet.Phase1Learner cl, RBFNet.Phase2Learner bl, double alpha, int p, DistanceMetric dm, Classifier baseClassifier): Creates a new RBF Network for classification tasks. |
Modifier and Type | Class and Description |
---|---|
class | DCD: Implements Dual Coordinate Descent (DCD) training algorithms for a linear L1 or L2 Support Vector Machine for binary classification and regression. |
class | DCDs: Implements Dual Coordinate Descent with shrinking (DCDs) training algorithms for a linear L1 or L2 Support Vector Machine for binary classification and regression. |
class | DCSVM: This is an implementation of the Divide-and-Conquer Support Vector Machine (DC-SVM). |
class | LSSVM: The Least Squares Support Vector Machine (LS-SVM) is an alternative to the standard SVM classifier for regression and binary classification problems. |
class | Pegasos: Implements the linear kernel mini-batch version of the Pegasos SVM classifier. |
class | PegasosK: Implements the kernelized version of the Pegasos algorithm for SVMs. |
class | PlattSMO: An implementation of SVMs using Platt's Sequential Minimal Optimization (SMO) for both classification and regression problems. |
class | SBP: Implementation of the Stochastic Batch Perceptron (SBP) algorithm. |
class | SVMnoBias: This class implements a version of the Support Vector Machine without a bias term. |
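
Several of the classes above are parameterized by a kernel. A minimal sketch, assuming PlattSMO accepts a kernel at construction, that RBFKernel lives in jsat.distributions.kernels, and that setC adjusts the soft-margin penalty; treat all three as assumptions rather than confirmed API:

```java
import jsat.classifiers.ClassificationDataSet;
import jsat.classifiers.svm.PlattSMO;
import jsat.distributions.kernels.RBFKernel;

// Sketch: a kernel SVM trained with Platt's SMO and an RBF kernel.
public class SvmSketch
{
    public static PlattSMO build(ClassificationDataSet dataSet)
    {
        // sigma = 0.5 and C = 10 are arbitrary illustrative choices
        PlattSMO smo = new PlattSMO(new RBFKernel(0.5));
        smo.setC(10);
        smo.trainC(dataSet);
        return smo;
    }
}
```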
Modifier and Type | Method and Description |
---|---|
void | PlattSMO.trainC(ClassificationDataSet dataSet, Classifier warmSolution) |
void | LSSVM.trainC(ClassificationDataSet dataSet, Classifier warmSolution) |
void | DCDs.trainC(ClassificationDataSet dataSet, Classifier warmSolution) |
void | PlattSMO.trainC(ClassificationDataSet dataSet, Classifier warmSolution, ExecutorService threadPool) |
void | LSSVM.trainC(ClassificationDataSet dataSet, Classifier warmSolution, ExecutorService threadPool) |
void | DCDs.trainC(ClassificationDataSet dataSet, Classifier warmSolution, ExecutorService threadPool) |
Modifier and Type | Class and Description |
---|---|
class | AMM: This is the batch variant of the Adaptive Multi-Hyperplane Machine (AMM) algorithm. |
class | CPM: This class implements the Convex Polytope Machine (CPM), which is an extension of the Linear SVM. |
class | OnlineAMM: This is the online variant of the Adaptive Multi-Hyperplane Machine (AMM) algorithm. |
Modifier and Type | Class and Description |
---|---|
class | DecisionStump: This class is a 1-rule. |
class | DecisionTree: Creates a decision tree from DecisionStumps. |
class | ERTrees: Extra Randomized Trees (ERTrees) is an ensemble method built on top of ExtraTree. |
class | ExtraTree: The ExtraTree is an Extremely Randomized Tree. |
class | ID3 |
class | RandomDecisionTree: An extension of Decision Trees that ignores the given set of features and instead selects a new random subset of features at each node. |
class | RandomForest: Random Forest is an extension of Bagging that is applied only to DecisionTrees. |
Modifier and Type | Method and Description |
---|---|
Classifier | ID3.clone() |
Modifier and Type | Class and Description |
---|---|
class | DataModelPipeline: A Data Model Pipeline combines several data transforms and a base Classifier or Regressor into a unified object for performing classification and regression. |
Constructor and Description |
---|
DataModelPipeline(Classifier baseClassifier, DataTransform... transforms): Creates a new Data Model Pipeline from the given transform factories and base classifier. |
DataModelPipeline(Classifier baseClassifier, DataTransformProcess dtp): Creates a new Data Model Pipeline from the given transform process and base classifier. |
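
A minimal sketch of the pipeline idea: preprocessing and a classifier bundled into one trainable object. ZeroMeanTransform is assumed to exist in jsat.datatransform with a no-argument constructor; substitute whatever DataTransform your JSAT version provides:

```java
import jsat.classifiers.ClassificationDataSet;
import jsat.classifiers.DataModelPipeline;
import jsat.classifiers.linear.LogisticRegressionDCD;
import jsat.datatransform.ZeroMeanTransform;

// Sketch: the pipeline fits its transforms and the base classifier together.
public class PipelineSketch
{
    public static DataModelPipeline build(ClassificationDataSet dataSet)
    {
        DataModelPipeline pipe = new DataModelPipeline(
                new LogisticRegressionDCD(), new ZeroMeanTransform());
        pipe.trainC(dataSet); // transforms are fit and applied during training
        return pipe;
    }
}
```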
Constructor and Description |
---|
BDS(int featureCount, ClassificationDataSet dataSet, Classifier evaluator, int folds): Performs BDS feature selection for a classification problem. |
BDS(int featureCount, Classifier evaluator, int folds): Creates a BDS feature selection object for a classification problem. |
LRS(int L, int R, ClassificationDataSet cds, Classifier evaluater, int folds): Performs LRS feature selection for a classification problem. |
LRS(int L, int R, Classifier evaluater, int folds): Creates a LRS feature selection object for a classification problem. |
SBS(int minFeatures, int maxFeatures, ClassificationDataSet cds, Classifier evaluater, int folds, double maxDecrease): Performs SBS feature selection for a classification problem. |
SBS(int minFeatures, int maxFeatures, Classifier evaluater, double maxDecrease): Performs SBS feature selection for a classification problem. |
SFS(int minFeatures, int maxFeatures, ClassificationDataSet dataSet, Classifier evaluater, int folds, double maxIncrease): Performs SFS feature selection for a classification problem. |
SFS(int minFeatures, int maxFeatures, Classifier evaluater, double maxIncrease): Performs SFS feature selection for a classification problem. |
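
Each of the selection objects above is a data transform driven by a wrapped evaluator. A minimal sketch using the four-argument BDS constructor shown, assuming the resulting transform can be applied with the data set's applyTransform method:

```java
import jsat.classifiers.ClassificationDataSet;
import jsat.classifiers.bayesian.NaiveBayes;
import jsat.datatransform.featureselection.BDS;

// Sketch: keep 5 features chosen by bidirectional search, scored by
// 10-fold cross validation of a NaiveBayes evaluator.
public class FeatureSelectionSketch
{
    public static void select(ClassificationDataSet dataSet)
    {
        BDS bds = new BDS(5, dataSet, new NaiveBayes(), 10);
        dataSet.applyTransform(bds); // assumed DataSet method
    }
}
```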
Modifier and Type | Class and Description |
---|---|
class | GridSearch: GridSearch is a simple method for tuning the parameters of a classification or regression algorithm. |
class | ModelSearch: This abstract class provides boilerplate for algorithms that search a model's parameter space to find the parameters that provide the best overall performance. |
class | RandomSearch: Random Search is a simple method for tuning the parameters of a classification or regression algorithm. |
Modifier and Type | Field and Description |
---|---|
protected Classifier | ModelSearch.baseClassifier |
protected Classifier | ModelSearch.trainedClassifier |
Modifier and Type | Method and Description |
---|---|
Classifier | ModelSearch.getBaseClassifier(): Returns the base classifier that was originally passed in when constructing this object. |
Classifier | ModelSearch.getTrainedClassifier(): Returns the resultant classifier trained on the whole data set after performing parameter tuning. |
Constructor and Description |
---|
GridSearch(Classifier baseClassifier, int folds): Creates a new GridSearch to tune the specified parameters of a classification model. |
ModelSearch(Classifier baseClassifier, int folds) |
RandomSearch(Classifier baseClassifier, int folds): Creates a new RandomSearch to tune the specified parameters of a classification model. |
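
Since ModelSearch implementations are themselves classifiers, a tuned model trains like any other; the parameters to sweep must first be registered on the search object (GridSearch's addParameter methods, an assumed part of its API, serve this role). A minimal sketch:

```java
import jsat.classifiers.ClassificationDataSet;
import jsat.classifiers.Classifier;
import jsat.classifiers.linear.LogisticRegressionDCD;
import jsat.parameters.GridSearch;

// Sketch: 10-fold grid search over a base classifier's parameters.
public class TuningSketch
{
    public static Classifier tune(ClassificationDataSet dataSet)
    {
        GridSearch search = new GridSearch(new LogisticRegressionDCD(), 10);
        // ... register parameters and candidate values here ...
        search.trainC(dataSet); // evaluates the grid, then trains the best model
        return search.getTrainedClassifier(); // the tuned, trained model
    }
}
```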
Modifier and Type | Class and Description |
---|---|
class | LogisticRegression: Logistic regression is a common method used to fit a probability model for binary outcomes. |