public class EpsilonInsensitiveLoss extends Object implements LossR

See Also: AbsoluteLoss
Constructor and Description |
---|
`EpsilonInsensitiveLoss(double eps)` Creates a new ε-insensitive loss |
`EpsilonInsensitiveLoss(EpsilonInsensitiveLoss toCopy)` Copy constructor |
Modifier and Type | Method and Description |
---|---|
`EpsilonInsensitiveLoss` | `clone()` |
`static double` | `deriv(double pred, double y, double eps)` Computes the first derivative of the ε-insensitive loss |
`double` | `getDeriv(double pred, double y)` Computes the first derivative of the loss function |
`double` | `getDeriv2(double pred, double y)` Computes the second derivative of the loss function |
`double` | `getDeriv2Max()` Returns an upper bound on the maximum value of the second derivative |
`double` | `getLoss(double pred, double y)` Computes the loss for a regression problem |
`double` | `getRegression(double score)` Given the score value of a data point, this returns the correct numeric regression result |
`static double` | `loss(double pred, double y, double eps)` Computes the ε-insensitive loss |
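For reference, the ε-insensitive loss summarized above is conventionally defined as max(0, |pred − y| − ε): prediction errors smaller than the tolerance ε cost nothing, and larger errors grow linearly. The sketch below is a standalone illustration of that formula under this standard definition, not JSAT's actual source.

```java
// Standalone sketch of the epsilon-insensitive loss formula,
// loss(pred, y, eps) = max(0, |pred - y| - eps).
// Illustrative only; this mirrors what the static loss(pred, y, eps)
// method is documented to compute, but is not JSAT's implementation.
public class EpsilonLossSketch {
    public static double loss(double pred, double y, double eps) {
        // Errors inside the epsilon tube around the true value cost nothing
        return Math.max(0.0, Math.abs(pred - y) - eps);
    }

    public static void main(String[] args) {
        System.out.println(loss(1.2, 1.0, 0.5)); // inside the tube, prints 0.0
        System.out.println(loss(2.0, 1.0, 0.5)); // outside the tube, prints 0.5
    }
}
```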
public EpsilonInsensitiveLoss(double eps)

Parameters:
  eps - the epsilon tolerance on error

public EpsilonInsensitiveLoss(EpsilonInsensitiveLoss toCopy)

Parameters:
  toCopy - the object to copy

public static double loss(double pred, double y, double eps)
Parameters:
  pred - the predicted value
  y - the true value
  eps - the epsilon tolerance

public static double deriv(double pred, double y, double eps)
Parameters:
  pred - the predicted value
  y - the true value
  eps - the epsilon tolerance

public double getLoss(double pred, double y)

Specified by: getLoss in interface LossR
public double getDeriv(double pred, double y)

Specified by: getDeriv in interface LossR
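The derivative methods above return the conventional subgradient of the ε-insensitive loss: 0 anywhere inside the ε tube, and ±1 (the sign of the error) outside it. The sketch below illustrates that standard subgradient; it is an assumption-based illustration, not JSAT's source.

```java
// Sketch of the conventional subgradient of the epsilon-insensitive loss
// with respect to the prediction. Illustrative, not JSAT's implementation.
public class EpsilonDerivSketch {
    public static double deriv(double pred, double y, double eps) {
        double diff = pred - y;
        if (Math.abs(diff) <= eps) {
            return 0.0; // the loss is flat inside the epsilon tube
        }
        return Math.signum(diff); // +1 for over-prediction, -1 for under-prediction
    }

    public static void main(String[] args) {
        System.out.println(deriv(2.0, 1.0, 0.5)); // prints 1.0
        System.out.println(deriv(1.1, 1.0, 0.5)); // prints 0.0
    }
}
```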
public double getDeriv2(double pred, double y)

Specified by: getDeriv2 in interface LossR
public double getDeriv2Max()

Description copied from interface: LossFunc
Returns an upper bound on the maximum value of the second derivative. Double.NaN is a valid result. It is also possible for 0 and Double.POSITIVE_INFINITY to be valid results, and these must be checked for.

Specified by: getDeriv2Max in interface LossFunc

See Also: LossFunc.getDeriv2(double, double)
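Because getDeriv2Max is documented to possibly return Double.NaN, 0, or Double.POSITIVE_INFINITY, a caller must guard against those values before using the bound, for example as a step-size denominator. The helper below (the name safeStepSize is ours, purely hypothetical) sketches one such guard:

```java
// Hypothetical helper: turns a second-derivative upper bound into a
// usable step size, falling back when the bound is NaN, zero, or
// infinite (all documented as possible results of getDeriv2Max).
public class Deriv2MaxGuard {
    public static double safeStepSize(double deriv2Max, double fallback) {
        if (Double.isNaN(deriv2Max) || Double.isInfinite(deriv2Max) || deriv2Max == 0.0) {
            return fallback; // the bound is unusable as a denominator
        }
        return 1.0 / deriv2Max;
    }

    public static void main(String[] args) {
        System.out.println(safeStepSize(2.0, 0.1));        // prints 0.5
        System.out.println(safeStepSize(Double.NaN, 0.1)); // prints 0.1
    }
}
```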
public EpsilonInsensitiveLoss clone()
public double getRegression(double score)

Description copied from interface: LossR
Given the score value of a data point, this returns the correct numeric result.

Specified by: getRegression in interface LossR

Parameters:
  score - the score for a data point

Copyright © 2017. All rights reserved.