Package | Description
---|---
jsat.classifiers.linear |
jsat.classifiers.neuralnetwork |
jsat.math.optimization.stochastic |
Modifier and Type | Method and Description
---|---
GradientUpdater | LinearSGD.getGradientUpdater()
void | LinearSGD.setGradientUpdater(GradientUpdater gradientUpdater) Sets the method that will be used to update the weight vectors given their gradient information.
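The two LinearSGD methods above can be combined to swap in a different update rule without touching the rest of the learner. A minimal sketch, assuming LinearSGD and AdaGrad both expose no-argument constructors (all other settings left at their defaults):

```java
import jsat.classifiers.linear.LinearSGD;
import jsat.math.optimization.stochastic.AdaGrad;

public class LinearSGDUpdaterExample
{
    public static void main(String[] args)
    {
        // Assumed no-argument constructors; adjust if your JSAT version differs
        LinearSGD learner = new LinearSGD();

        // Replace the default update rule with AdaGrad's per-feature adaptive rate
        learner.setGradientUpdater(new AdaGrad());

        // The configured updater can be read back through the getter
        System.out.println(learner.getGradientUpdater().getClass().getSimpleName());
    }
}
```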
Modifier and Type | Method and Description
---|---
GradientUpdater | SGDNetworkTrainer.getGradientUpdater()
void | SGDNetworkTrainer.setGradientUpdater(GradientUpdater updater) Sets the gradient updater that will be used when updating the weight matrices and bias terms.
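SGDNetworkTrainer follows the same pattern for its weight matrices and bias terms. A brief sketch, assuming the trainer has a no-argument constructor (layer sizes, activation functions, and training data are omitted here):

```java
import jsat.classifiers.neuralnetwork.SGDNetworkTrainer;
import jsat.math.optimization.stochastic.Adam;

public class NetworkUpdaterExample
{
    public static void main(String[] args)
    {
        // Assumed no-argument constructor; the remaining network setup is omitted
        SGDNetworkTrainer trainer = new SGDNetworkTrainer();

        // Adam will then drive the updates of both weight matrices and bias terms
        trainer.setGradientUpdater(new Adam());
    }
}
```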
Modifier and Type | Class and Description
---|---
class | AdaDelta AdaDelta is inspired by AdaGrad and was developed for use primarily in neural networks.
class | AdaGrad AdaGrad provides an adaptive learning rate for each individual feature. See: Duchi, J., Hazan, E., & Singer, Y.
class | Adam
class | NAdaGrad Normalized AdaGrad provides an adaptive learning rate for each individual feature, and is mostly scale invariant to the data distribution.
class | RMSProp RMSProp is an adaptive learning rate scheme proposed by Geoffrey Hinton.
class | Rprop The Rprop algorithm provides adaptive learning rates using only first-order information.
class | SGDMomentum Performs unaltered Stochastic Gradient Descent updates using either standard or Nesterov momentum.
class | SimpleSGD Performs unaltered Stochastic Gradient Descent updates, computing x = x - η grad. Because SimpleSGD requires no internal state, it is not necessary to call SimpleSGD.setup(int).
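Every class in this table implements GradientUpdater, so they are interchangeable as update rules. The sketch below applies a single update step directly; the update(Vec, Vec, double) signature and the DenseVector constructor are assumptions based on SimpleSGD's documented behavior of computing x = x - η grad, not shown verbatim on this page:

```java
import jsat.linear.DenseVector;
import jsat.linear.Vec;
import jsat.math.optimization.stochastic.AdaDelta;
import jsat.math.optimization.stochastic.GradientUpdater;
import jsat.math.optimization.stochastic.SimpleSGD;

public class UpdaterStepSketch
{
    public static void main(String[] args)
    {
        // Parameter vector x and its gradient; update(x, grad, eta) is assumed
        // to modify x in place by one step
        Vec x = new DenseVector(new double[]{1.0, -2.0, 0.5});
        Vec grad = new DenseVector(new double[]{0.1, -0.4, 0.2});
        double eta = 0.1;

        GradientUpdater plain = new SimpleSGD();
        plain.update(x, grad, eta); // x = x - η grad; no setup(int) needed

        // Stateful updaters such as AdaDelta need setup(d) before the first update
        GradientUpdater ada = new AdaDelta();
        ada.setup(x.length());
        ada.update(x, grad, eta);
    }
}
```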
Modifier and Type | Method and Description
---|---
GradientUpdater | GradientUpdater.clone()
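Because clone() returns a GradientUpdater, a configured updater can serve as a prototype: each copy keeps its own accumulated state, for example to give each weight vector an independent updater. A small sketch, assuming RMSProp has a no-argument constructor:

```java
import jsat.math.optimization.stochastic.GradientUpdater;
import jsat.math.optimization.stochastic.RMSProp;

public class UpdaterCloneSketch
{
    public static void main(String[] args)
    {
        // Assumed no-argument constructor for RMSProp
        GradientUpdater prototype = new RMSProp();

        // Each clone carries its own accumulated statistics
        GradientUpdater copy = prototype.clone();
        System.out.println(copy != prototype); // true: independent object
    }
}
```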