Package | Description |
---|---|
jsat.classifiers.linear | |
jsat.classifiers.neuralnetwork | |
jsat.math.optimization.stochastic | |
Class and Description |
---|
GradientUpdater: This interface defines the method of updating some weight vector using a gradient and a learning rate. (A usage sketch follows this table.) |
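The table above only names the interface, so here is a minimal sketch of how a linear learner might drive any `GradientUpdater` implementation. The `setup(int)` call is confirmed by the `SimpleSGD` entry further down; the `update(Vec, Vec, double)` signature, the in-place modification of the weight vector, and the helper class name are assumptions inferred from the interface description.

```java
// Sketch only: update(Vec, Vec, double) is an assumed signature based on the
// description "updating some weight vector using a gradient and a learning rate".
import jsat.linear.Vec;
import jsat.math.optimization.stochastic.GradientUpdater;

class SgdLoopSketch
{
    static void train(GradientUpdater updater, Vec w, Iterable<Vec> gradients, double eta)
    {
        updater.setup(w.length());        // allocate any per-coordinate internal state
        for (Vec grad : gradients)        // one stochastic step per observed gradient
            updater.update(w, grad, eta); // w is assumed to be modified in place
    }
}
```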
Class and Description |
---|
GradientUpdater: This interface defines the method of updating some weight vector using a gradient and a learning rate. |
Class and Description |
---|
AdaDelta: AdaDelta is inspired by AdaGrad and was developed for use primarily in neural networks. |
AdaGrad: AdaGrad provides an adaptive learning rate for each individual feature. See: Duchi, J., Hazan, E., & Singer, Y. |
Adam |
GradientUpdater: This interface defines the method of updating some weight vector using a gradient and a learning rate. |
NAdaGrad: Normalized AdaGrad provides an adaptive learning rate for each individual feature, and is mostly scale invariant to the data distribution. |
RMSProp: RMSProp is an adaptive learning rate scheme proposed by Geoffrey Hinton. |
Rprop: The Rprop algorithm provides adaptive learning rates using only first order information. |
SGDMomentum: Performs unaltered Stochastic Gradient Descent updates using either standard or Nesterov momentum. |
SimpleSGD: Performs unaltered Stochastic Gradient Descent updates, computing x = x - η grad. Because SimpleSGD requires no internal state, it is not necessary to call SimpleSGD.setup(int). (A usage sketch comparing SimpleSGD and SGDMomentum follows this table.) |
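Since SimpleSGD documents the update x = x - η grad and notes that setup(int) is not required for it, the sketch below contrasts a single SimpleSGD step with an SGDMomentum step, where setup must be called first to allocate the momentum state. Only the class names and setup(int) come from the table; the update(Vec, Vec, double) call and the SGDMomentum(double) constructor taking a momentum coefficient are assumptions.

```java
// Sketch only: method signatures and the SGDMomentum constructor are assumed.
import jsat.linear.DenseVector;
import jsat.linear.Vec;
import jsat.math.optimization.stochastic.SGDMomentum;
import jsat.math.optimization.stochastic.SimpleSGD;

public class UpdaterComparisonSketch
{
    public static void main(String[] args)
    {
        Vec grad = new DenseVector(new double[]{0.5, -1.0, 0.25}); // example gradient
        double eta = 0.1;                                          // learning rate

        // SimpleSGD: stateless, computes x = x - eta * grad, so no setup() needed
        Vec x1 = new DenseVector(3);
        new SimpleSGD().update(x1, grad, eta);

        // SGDMomentum: keeps per-coordinate velocity, so setup(d) must run first
        Vec x2 = new DenseVector(3);
        SGDMomentum momentum = new SGDMomentum(0.9); // assumed: momentum coefficient
        momentum.setup(x2.length());
        momentum.update(x2, grad, eta);

        System.out.println(x1); // plain SGD step, roughly -0.05, 0.1, -0.025
        System.out.println(x2); // momentum-adjusted step
    }
}
```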
Copyright © 2017. All rights reserved.