Interface | Description |
---|---|
GradientUpdater | This interface defines the method of updating some weight vector using a gradient and a learning rate. |
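The contract this description implies — mutate a weight vector in place, given a gradient and a learning rate — can be sketched as below. This is an illustrative shape only: setup(int) is mentioned in the SimpleSGD entry further down, but the parameter types and exact signatures here are assumptions, not the library's actual API.

```java
// Illustrative sketch of the GradientUpdater contract described above.
// Signatures are assumed (plain double[] instead of the library's own
// vector type); only setup(int) is explicitly mentioned in these tables.
public interface GradientUpdater {
    /** Allocate or reset any internal state for weights of dimension d. */
    void setup(int d);

    /** One update step: mutate the weights x in place using the
        gradient grad and the learning rate eta. */
    void update(double[] x, double[] grad, double eta);
}
```

Stateful updaters such as AdaGrad and RMSProp would keep per-feature accumulators sized in setup(int), which is why the SimpleSGD entry below points out that it alone can skip that call.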
Class | Description |
---|---|
AdaDelta | AdaDelta is inspired by AdaGrad and was developed for use primarily in neural networks. |
AdaGrad | AdaGrad provides an adaptive learning rate for each individual feature. See: Duchi, J., Hazan, E., & Singer, Y. (2011), Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. |
Adam | Adam is an adaptive learning rate method that augments per-feature scaling with momentum-style moment estimates; see Kingma & Ba. |
NAdaGrad | Normalized AdaGrad provides an adaptive learning rate for each individual feature, and is mostly scale invariant to the data distribution. |
RMSProp | RMSProp is an adaptive learning rate scheme proposed by Geoffrey Hinton. |
Rprop | The Rprop algorithm provides adaptive learning rates using only first-order information. |
SGDMomentum | Performs unaltered Stochastic Gradient Descent updates using either standard or Nesterov momentum (see the momentum sketch after this table). |
SimpleSGD | Performs unaltered Stochastic Gradient Descent updates, computing x = x − η · grad (a worked step is sketched after this table). Because SimpleSGD requires no internal state, it is not necessary to call SimpleSGD.setup(int). |
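The SGDMomentum entry names two variants. As a hedged illustration, the sketch below follows one common formulation of standard (heavy-ball) and Nesterov momentum over plain arrays; conventions differ between libraries, so this is not necessarily what SGDMomentum itself computes.

```java
import java.util.Arrays;

// One common formulation of momentum SGD (assumed, not the library's code):
// the velocity v accumulates gradients, and Nesterov applies a look-ahead
// correction before stepping.
public final class MomentumSketch {
    static void step(double[] x, double[] grad, double[] v,
                     double eta, double mu, boolean nesterov) {
        for (int i = 0; i < x.length; i++) {
            v[i] = mu * v[i] + grad[i];           // velocity update
            double d = nesterov
                     ? grad[i] + mu * v[i]        // Nesterov look-ahead correction
                     : v[i];                      // standard heavy-ball momentum
            x[i] -= eta * d;                      // x = x - eta * d
        }
    }

    public static void main(String[] args) {
        double[] x = {1.0, 1.0}, grad = {0.5, -0.5}, v = new double[2];
        step(x, grad, v, 0.1, 0.9, false);        // one heavy-ball step
        System.out.println(Arrays.toString(x));   // [0.95, 1.05]
    }
}
```

With mu = 0 both variants reduce to the plain SGD step shown next.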
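The SimpleSGD rule x = x − η · grad needs no internal state because each step depends only on the current gradient. A minimal from-scratch rendering of that formula (not the library's own code):

```java
import java.util.Arrays;

// Direct implementation of the update x = x - eta * grad, element-wise.
public final class SimpleSgdDemo {
    static void sgdStep(double[] x, double[] grad, double eta) {
        for (int i = 0; i < x.length; i++)
            x[i] -= eta * grad[i];
    }

    public static void main(String[] args) {
        double[] x    = {1.0, -2.0, 0.5};   // current weights
        double[] grad = {0.2, -0.4, 0.1};   // gradient at x
        sgdStep(x, grad, 0.1);              // eta = 0.1
        System.out.println(Arrays.toString(x)); // [0.98, -1.96, 0.49], up to rounding
    }
}
```

Since no per-feature state is kept, this is also why SimpleSGD.setup(int) is unnecessary, as noted in the table.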