Optimization algorithms are used by neural networks to minimize an error function by modifying the model’s internal learnable parameters. The IDLmloptAdam (Adaptive Moment Estimation) optimizer combines the ideas behind two simpler optimizers, IDLmloptMomentum and IDLmloptRMSProp, by maintaining exponential moving averages of both the gradients and the squared gradients. The Beta1 and Beta2 arguments control how far back in the past these averages look when predicting the step size for the future.
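
For reference, the update rule that Adam implements can be sketched in a few lines of plain IDL. This sketch is for illustration only and is not part of the IDLmloptAdam API; the variable names, data values, and the epsilon term are assumptions.

compile_opt idl2
; Illustration only: one Adam update step written out by hand
beta1 = 0.9 & beta2 = 0.999 & eps = 1e-8 & learningRate = 0.1
weights = [0.5d, -0.3d] & gradient = [0.1d, 0.2d]   ; hypothetical parameters and gradient
m = 0d & v = 0d & t = 1                             ; moment estimates and step counter
m = beta1 * m + (1 - beta1) * gradient              ; running average of gradients (Momentum idea)
v = beta2 * v + (1 - beta2) * gradient^2            ; running average of squared gradients (RMSProp idea)
mHat = m / (1 - beta1^t)                            ; bias corrections for the first few steps
vHat = v / (1 - beta2^t)
weights -= learningRate * mHat / (SQRT(vHat) + eps) ; take the step
Print, weights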

Example


compile_opt idl2
; Create an Adam optimizer with LearningRate=0.1, Beta1=0.9, and Beta2=0.999
Optimizer = IDLmloptAdam(0.1, 0.9, 0.999)
; Invoke the optimizer object directly as a function
Print, Optimizer(0.1, 0.1, 0.1)

Note: Although the above shows how an optimizer can be used as a standalone object, the common scenario is to pass it to the IDLmlFeedForwardNeuralNetwork::Train() method via the OPTIMIZER keyword, as sketched below.
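
For example, a training loop using the OPTIMIZER keyword might look like the following sketch. The network definition, the data layout, and the LABELS keyword are illustrative assumptions; see the IDLmlFeedForwardNeuralNetwork documentation for the actual training workflow.

compile_opt idl2
; Hypothetical training data; the [attributes, examples] layout is an assumption
features = RANDOMU(seed, 4, 30)
labels = (['a', 'b', 'c'])[FIX(RANDOMU(seed, 30) * 3)]
; Create the optimizer and a small network (constructor arguments are assumed)
Optimizer = IDLmloptAdam(0.1, 0.9, 0.999)
Net = IDLmlFeedForwardNeuralNetwork([4, 8, 3], ['a', 'b', 'c'])
; Pass the optimizer to Train() via the OPTIMIZER keyword
for i = 1, 100 do result = Net.Train(features, LABELS=labels, OPTIMIZER=Optimizer)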

Syntax


Optimizer = IDLmloptAdam([LearningRate [, Beta1 [, Beta2]]])

Result = Optimizer(LearningRate, Beta1, Beta2)

Arguments


LearningRate

Specify the initial learning rate to use as a starting point. If this value is too small, convergence may be slow; if it is too large, the optimizer can overshoot and miss the optimal solution.

Beta1

Specify how far back in the past to look when predicting the step size: Beta1 is the decay rate of the exponential moving average of past gradients (the momentum term). It plays a role similar to the Gamma argument in IDLmloptRMSProp. A good starting value is 0.9 (the default).

Beta2

Specify how far back in the past to look when predicting the step size: Beta2 is the decay rate of the exponential moving average of past squared gradients (the RMSProp term), as illustrated by the rule of thumb below. A good starting value is 0.999 (the default).
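
As a rough rule of thumb (an approximation not stated on this page), a decay rate Beta corresponds to averaging over roughly the last 1/(1 - Beta) gradients, which is what "how far back in the past to look" means in practice:

; Approximate memory of the moving averages (illustration only)
Print, 1 / (1 - 0.9d)      ; Beta1 = 0.9   -> roughly the last 10 gradients
Print, 1 / (1 - 0.999d)    ; Beta2 = 0.999 -> roughly the last 1000 gradients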

Keywords


None

Version History


8.7.1

Introduced

See Also


IDLmloptGradientDescent, IDLmloptMomentum, IDLmloptQuickProp, IDLmloptRMSProp