Optimization algorithms are used by neural networks to minimize an error function by adjusting the model’s internal learnable parameters. Compared to IDLmloptGradientDescent, the IDLmloptMomentum optimizer attempts to accelerate learning by accumulating a moving average of past gradients, and it uses that accumulated velocity to determine the next parameter update.
Example
Compile_opt idl2
Optimizer = IDLmloptMomentum(0.1, 0.1)
Print, Optimizer(0.1, 0.1, 0.1)
Note: Although an optimizer can be used as a standalone object as shown above, the more common scenario is to pass it to the IDLmlFeedForwardNeuralNetwork::Train() method via the OPTIMIZER keyword.
Syntax
Optimizer = IDLmloptMomentum([LearningRate [, Mass]])
Result = Optimizer(LearningRate, Mass)
Arguments
LearningRate
Specify the initial learning rate. If this value is too small, convergence may be slow; if it is too large, the optimizer can overshoot and miss the optimal solution.
Mass
Specify how much weight to give to the previous learning step. The default value is 0.9.
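The roles of LearningRate and Mass can be illustrated with the classical momentum update rule. The sketch below is a language-agnostic illustration in Python, not the IDL implementation; the function name momentum_step and the toy objective are purely illustrative.

```python
def momentum_step(param, grad, velocity, learning_rate=0.1, mass=0.9):
    # Classical momentum: the velocity accumulates an exponentially
    # weighted sum of past gradients. Mass controls how much of the
    # previous step carries over; LearningRate scales the new gradient.
    velocity = mass * velocity - learning_rate * grad
    return param + velocity, velocity

# Toy usage: minimize f(x) = x^2, whose gradient is 2x, from x = 1.0.
x, v = 1.0, 0.0
for _ in range(100):
    x, v = momentum_step(x, 2.0 * x, v)
```

With Mass = 0 the update reduces to plain gradient descent; values near the default of 0.9 let successive steps reinforce each other along consistent gradient directions.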
Keywords
None
Version History
See Also
IDLmloptAdam, IDLmloptGradientDescent, IDLmloptQuickProp, IDLmloptRMSProp