Optimization algorithms are used by neural networks to help minimize an error function by modifying the model’s internal learnable parameters. Compared to IDLmloptGradientDescent, the IDLmloptRMSProp optimizer attempts to accelerate learning by adapting the step size in each dimension according to how much that dimension contributes to the error. The Gamma argument specifies how far back the system looks when estimating that error.
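
To make the idea concrete, the following is a minimal sketch of this per-dimension update written in plain IDL. It assumes the standard RMSProp formulation; the function name, variable names, and the small epsilon term are illustrative assumptions, not the class's actual internals.

; Illustrative sketch only, not the library's internal implementation.
; A running average of each dimension's squared gradient, decayed by gamma,
; rescales the step so dimensions with larger accumulated error take smaller steps.
function rmsprop_step_sketch, param, grad, meanSq, learningRate, gamma
  compile_opt idl2
  ; Blend the previous estimate (weight gamma) with the newest squared gradient
  meanSq = gamma * meanSq + (1 - gamma) * grad^2
  ; Divide the step by the root of the running average; epsilon avoids division by zero
  return, param - learningRate * grad / sqrt(meanSq + 1e-8)
end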

Example


compile_opt idl2
; Create an RMSProp optimizer with a learning rate of 0.1 and a gamma of 0.1
Optimizer = IDLmloptRMSProp(0.1, 0.1)
; Invoke the optimizer directly and print the result
Print, Optimizer(0.1, 0.1, 0.1)

Note: Though the above is how an optimizer can be used as a standalone object, the common scenario is to pass it to the IDLmlFeedForwardNeuralNetwork::Train() method via the OPTIMIZER keyword.
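
For context, a hedged sketch of that common scenario follows; the layer sizes, the training arrays, and the exact Train() call are illustrative assumptions rather than documented signatures, so consult the IDLmlFeedForwardNeuralNetwork documentation for the actual interface.

compile_opt idl2
; Illustrative only: layer sizes and the features/scores arrays are placeholders
optimizer = IDLmloptRMSProp(0.1, 0.9)
network = IDLmlFeedForwardNeuralNetwork([4, 8, 3])
; Pass the optimizer to training via the OPTIMIZER keyword, as noted above
result = network.Train(features, SCORES=scores, OPTIMIZER=optimizer)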

Syntax


Optimizer = IDLmloptRMSProp(LearningRate, Gamma)

Arguments


LearningRate

Specify the initial learning rate to use. Note that if this value is too small, convergence may be slow, and if it is too large, the optimizer can overshoot and miss the optimal solution.

Gamma

Specify how far back the system should look when estimating the error. A value of 0.9 (the default) means that the error estimate from the previous step contributes 90% of the information about where to take the next step.
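
As a simple illustration (assuming the standard RMSProp moving average rather than any documented internal formula), a Gamma of 0.9 blends 90% of the previous running error estimate with 10% of the newest squared gradient:

; Illustrative only: previous estimate 0.04, current gradient 0.5, gamma 0.9
print, 0.9 * 0.04 + 0.1 * 0.5^2   ; approximately 0.061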

Keywords


None

Version History


8.7.1

Introduced

See Also


IDLmloptAdam, IDLmloptGradientDescent, IDLmloptMomentum, IDLmloptQuickProp