Neural networks use optimization algorithms to minimize an error function by adjusting the model’s internal learnable parameters. Compared to IDLmloptGradientDescent, the IDLmloptRMSProp optimizer attempts to accelerate learning by adapting the step size of each dimension to how much that dimension contributes to the error. The Gamma argument controls how far back the optimizer looks when estimating the error.
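The per-dimension adaptation can be sketched in Python. This is a minimal illustration of the standard RMSProp update rule, not IDL’s internal implementation; the function name and the small epsilon stabilizer are assumptions:

```python
import math

def rmsprop_step(params, grads, cache, learning_rate=0.1, gamma=0.9, eps=1e-8):
    """One RMSProp update (illustrative sketch only).

    `cache` holds a running average of squared gradients, one entry per
    parameter; `gamma` controls how far back that average looks."""
    for i, g in enumerate(grads):
        # Decay the old squared-gradient estimate by gamma, mix in the new one.
        cache[i] = gamma * cache[i] + (1.0 - gamma) * g * g
        # Dimensions with consistently large gradients take smaller steps.
        params[i] -= learning_rate * g / (math.sqrt(cache[i]) + eps)
    return params, cache
```

For example, repeatedly applying this step to f(x) = x² (gradient 2x) drives x toward the minimum at 0.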


Example

compile_opt idl2
Optimizer = IDLmloptRMSProp(0.1, 0.1)
Print, Optimizer(0.1, 0.1, 0.1)

Note: Although the example above uses the optimizer as a standalone object, the more common scenario is to pass it to the IDLmlFeedForwardNeuralNetwork::Train() method via the OPTIMIZER keyword.


Syntax

Optimizer = IDLmloptRMSProp([LearningRate [, Gamma]])



Arguments

LearningRate

Specify the initial learning rate to use as a starting point. If this value is too small, convergence may be slow; if it is too large, the optimizer can step past and miss the optimal solution.


Gamma

Specify how far back the optimizer should look when estimating the error. A value of 0.9 (the default) means that the error estimate from the previous step contributes 90% of the information about where to take the next step.
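Assuming the standard RMSProp exponentially weighted moving average (a sketch for intuition; the helper name is hypothetical), Gamma is the weight given to the running error estimate at each step, so larger values make the estimate look further back:

```python
def moving_average(values, gamma=0.9):
    """Exponentially weighted running estimate: each update keeps `gamma`
    (e.g. 90%) of the previous estimate and adds (1 - gamma) of the newest
    value, so older contributions decay geometrically."""
    estimate = 0.0
    for v in values:
        estimate = gamma * estimate + (1.0 - gamma) * v
    return estimate
```

With gamma = 0.9, a single spike after a run of zeros moves the estimate by only 10% of the spike’s size; with gamma = 0.5 it moves by 50%, meaning the estimate reacts faster but remembers less history.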



Version History



See Also

IDLmloptAdam, IDLmloptGradientDescent, IDLmloptMomentum, IDLmloptQuickProp