Optimization algorithms help neural networks minimize an error function by modifying the model's internal learnable parameters. The IDLmloptQuickProp optimizer fits a quadratic approximation through the previous gradient step and the current gradient, then steps toward the minimum of that parabola, which is expected to lie close to the minimum of the loss function.
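The parabolic step described above can be sketched with the classic QuickProp update rule (Fahlman's formulation); this is an illustrative sketch of the technique, not IDLmloptQuickProp's internal implementation, and all variable names here are hypothetical:

```idl
; Sketch of the classic QuickProp weight update (illustrative only).
; A parabola is fit through the previous and current gradients, and the
; next step jumps to that parabola's minimum.
PRO quickprop_step_demo
  COMPILE_OPT IDL2
  prevStep = -0.5d   ; previous weight change, dw(t-1)
  prevGrad =  2.0d   ; gradient at the previous weights, g(t-1)
  currGrad =  1.0d   ; gradient at the current weights, g(t)
  ; Parabola minimum: dw(t) = dw(t-1) * g(t) / (g(t-1) - g(t))
  step = prevStep * currGrad / (prevGrad - currGrad)
  PRINT, 'QuickProp step: ', step
END
```

In this example the gradient has halved after the previous step, so the rule repeats the same step, landing at the estimated minimum of the parabola.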
Optimizer = IDLmloptQuickProp(0.1)
Print, Optimizer(0.1, 0.1, 0.1)
Note: Although an optimizer can be used as a standalone object as shown above, the more common scenario is to pass it to the IDLmlFeedForwardNeuralNetwork::Train() method via the OPTIMIZER keyword.
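A schematic sketch of that common scenario follows. The OPTIMIZER keyword is documented above; the network construction arguments, the LABELS keyword, and the variable names (features, labels) are illustrative assumptions, so consult the IDLmlFeedForwardNeuralNetwork documentation for the exact signatures:

```idl
; Hypothetical training loop passing a QuickProp optimizer to Train().
Optimizer = IDLmloptQuickProp(0.1)
Network = IDLmlFeedForwardNeuralNetwork([4, 8, 3], ['a', 'b', 'c'])  ; assumed layout
FOR i = 1, 100 DO BEGIN
  Loss = Network.Train(features, LABELS=labels, OPTIMIZER=Optimizer)
ENDFOR
```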
Syntax

Optimizer = IDLmloptQuickProp()

Result = Optimizer(LearningRate)
Arguments

LearningRate

Specify the initial learning rate to use as a starting point. If this value is too small, convergence may be slow; if it is too large, the optimizer can overshoot and miss the optimal solution.
See Also

IDLmloptAdam, IDLmloptGradientDescent, IDLmloptMomentum, IDLmloptRMSProp