The IDLmlFeedForwardNeuralNetwork class implements a Neural Network model that can be used for classification purposes.
Example
read_seeds_example_data, data, labels, $
N_ATTRIBUTES=nAttributes, N_EXAMPLES=nExamples, $
N_LABELS=nLabels, UNIQUE_LABELS=uniqueLabels
IDLmlShuffle, data, labels
Normalizer = IDLmlVarianceNormalizer(data)
Normalizer.Normalize, data
Part = IDLmlPartition({train:80, test:20}, data, labels)
Classifier = IDLmlFeedForwardNeuralNetwork([nAttributes, 5, $
nLabels], uniqueLabels)
Optimizer = IDLmloptAdam(0.1)
For i=0, 100 do loss = Classifier.Train(Part.train.data, $
LABELS=Part.train.labels, $
OPTIMIZER=optimizer)
confMatrix = IDLmlTestClassifier(Classifier, Part.test.data, $
Part.test.labels, ACCURACY=accuracy)
Print, 'Model accuracy:', accuracy
Print, Classifier.Classify(data[*,0])
Syntax
Result = IDLmlFeedForwardNeuralNetwork(LayerSizes, Outputs [, Keywords=Value] [, Properties=Value])
Arguments
LayerSizes
Specify a 1D array of layer sizes that defines the neural network. The first element must equal the number of attributes, and the last element must equal the number of possible outputs. Any intermediate elements define the sizes of the hidden layers. The number of layers equals the number of elements in this array, minus one.
Outputs
Specify the array of possible outputs, which can be an array of numbers or strings. Outputs can also be a scalar number; in that case, the possible output values will be the integers from 0 to Outputs minus one.
Keywords
ACTIVATION_FUNCTIONS (optional)
Set this keyword to an array of activation function objects (IDLmlaf*), one per layer. If specified, the number of elements in this 1D array must equal the number of layers (that is, the number of elements in the LayerSizes array, minus one). By default, every layer uses an IDLmlafLogistic activation function, except the last layer, which uses IDLmlafSoftmax.
LAMBDA (optional)
Specify the contribution of L2 regularization to the loss function. The default is 0. Increasing this value may help mitigate overfitting.
LOSS_FUNCTION (optional)
Specify the loss function object (IDLmllf*) for the model. This object defines the function that is minimized during training. The default is IDLmllfMeanSquaredError.
OUTPUT_LAYER (optional)
Set this keyword to the index of the layer to use as output of the neural network. By default, the last layer will be used as output.
SEED (optional)
If repeatability is desired (such as for testing), set this keyword to the seed variable used to randomly initialize the weights.
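Putting the arguments and keywords together, a constructor call might look like the following sketch; the layer sizes, labels, and LAMBDA value are illustrative, not recommendations:

```idl
; Four input attributes, one hidden layer of five units, and
; three possible string labels. SEED makes the random weight
; initialization repeatable; LAMBDA adds L2 regularization.
seed = 42
Classifier = IDLmlFeedForwardNeuralNetwork([4, 5, 3], $
  ['low', 'medium', 'high'], $
  LAMBDA=0.01, SEED=seed)
```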
Properties
ACTIVATION_FUNCTIONS
An array of activation functions that define the neural network.
CLASS_MAP
A hash that maps internal classification values to desired labels, if the model was defined using custom labels.
LAYER_SIZES
An array of layer sizes that define the neural network.
NATTRIBUTES
The number of input attributes the model requires.
NLAYERS
The number of layers of the neural network.
NOUTPUTS
The number of possible outputs.
OUTPUT_LAYER
The index of the layer used as output layer.
OUTPUTS
An array of possible outputs.
WEIGHTS
The current weight values that make up the neural network. Weights represent the strength of connection between the network units. Initially, they are a set of random numbers, but as the model is trained the weights will change, reflecting a model that is better trained.
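For example, the structure of a model can be inspected through the properties listed above (the values shown in the comments depend on how the model was created):

```idl
; Inspect the model structure through its properties.
Print, Classifier.LAYER_SIZES   ; e.g. [4, 5, 3]
Print, Classifier.NLAYERS       ; number of layers
Print, Classifier.NOUTPUTS      ; number of possible outputs
```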
Methods
IDLmlFeedForwardNeuralNetwork::Classify
The IDLmlFeedForwardNeuralNetwork::Classify method evaluates a number of features and returns the class value predicted for each example.
Syntax
Result = Obj->[IDLmlFeedForwardNeuralNetwork::]Classify(Features [, Keywords=Value])
Return Value
The method returns an array of class values that correspond to the data provided.
Arguments
Features
Specify an array of features of size n x m, where n is the number of attributes and m is the number of examples.
Keywords
LOSS (optional)
Set this keyword to a variable that will contain the loss result, which is a unitless number that indicates how closely the model fits the training data. Loss is defined as the total error computed by the loss function specified in the LOSS_FUNCTION parameter when creating the model, which defaults to Mean Squared Error if not specified.
SCORES (optional)
Set this keyword to an array of size m, where m is the number of examples, containing the actual scores associated with the features. Use this keyword to pass in the actual scores if you want to calculate the loss.
UNMAPPED_CLASSES (optional)
Set this keyword to a variable that will contain the actual internal class values for the classification results.
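For example, a trained model can classify one or more examples directly; this sketch reuses the Part variable from the example at the top of this topic:

```idl
; Classify all test examples at once; the result is one
; class value per example.
result = Classifier.Classify(Part.test.data)
Print, result

; Also retrieve the internal class values behind the mapped labels.
result = Classifier.Classify(Part.test.data, $
  UNMAPPED_CLASSES=internalValues)
```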
IDLmlFeedForwardNeuralNetwork::Evaluate
The IDLmlFeedForwardNeuralNetwork::Evaluate method evaluates a number of features and returns an array of scores that represent how closely each example matches each possible output.
Syntax
Result = Obj->[IDLmlFeedForwardNeuralNetwork::]Evaluate(Features [, Keywords=Value])
Return Value
This method returns the scores associated with the features. Scores represent the actual numerical outputs obtained by the model in response to a number of inputs.
Arguments
Features
Specify an array of features of size n x m, where n is the number of attributes and m is the number of examples.
Keywords
LOSS (optional)
Set this keyword to a variable that will contain the loss result, which is a unitless number that indicates how closely the model fits the training data. Loss is defined as the total error computed by the loss function specified in the LOSS_FUNCTION parameter when creating the model, which defaults to Mean Squared Error if not specified.
SCORES (optional)
Set this keyword to an array of size m, where m is the number of examples, containing the actual scores associated with the features. Use this keyword to pass in the actual scores if you want to calculate the loss.
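Evaluate returns the raw per-output scores rather than class values, which is useful for inspecting how confident the model is in each class. A sketch, reusing the Part variable from the example at the top of this topic:

```idl
; Each column of the result holds the scores for one example.
scores = Classifier.Evaluate(Part.test.data)
; With a softmax activation on the output layer (the default),
; the scores for one example can be read as per-class probabilities.
Print, scores[*, 0]
```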
IDLmlFeedForwardNeuralNetwork::Restore
The IDLmlFeedForwardNeuralNetwork::Restore static method restores the model from a file.
Syntax
Result = IDLmlFeedForwardNeuralNetwork.Restore(Filename)
Return Value
A reference to the object instance restored from the file.
Arguments
Filename
Specify the name of the file to restore.
Keywords
None
IDLmlFeedForwardNeuralNetwork::Save
The IDLmlFeedForwardNeuralNetwork::Save method saves the model to a file.
Syntax
Obj->[IDLmlFeedForwardNeuralNetwork::]Save, Filename
Arguments
Filename
Specify the name of the saved file.
Keywords
None
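Save and Restore are typically used together to persist a trained model across sessions; the file name below is illustrative:

```idl
; Persist the trained model to disk...
Classifier.Save, 'my_network.sav'
; ...then recover it later with the static Restore method.
Restored = IDLmlFeedForwardNeuralNetwork.Restore('my_network.sav')
Print, Restored.Classify(data[*, 0])
```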
IDLmlFeedForwardNeuralNetwork::Train
The IDLmlFeedForwardNeuralNetwork::Train method performs training on the model and returns the current loss of the neural network model. Training is an iterative process, and it can take tens or hundreds of calls to the Train method until the model becomes fully trained. Check the loss returned by this method on each iteration; once it converges to a low, stable value, the model has been trained.
Syntax
Result = Obj->[IDLmlFeedForwardNeuralNetwork::]Train(Features [, Keywords=Value])
Return Value
This method returns the loss, which is a unitless number that indicates how closely the model fits the training data. Loss is defined as the total error computed by the loss function specified in the LOSS_FUNCTION parameter when creating the model, which defaults to Mean Squared Error if not specified.
Arguments
Features
Specify an array of features of size n x m, where n is the number of attributes and m is the number of examples.
Keywords
CALLBACK_FUNCTION (optional)
Set this keyword to a string with the name of an IDL function to be called on each training iteration. The callback function must accept two arguments: loss and state. The callback function must return 1 (!true) if the training should perform another iteration, or 0 (!false) if it should stop training.
LABELS (optional)
An array of size m, where m is the number of examples, containing the actual labels associated with the features.
OPTIMIZER (optional)
Specify the optimizer object (IDLmlopt*) used in training. The optimizer will adjust the learning process to try to converge to a solution.
SCORES (optional)
An array of size m, where m is the number of examples, containing the actual scores associated with the features.
TRAIN_STATE (optional)
Specify optional user data to provide for the callback function.
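The keywords above can be combined to drive training from a callback instead of an explicit loop; a sketch, in which the function name and loss threshold are illustrative:

```idl
; Callback invoked on each training iteration. Returning
; !true requests another iteration; !false stops training.
function my_training_callback, loss, state
  Print, 'Current loss:', loss
  return, loss gt 0.01
end

; Train until the callback stops the iterations.
loss = Classifier.Train(Part.train.data, $
  LABELS=Part.train.labels, $
  OPTIMIZER=Optimizer, $
  CALLBACK_FUNCTION='my_training_callback')
```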
Version History
See Also
IDLmlAutoEncoder, IDLmlKMeans, IDLmlSoftmax, IDLmlSupportVectorMachineClassification, IDLmlSupportVectorMachineRegression, IDLmlTestClassifier