The IDLmlTestClassifier function computes a confusion matrix and other metrics that indicate how well a model trained as a classifier performed against the test data.

## Example

```
read_seeds_example_data, data, labels, $
  N_ATTRIBUTES=nAttributes, N_EXAMPLES=nExamples, $
  N_LABELS=nLabels, UNIQUE_LABELS=uniqueLabels
IDLmlShuffle, data, labels
Normalizer = IDLmlVarianceNormalizer(data)
Normalizer.Normalize, data
Part = IDLmlPartition({train:80, test:20}, data, labels)
Classifier = IDLmlSoftmax(nAttributes, uniqueLabels)
for i=0, 100 do loss = Classifier.Train(part.train['data'], $
  LABELS=part.train['labels'])
confMatrix = IDLmlTestClassifier(Classifier, part.test['data'], $
  part.test['labels'], ACCURACY=accuracy)
print, 'Model accuracy:', accuracy
```

## Syntax

*Result* = IDLmlTestClassifier(*Model*, *Features*, *Labels* [, Keywords=*Value*])

## Arguments

### Model

An object that inherits IDLmlModel and has been trained to classify data.

### Features

Specify an array of features of size *n* x *m*, where *n* is the number of attributes and *m* is the number of examples.

### Labels

An array of size *m*, where *m* is the number of examples, containing the actual (truth) labels associated with the features.

## Keywords

### ACCURACY (optional)

Set this keyword to a variable that will contain the overall accuracy of the confusion matrix. The overall accuracy is calculated by summing the number of correctly classified values and dividing by the total number of values. The correctly classified values are located along the upper-left to lower-right diagonal of the confusion matrix. The total number of values is the number of values in either the truth or predicted-value arrays.
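As a language-agnostic sketch of this calculation (Python here rather than IDL, using a small hypothetical confusion matrix), accuracy is the sum of the main diagonal divided by the grand total:

```python
# Hypothetical 2-class confusion matrix; rows are predicted classes,
# columns are truth classes, per the conventions described on this page.
conf_matrix = [[50, 10],
               [5, 35]]

# Correctly classified values lie on the upper-left to lower-right diagonal.
correct = sum(conf_matrix[i][i] for i in range(len(conf_matrix)))
total = sum(sum(row) for row in conf_matrix)

accuracy = correct / total
print(accuracy)  # 0.85
```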

### COLUMN_TOTALS (optional)

Set this keyword to a variable that will contain the column totals in the confusion matrix, which corresponds to the number of truth values in each class.

### COMMISSION (optional)

Set this keyword to a variable that will contain the errors of commission from the confusion matrix. Errors of commission represent the fraction of values that were predicted to be in a class but do not belong to that class. They are a measure of false positives. Errors of commission are shown in the rows of the confusion matrix, except for the values along the diagonal. The result is an array with one value per class.
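A sketch of the arithmetic (Python, hypothetical matrix): for each class, the off-diagonal values in that class's row are the false positives, and commission is their fraction of the row total:

```python
# Hypothetical confusion matrix; rows are predicted classes, columns are truth.
conf_matrix = [[50, 10],
               [5, 35]]

commission = []
for i, row in enumerate(conf_matrix):
    # Off-diagonal values in row i were predicted as class i but belong elsewhere.
    false_positives = sum(row) - row[i]
    commission.append(false_positives / sum(row))
print(commission)  # one value per class, e.g. ~0.167 and 0.125
```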

### F1 (optional)

Set this keyword to a variable that will contain the F1 score, which is the harmonic mean of the Precision and Recall values.
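The harmonic mean can be sketched as follows (Python, with hypothetical per-class precision and recall values consistent with the example matrix used elsewhere on this page):

```python
# Hypothetical per-class precision and recall values.
precision = [50/60, 35/40]
recall = [50/55, 35/45]

# F1 is the harmonic mean of precision and recall, computed per class.
f1 = [2 * p * r / (p + r) for p, r in zip(precision, recall)]
print(f1)  # one value per class
```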

### KAPPA (optional)

Set this keyword to a variable that will contain the kappa coefficient. The kappa coefficient measures the agreement between classification and truth values. A kappa value of 1 represents perfect agreement, while a value of 0 represents no agreement.
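One common formulation of Cohen's kappa compares the observed agreement (the overall accuracy) against the agreement expected by chance, estimated from the row and column totals. A sketch under that assumption (Python, hypothetical matrix):

```python
# Hypothetical confusion matrix; rows are predicted classes, columns are truth.
conf = [[50, 10],
        [5, 35]]
n = sum(sum(row) for row in conf)
row_totals = [sum(row) for row in conf]
col_totals = [sum(row[j] for row in conf) for j in range(len(conf))]

# Observed agreement: the overall accuracy.
p_o = sum(conf[i][i] for i in range(len(conf))) / n
# Chance agreement: expected from the marginal (row and column) totals.
p_e = sum(rt * ct for rt, ct in zip(row_totals, col_totals)) / n**2

kappa = (p_o - p_e) / (1 - p_e)
print(kappa)  # 1 = perfect agreement, 0 = no agreement beyond chance
```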

### OMISSION (optional)

Set this keyword to a variable that will contain the errors of omission. Errors of omission represent the fraction of values that belong to a class but were predicted to be in a different class. They are a measure of false negatives. Errors of omission are shown in the columns of the confusion matrix, except for the values along the main diagonal. The result is an array with one value per class.
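Mirroring the commission sketch, omission works column-wise (Python, hypothetical matrix): for each class, the off-diagonal values in that class's column are the false negatives, expressed as a fraction of the column total:

```python
# Hypothetical confusion matrix; rows are predicted classes, columns are truth.
conf_matrix = [[50, 10],
               [5, 35]]
n_classes = len(conf_matrix)

omission = []
for j in range(n_classes):
    col_total = sum(conf_matrix[i][j] for i in range(n_classes))
    # Off-diagonal values in column j belong to class j but were predicted otherwise.
    false_negatives = col_total - conf_matrix[j][j]
    omission.append(false_negatives / col_total)
print(omission)  # one value per class
```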

### PRECISION (optional)

Set this keyword to a variable that will contain the precision value, representing the probability that a value predicted to be in a certain class really is that class. The probability is based on the fraction of correctly predicted values to the total number of values predicted to be in a class. The result is an array with one value per class.
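Given the row convention described above, precision for each class is the diagonal value divided by its row total (equivalently, one minus the commission error). A sketch (Python, hypothetical matrix):

```python
# Hypothetical confusion matrix; rows are predicted classes, columns are truth.
conf_matrix = [[50, 10],
               [5, 35]]

# Correct predictions for class i divided by all predictions of class i.
precision = [row[i] / sum(row) for i, row in enumerate(conf_matrix)]
print(precision)  # one value per class
```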

### RECALL (optional)

Set this keyword to a variable that will contain the recall value from the confusion matrix, representing the fraction of values in each class that were correctly classified. The result is an array with one value per class.
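Given the column convention described above, recall for each class is the diagonal value divided by its column total (equivalently, one minus the omission error). A sketch (Python, hypothetical matrix):

```python
# Hypothetical confusion matrix; rows are predicted classes, columns are truth.
conf_matrix = [[50, 10],
               [5, 35]]
n_classes = len(conf_matrix)

# Correct predictions for class j divided by all truth values of class j.
recall = []
for j in range(n_classes):
    col_total = sum(conf_matrix[i][j] for i in range(n_classes))
    recall.append(conf_matrix[j][j] / col_total)
print(recall)  # one value per class
```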

### ROW_TOTALS (optional)

Set this keyword to a variable that will contain the row totals in the confusion matrix, which corresponds to the number of values predicted to be in each class.

## Version History

| Version | Description |
| ------- | ----------- |
| 8.7.1   | Introduced  |