Use the Target Detection Workflow to locate objects within hyperspectral or multispectral images that match the signatures of in-scene regions. The targets may be materials or minerals of interest, or man-made objects.

See the following for help on a particular step of the workflow:

Workflow Tips


This workflow is not “modal,” meaning it will not block you from using other ENVI tools or working with additional data while it is open. However, be aware that if you close all of your files in the middle of the workflow, you might not be able to continue the workflow and will need to start over.

Navigating Workflow Steps

The number of steps provided in the workflow will depend on the input image data. For example, not all images will contain the data needed for every step; therefore, some steps will be skipped automatically.

Some steps are optional; in those cases, the Perform this step radio button is selected by default. To skip such a step and go to the next step in the workflow, select the Skip this step radio button, then click Next.

The timeline at the bottom of the workflow displays the order of steps available for the workflow and your data, and the title of your current location in the workflow flashes. The title is also an active link that you can click to jump backward or forward to a desired step in the workflow.

Preview/Display Result

Some workflow steps provide options to preview the settings and/or to display the processed result.

  • Enable the Preview check box to see a preview of the settings before you click OK and process the data. The preview is calculated only on the area in the view and uses the resolution level at which you are viewing the image. See Preview for details on the results. To preview a different area in your image, pan and zoom to the area of interest and re-enable the Preview option.
  • Enable the Display result check box to display the raster in the view when processing is complete.

Open Workflow in Modeler

On the last step of the workflow, the Open Workflow in Modeler link will take your full workflow (the exact data, choices, and parameter values that you selected) and create a Model that can be manipulated in the ENVI Modeler. For example, you could create a Model to perform batch processing with multiple similar input datasets.

Select Data


  1. From the Toolbox, select Workflows > Target Detection Workflow. The Select Data panel appears.
  2. Select an input file and perform optional spatial and spectral subsetting and/or masking, then click OK.
  3. Click Next.

Select Signatures


The Select Signatures panel appears after you select the input raster for target detection.

To perform target detection using the Mixture Tuned Target Constrained Interference Minimized Filter, Orthogonal Subspace Projection, or Target Constrained Interference Minimized Filter methods, you must select more than one target spectrum, or provide background spectra in addition to the target spectrum.

  1. Right-click on one of the following, then select the spectral signatures to import:

    • Target: Uses target signatures from a selected spectral library to find targets. Target spectral signatures are required for all target detection algorithms.

    • Background: Uses background signatures to find targets. Background spectral signatures are optional for the Mixture Tuned Target Constrained Interference Minimized Filter, Orthogonal Subspace Projection, and Target Constrained Interference Minimized Filter target detection methods, but they are not useful for other methods.

  2. Click Next.

Image Transform for Dimensionality Reduction


If you selected hyperspectral data for the input file, the Image Transform for Dimensionality Reduction panel appears. For non-hyperspectral data, the workflow proceeds directly to the Target Detection panel.

Hyperspectral data is largely redundant, with data from one band to the next generally changing very little. The Target Detection Workflow provides the optional step of dimensionality reduction for reducing redundant spectral data, which can shorten processing time and improve detection accuracy. Dimensionality reduction generally refers to applying a mathematical transformation to the input dataset to create a new dataset in which the output bands are a linear combination of every input band. The leading bands in the transformed data generally contain the unique content in the image, while the latter bands contain mostly noise and otherwise redundant information.
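The idea of a forward transform whose output bands are linear combinations of every input band can be sketched with PCA. The following NumPy example is illustrative only and is not ENVI's implementation; the function name `pca_transform` is an assumption for this sketch.

```python
import numpy as np

def pca_transform(cube):
    """Forward PCA on a (rows, cols, bands) image cube.

    Returns the transformed cube (same shape), with output bands ordered
    by decreasing eigenvalue, plus the eigenvalues themselves.
    """
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands).astype(np.float64)
    pixels -= pixels.mean(axis=0)            # mean-center each band
    cov = np.cov(pixels, rowvar=False)       # band-by-band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending order
    order = np.argsort(eigvals)[::-1]        # re-sort descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    # Each output band is a linear combination of every input band.
    transformed = pixels @ eigvecs
    return transformed.reshape(rows, cols, bands), eigvals

# Toy 10-band cube: leading output bands carry the largest eigenvalues.
rng = np.random.default_rng(0)
cube = rng.normal(size=(20, 20, 10))
pcs, eigvals = pca_transform(cube)
```

The leading (largest-eigenvalue) bands of `pcs` carry the unique image content, which is what the dimensionality reduction step later exploits.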

  1. Select one of the following from the drop-down list:

    • Forward Independent Component Analysis: Performs an independent component analysis (ICA) procedure to transform a set of mixed, random signals into components that are mutually independent.

    • Forward Minimum Noise Fraction: Performs a minimum noise fraction (MNF) transform to determine the inherent dimensionality of image data, to segregate noise in the data, and to reduce the computational requirements for subsequent processing.

    • Forward Principal Component Analysis: Performs a principal components analysis (PCA) transform to produce uncorrelated output bands, to segregate noise components, and to reduce the dimensionality of data sets.

  2. If you selected Forward Minimum Noise Fraction or Forward Principal Component Analysis, click Next. If you selected Forward Independent Component Analysis, you can optionally adjust additional settings in the fields described below.

  3. Enter a Sampling Percent between 0.0 and 100.0.

  4. Enter a maximum number of iterations between 100 and 32,767 in the Max. Iterations field.

  5. In the Max. Stabilization Iterations field, enter a value that is 32,767 or lower.

  6. From the Contrast Function drop-down list, select one of the following:

    • LogCosh

    • Kurtosis

    • Gaussian

  7. If you selected Gaussian or LogCosh as the Contrast Function, enter a coefficient value.

  8. Enter a value between 1.0e-08 and 0.1 in the Change Threshold field.

  9. Select Yes or No to Sort Output ICA bands by decreasing spatial coherence.

  10. Click Next.

Dimensionality Reduction


After you apply the transform, the Explained Variance plot and the Dimensionality Reduction panel appear.

The Explained Variance plot shows the sorted eigenvalue (or spatial coherence) contribution percentage of each band after the image transform. Use the plot for guidance in choosing the number of bands to keep. For example, suppose you have a 200-band image and the plot indicates that the first 14 bands contribute over 90% of the total variance. If this amount is suitable, enter 14 as the number of bands to keep. The analysis after this step will then use the first 14 bands of the transformed image instead of all 200 bands, reducing the dimensionality from 200 to 14.
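The band-selection rule just described can be sketched as a cumulative sum over the sorted eigenvalues. This is an illustrative helper, not an ENVI routine; the name `bands_to_keep` and the 0.90 target are assumptions for the example.

```python
import numpy as np

def bands_to_keep(eigvals, target=0.90):
    """Smallest number of leading bands whose cumulative explained
    variance reaches the target fraction. `eigvals` must be sorted
    in descending order, as in an Explained Variance plot."""
    frac = np.cumsum(eigvals) / np.sum(eigvals)
    return int(np.searchsorted(frac, target) + 1)

# Example: a steep eigenvalue spectrum, as is typical after MNF or PCA.
eigvals = np.array([50.0, 25.0, 10.0, 8.0, 4.0, 2.0, 1.0])
k = bands_to_keep(eigvals, target=0.90)  # first 4 bands reach 93%
```

Here the first four bands contribute 93% of the total variance, so 4 would be a reasonable Reduced Number of Bands.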

  1. In the Dimensionality Reduction panel, enter the Reduced Number of Bands.

  2. Click Next.

Target Detection


  1. In the Target Detection panel, select a target detection method from the drop-down list. The following algorithms are available:

    • Constrained Energy Minimization: Similar to Adaptive Coherence Estimator and Matched Filter, this algorithm does not require knowledge of all the endmembers within a scene. Using a specific constraint, it uses a finite impulse response (FIR) filter to pass through the desired target while minimizing its output energy resulting from backgrounds other than the desired targets. A correlation or covariance matrix is used to characterize the composite unknown background. In a mathematical sense, MF is a mean-centered version of CEM, where the data mean is subtracted from all pixel vectors.

    • Matched Filter: Finds the abundance of targets using a partial unmixing algorithm. This technique maximizes the response of the known spectra and suppresses the response of the composite unknown background, therefore matching the known signature. It provides a rapid means of detecting specific materials based on matches to target spectra and does not require knowledge of all the endmembers within an image scene.

    • Mixture Tuned Matched Filter: MTMF uses an MNF transform input file to perform MF, and it adds an infeasibility image to the results. The infeasibility image is used to reduce the number of false positives that are sometimes found when using MF alone. Pixels with a high infeasibility are likely to be MF false positives. Correctly mapped pixels will have an MF score above the background distribution around zero and a low infeasibility value. The infeasibility values are in noise sigma units that vary in DN scale with an MF score. This method is only available if you applied the Forward MNF transform in the Image Transform for Dimensionality Reduction step.

    • Mixture Tuned Target Constrained Interference Minimized Filter: MTTCIMF combines the Mixture Tuned technique and TCIMF target detector. It uses a Minimum Noise Fraction (MNF) transform input file to perform TCIMF, and it adds an infeasibility image to the results. The infeasibility image is used to reduce the number of false positives that are sometimes found when using TCIMF alone. The output of MTTCIMF is a set of rule images corresponding to TCIMF scores and a set of images corresponding to infeasibility values. The infeasibility results are in noise sigma units and indicate the feasibility of the TCIMF result. Correctly mapped pixels have a high TCIMF score and a low infeasibility value. If non-target spectra were specified, MTTCIMF can potentially reduce the number of false positives over MTMF. This method is only available if you provided more than one target spectrum, or provided background spectra along with the target spectrum, in the Select Signatures step, and you also applied the Forward MNF transform in the Image Transform for Dimensionality Reduction step.

    • Normalized Euclidean Distance Classification: NED is a physically-based spectral classification that calculates the distance between two vectors in the same manner as a Euclidean Distance method, but it normalizes the vectors first by dividing each vector by its mean.

    • Orthogonal Subspace Projection: OSP first designs an orthogonal subspace projector to eliminate the response of non-targets, then applies MF to match the desired target from the data. OSP is efficient and effective when target signatures are distinct. When the spectral angle between the target signature and the non-target signature is small, the attenuation of the target signal is dramatic and the performance of OSP could be poor. This method is only available if you provided more than one target spectrum, or you provided background spectra along with the target spectrum, in the Select Signatures step.

    • Adaptive Coherence Estimator: Derived from the Generalized Likelihood Ratio (GLR) approach, this method is invariant to relative scaling of input spectra and has a Constant False Alarm Rate (CFAR) with respect to such scaling. As with CEM and MF, ACE does not require knowledge of all the endmembers within a scene.

    • Spectral Angle Mapper Classification: SAM matches image spectra to reference target spectra in n dimensions. It compares the angle between the target spectrum (considered an n-dimensional vector, where n is the number of bands) and each pixel vector in n-dimensional space. Smaller angles represent closer matches to the reference spectrum. When used on calibrated data, this technique is relatively insensitive to illumination and albedo effects.

      Note: Lower values in SAM rule images represent closer matches to the target spectrum and higher probability of being a target.

    • Spectral Information Divergence Classification: Performs the Spectral Information Divergence (SID) classification.

    • Spectral Similarity Mapper Classification: SSM supervised classification is a physically-based spectral classification that combines elements of both the Spectral Angle Mapper and Minimum Distance classifier methods into a single measure.

    • Target Constrained Interference Minimized Filter: TCIMF detects the desired targets, eliminates non-targets, and minimizes interfering effects in one operation. TCIMF is constrained to eliminate the response of non-targets rather than minimizing their energy, just like other interferences in CEM. Previous studies show that if the spectral angle between the target and the non-target is significant, TCIMF can potentially reduce the number of false positives over CEM results. This method is only available if you provided more than one target spectrum, or you provided non-target spectra, in the Select Signatures step.

  2. If you selected one of these methods, click Next:

    • Orthogonal Subspace Projection

    • Adaptive Coherence Estimator

    • Spectral Similarity Mapper Classification

    • Target Constrained Interference Minimized Filter

    If you selected one of these methods, set the two parameters described below, then click Next:

    • Constrained Energy Minimization

    • Matched Filter

    • Normalized Euclidean Distance Classification

    • Spectral Angle Mapper Classification

    • Spectral Information Divergence Classification

  3. Select Yes or No for Use Subspace Background in the statistics calculation.

  4. If you enabled Use Subspace Background, specify the Background Threshold to use for calculating the subspace background statistics. This value indicates the fraction of the background in the anomalous image to use. The allowable range is 0.500 to 1.000 (the entire image).

  5. Click Next.
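The Spectral Angle Mapper measure described above is simple enough to sketch directly: each spectrum is treated as an n-dimensional vector and the angle between them is computed. This NumPy example is illustrative only (the function name `spectral_angle` is an assumption), not ENVI's implementation.

```python
import numpy as np

def spectral_angle(pixel, target):
    """Angle (radians) between a pixel spectrum and a target spectrum,
    each treated as an n-dimensional vector (n = number of bands).
    Smaller angles mean closer matches to the target."""
    cos = np.dot(pixel, target) / (np.linalg.norm(pixel) * np.linalg.norm(target))
    # Clip guards against tiny floating-point overshoot outside [-1, 1].
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

target = np.array([0.2, 0.4, 0.6, 0.8])
scaled = 3.0 * target                    # same shape, brighter illumination
other  = np.array([0.8, 0.6, 0.4, 0.2])  # different spectral shape
```

Scaling a spectrum leaves its angle unchanged (`spectral_angle(target, scaled)` is essentially zero), which is why SAM is relatively insensitive to illumination and albedo effects, while a differently shaped spectrum yields a larger angle.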

Threshold


The Threshold panel appears, with a suggested threshold value applied to the image. The slider spans the data range of the target detection result.

  1. To change the threshold value, drag the bar, or type the desired value in the text box. Regions in the image that are black will be masked as areas of no interest.
  2. Click Next.
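Conceptually, the threshold splits the rule image into target pixels and masked (no-interest) pixels. The sketch below assumes a SAM-style rule image, where lower values are closer matches; the threshold value here is purely illustrative, not ENVI's suggested value.

```python
import numpy as np

# A toy SAM rule image: smaller values mean closer matches to the target.
rule = np.array([[0.05, 0.40],
                 [0.90, 0.10]])

threshold = 0.15                 # illustrative value only
targets = rule <= threshold      # pixels kept as targets
masked = ~targets                # black / no-interest pixels
```

Note that the comparison direction depends on the method: for SAM-like rule images lower values are matches, while for score-based methods such as Matched Filter higher values are matches.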

Smooth


The Smooth panel appears. Use this panel to set parameters for aggregating and smoothing on a class raster.

  1. In the Minimum Pixels field, set the minimum region size, in pixels. Regions smaller than this size will be aggregated into an adjacent, larger region.

  2. Set the smooth Kernel Size, using an odd number.

  3. Click Next.
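For a binary class raster, kernel smoothing with an odd-sized median (majority) filter is one simple way to picture this step. The NumPy sketch below is illustrative only, not ENVI's smoothing algorithm, and the function name `median_smooth` is an assumption.

```python
import numpy as np

def median_smooth(classes, kernel_size=3):
    """Median (majority) smoothing of a binary class image with an
    odd-sized kernel; edges are padded by replication."""
    assert kernel_size % 2 == 1, "Kernel Size must be an odd number"
    k = kernel_size // 2
    padded = np.pad(classes, k, mode="edge")
    out = np.empty_like(classes)
    for r in range(classes.shape[0]):
        for c in range(classes.shape[1]):
            # Median of an odd number of 0/1 values is the majority vote.
            out[r, c] = np.median(padded[r:r + kernel_size, c:c + kernel_size])
    return out

# An isolated single-pixel detection is smoothed away by a 3x3 kernel.
speck = np.zeros((5, 5), dtype=np.uint8)
speck[2, 2] = 1
smoothed = median_smooth(speck, kernel_size=3)
```

The odd kernel size requirement mirrors the Kernel Size parameter above: an odd window has a well-defined center pixel and an unambiguous majority.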

Export Results


Select the products to create in the Export Results panel.

  1. Enable check boxes for the desired exports. A Filename field will appear for the exports you enable; enter a filename and location for each. The options are:

    • Export Classification Raster

    • Export Shapefile

    • Export ROIs

    • Export GeoJSON

    • Export KML

    • Export Rule Raster

  2. Click Finish.

See Also


Anomaly, Change, and Target Detection