
Workflow Tools in ENVI

Introduction

The growing reliance on geospatial imagery makes it increasingly important for users to get critical information from imagery. Tools and processes that help easily and accurately extract that information are essential, whether it is needed for intelligence, scientific, or planning purposes. Today’s imagery analysts and scientists in a wide variety of disciplines choose ENVI, the premier software solution for extracting information from geospatial imagery. ENVI provides accurate, scientifically based procedures that follow the most recent advances in image processing technology, and is well known as a leader in spectral image analysis.

Now, ENVI provides the tools you need to make these processes and procedures easy, regardless of your image processing experience or professional background, including automated workflows and wizard-based processes that walk you through previously complex tasks, step-by-step.

A major focus of recent and future ENVI development is providing solution-based tools that are built as workflows, which provide step-by-step processes and instructions to help you get the information you need from your imagery quickly and easily. Users of all ability levels, from novice to advanced, can learn the workflow tools and quickly produce results. The workflow tools have been optimized for a variety of data, and results can be sent to Geographic Information System (GIS) databases, Google Earth, or pushed directly to ArcGIS for map and report creation.

In the latest version of ENVI, 4.6, you’ll find even more automated workflow tools to support a variety of popular image processing tasks, whether you’re using panchromatic, multispectral, or hyperspectral data. These efficient workflow tools include the ENVI Feature Extraction Module, the SPEAR Tools, and the Target Detection Wizard.

ENVI Feature Extraction Module

The ENVI Feature Extraction Module (ENVI Fx), a recently introduced add-on module to ENVI, allows you to quickly and easily extract features from geospatial imagery. Today’s GIS analysts, imagery analysts, and image scientists increasingly need to find and identify features of interest within geospatial imagery. However, manually locating and digitizing these features is often complex and time-consuming, and limited spectral content often reduces the accuracy of standard classification methods.

ENVI Fx can be used to extract a wide variety of features such as vehicles, buildings, roads, bridges, rivers, lakes, and fields, and is optimized for extracting information from high-resolution panchromatic and multispectral imagery based on spatial, spectral, and texture characteristics. Additional datasets, such as a raster LiDAR elevation dataset, can be added to the workflow to enhance results. It is also designed in a user-friendly and reproducible fashion so you can spend less time understanding processing details and more time interpreting results.

The ENVI Feature Extraction Module is ideal for:

  • Finding and counting particular features across large images
  • Adding new vector layers to geodatabases
  • Classifying images as outputs to be used in reports or analyses
  • Replacing or accelerating manual digitization processes

Traditional remote sensing classification techniques and approaches to feature extraction are pixel-based, using the spectra of each pixel to classify imagery. The accuracy of this approach typically depends upon the availability of a large number of spectral bands, limiting the types of imagery that can be used. With the high-resolution panchromatic and multispectral imagery commonly in use today, an object-based method offers more flexibility in the types of features that can be extracted.

The ENVI Feature Extraction Module uses an object-based approach to identify and define features, allowing users to get accurate results even with limited bands. An object is a region of interest with spatial, spectral (brightness and color), and/or texture characteristics that define the region. This method makes it an accurate tool for use with all kinds of imagery and data types, particularly pan and multi-spectral imagery.

The workflow of extracting features is the combined process of segmenting an image into regions of pixels, computing attributes for each region to create objects, and then classifying the objects (with rule-based or supervised classification) based on those attributes, to extract features (Figure 1). The workflow is designed to be helpful and intuitive, while allowing users to customize it to specific applications.

Figure 1
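The segmentation, attribute computation, and classification sequence described above can be sketched in a few lines of Python. The example below only illustrates the object-based idea, using SciPy’s connected-component labeling and made-up `intensity_threshold` and `min_area` rule parameters; it is not ENVI Fx’s actual segmentation or classification:

```python
import numpy as np
from scipy import ndimage

def extract_features(image, intensity_threshold=0.5, min_area=20):
    """Toy object-based extraction: segment, compute attributes, classify.

    The threshold and area rule are illustrative parameters, not ENVI Fx
    settings.
    """
    # 1. Segmentation: group bright pixels into connected regions.
    mask = image > intensity_threshold
    labels, n_regions = ndimage.label(mask)

    # 2. Attribute computation: area and mean brightness per region ("objects").
    objects = []
    for region_id in range(1, n_regions + 1):
        region = labels == region_id
        objects.append({
            "id": region_id,
            "area": int(region.sum()),
            "mean_intensity": float(image[region].mean()),
        })

    # 3. Rule-based classification: keep regions large enough to be features.
    return [o for o in objects if o["area"] >= min_area]

# Synthetic image: one large bright block and one tiny bright speck.
img = np.zeros((50, 50))
img[10:20, 10:20] = 0.9   # 100-pixel object
img[40, 40] = 0.9         # 1-pixel speck, rejected by the area rule
features = extract_features(img)
print(features)           # one object with area 100
```

In ENVI Fx the same three stages use far richer spatial, spectral, and texture attributes, but the overall flow is the same.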

The workflow consists of two primary steps: Find Objects and Extract Features. The Find Objects task consists of segmentation and object generation (Figure 2). When these steps are completed, users may optionally export the segments as vectors or perform an image classification. The image classification component consists of supervised or rule-based classification prior to exporting results to shapefiles and/or raster images.

Figure 2

One of the most exciting and innovative aspects of ENVI Fx is the ability to preview results at each step of the workflow. Fine-tune a few parameters and a preview portal shows the results of the adjustment on the fly. The portal window can be resized and moved around the image to confirm that the features of interest are being located in all areas of the scene (Figure 3).

This provides a significant time savings over traditional feature extraction methods. Users will be able to segment an image and quickly view the results to assess the accuracy of the segmentation and classification, rather than having to wait for the full image to be processed. The segmentation scale can be quickly adjusted and previewed as many times as needed prior to full image processing. Once accurate parameters have been established, the process can be automatically repeated on a collection of imagery.

Figure 3

The final steps of the workflow include classifying the objects. Users may interactively choose training data from the image, import ground truth data, or create rules to define the classes (Figure 4).

Figure 4

In summary, the ENVI Feature Extraction Module will allow analysts to extract features from pan and multi-spectral imagery - the most widely available imagery today. In addition, because it is an add-on module to the ENVI product, users can perform all feature extraction, image processing, analysis and visualization tasks in a single product.

SPEAR Automated Workflow Tools

A unique set of automated capabilities in ENVI now allow you to perform spectral image processing quickly and easily. The ENVI Spectral Processing Exploitation and Analysis Resource (SPEAR) Tools, which are included with the core ENVI software, automate many common image processing tasks into workflows. Designed with the imagery analyst and geospatial analyst in mind, these unique wizards greatly reduce the time and effort required to pan-sharpen, detect change, categorize terrain, and more. The tools include step-by-step instructions, with integrated help and information dialogues to walk you through producing imagery products quickly and easily (Figure 5).

Figure 5

The SPEAR tools include fifteen different workflows, all optimized for multispectral imagery. Advanced or novice users can quickly and accurately create results and reports with little or no additional training to use the workflow tools. By leveraging the powerful and proven tools already in ENVI within task-specific workflows, SPEAR provides users with a simple and reliable way to generate results, streamlining the user’s workflow and increasing efficiency.

The Following Workflows Are Available:

Anomaly Detection

Anomaly detection provides a way to search for anything that is spectrally different from the background (spectral anomalies) (Figure 6). ENVI utilizes the Reed-Xiaoli Detector (RXD) algorithm to detect and extract targets that are statistically distinct from the image background. The workflow includes options to subset the image, apply anomaly thresholding, and review the detected anomalies to accept or reject false positives.

Figure 6
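At its core, the RX detector scores each pixel by its Mahalanobis distance from the scene’s mean spectrum. The NumPy sketch below illustrates that statistic on a synthetic cube with one implanted anomaly; it is a minimal illustration of the global RX statistic, not ENVI’s implementation:

```python
import numpy as np

def rx_score(cube):
    """Global RX (Reed-Xiaoli) anomaly score for each pixel.

    cube: (rows, cols, bands) array. Returns the (rows, cols) Mahalanobis
    distance of each pixel spectrum from the scene mean.
    """
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False) + 1e-6 * np.eye(bands))
    d = X - mu
    # (x - mu)^T K^-1 (x - mu) for every pixel at once
    scores = np.einsum("ij,jk,ik->i", d, cov_inv, d)
    return scores.reshape(rows, cols)

rng = np.random.default_rng(0)
cube = rng.normal(size=(32, 32, 4))   # background noise
cube[5, 5] += 8.0                     # implanted anomaly in every band
scores = rx_score(cube)
r, c = np.unravel_index(scores.argmax(), scores.shape)
print(int(r), int(c))                 # location of the implanted anomaly
```

Thresholding `scores` would then produce the candidate anomalies that the SPEAR workflow lets the user accept or reject.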

Change Detection

The SPEAR Change Detection workflow provides a way to compare imagery collected over the same area at different times and to highlight features that have changed (Figure 7). The SPEAR Change Detection tool performs the following types of relative change detection:

  • Transform: Input datasets are stacked into one image cube, and then an image transform (principal component analysis, minimum noise fraction, or independent components analysis) is applied to extract features that correlate with change.
  • Subtractive: The Normalized Difference Vegetation Index (NDVI), red/blue ratio, and man-made ratio are calculated for each input dataset. The resulting ratio images and input bands are then differenced between the two dates to create difference images.
  • Two Color Multi-View (2CMV): One band from the Time 1 image is displayed in red, and the same band from the Time 2 image is displayed in green and blue. Objects that are brighter in the Time 2 image appear cyan, while objects that are brighter in the Time 1 image appear red. These colors indicate potential areas of change.

Figure 7
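The Subtractive and 2CMV ideas are both simple per-pixel arithmetic, which the sketch below illustrates with toy band values (the scene and thresholds are invented, and this is not the SPEAR code itself):

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red + 1e-9)

def two_color_multiview(band_t1, band_t2):
    """2CMV composite: Time 1 in red, Time 2 in green and blue.

    Pixels brighter at Time 1 trend red; pixels brighter at Time 2 trend cyan.
    """
    return np.clip(np.stack([band_t1, band_t2, band_t2], axis=-1), 0.0, 1.0)

# Toy 2x2 scene: vegetation is lost at pixel [1, 1] between the two dates.
red_t1 = np.array([[0.1, 0.1], [0.1, 0.1]])
nir_t1 = np.array([[0.6, 0.6], [0.6, 0.6]])
red_t2 = np.array([[0.1, 0.1], [0.1, 0.5]])
nir_t2 = np.array([[0.6, 0.6], [0.6, 0.2]])

# Subtractive change: difference of the two NDVI images.
diff = ndvi(red_t2, nir_t2) - ndvi(red_t1, nir_t1)
print(diff.round(2))   # strongly negative only where vegetation was lost
```

A large negative NDVI difference flags vegetation loss; the 2CMV composite would show the same pixel shifting toward red, since it is brighter (in the red band) at Time 2.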

Google Earth Bridge

The SPEAR Google Earth Bridge provides a simple way to export images and/or vectors from ENVI to Google Earth (Figure 8). Google Earth (available as a free download) is a powerful data visualization tool which allows imagery to be placed in a regional or global context. The Google Earth Bridge creates a Keyhole Markup Language (.kml) file containing vector data (including image footprints). If image thumbnails are selected, ancillary data files associated with the .kml file will also be exported. Users can export vectors, images (as thumbnails), and footprints to Google Earth using the SPEAR Google Earth Bridge.

Figure 8
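The KML such a bridge produces is plain XML. As an illustration of the format, the sketch below hand-writes a minimal image-footprint Placemark (the function name and corner coordinates are invented; the actual SPEAR export carries much more information):

```python
def footprint_kml(name, corners):
    """Build a minimal KML polygon for an image footprint.

    corners: list of (lon, lat) tuples. KML coordinates are lon,lat order,
    and a LinearRing must be closed (first point repeated last).
    """
    ring = corners + [corners[0]]
    coords = " ".join(f"{lon},{lat},0" for lon, lat in ring)
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>{name}</name>
    <Polygon><outerBoundaryIs><LinearRing>
      <coordinates>{coords}</coordinates>
    </LinearRing></outerBoundaryIs></Polygon>
  </Placemark>
</kml>
"""

kml = footprint_kml("scene_footprint", [(-105.0, 40.0), (-104.9, 40.0),
                                        (-104.9, 40.1), (-105.0, 40.1)])
print(kml)
```

Saving this string as a `.kml` file and opening it in Google Earth draws the footprint in its geographic context.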

Image-to-Map Registration

The SPEAR Image-to-Map Registration workflow tool provides a simplified way to warp an input image to match the map information of a base image, while retaining the original spatial resolution. Once an image has been registered, information and features can be extracted and visualized in GIS systems and compared to imagery of the same area collected at different times.

Independent Components Analysis

The SPEAR Independent Components (IC) analysis transforms an input dataset into a new dataset containing new bands comprised of linear combinations of the input bands. IC transforms a set of mixed, random signals into components that are mutually independent. The benefit is that IC can distinguish features of interest even when they occupy only a small portion of the pixels in the image. This workflow is optimized for multispectral or hyperspectral data, and serves as a tool for blind source separation, where no prior information about the mixing is available.

Lines of Communication

The SPEAR Lines of Communication workflow tool streamlines spectral processing for mapping roads and open and obscured waterway lines of communication (LOCs) using multispectral data (Figure 9). This tool is particularly helpful for highlighting areas to aid manual digitization and map generation.

Figure 9

Metadata Browser

The SPEAR Metadata Browser extracts key metadata from National Imagery Transmission Format (NITF) images and displays it in a simple-to-read format. Additionally, this tool provides a way to compare multiple images for use in change detection, and to view a 3D graphic of the sensor’s and sun’s geometry at the time of collection (Figure 10). The Metadata Browser allows users to simultaneously view the metadata from multiple images to assess exploitation suitability.

Figure 10

Terrain-based Orthorectification

The SPEAR Terrain-based Orthorectification Wizard allows you to orthorectify images using rational polynomial coefficients (RPCs), elevation and geoid information, and optional ground control points (GCPs). An orthorectified image (or orthophoto) is one where each pixel represents a true ground location and all geometric, terrain, and sensor distortions have been removed to within a specified accuracy. Orthorectification transforms the central perspective of an aerial photograph or satellite-derived image to an orthogonal view of the ground, which removes the effects of sensor tilt and terrain relief. Geospatial professionals can easily combine orthophotos with other spatial data in a Geographic Information System (GIS) for city planning, resource management, and other related applications.

Pan Sharpening

The SPEAR Pan Sharpening workflow takes panchromatic images and fuses them with multispectral data to create colorized high resolution images (Figure 11). The pan-sharpened product has the benefits of the spectral information from the multispectral data and the spatial information from the panchromatic data.

Figure 11
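One widely used fusion scheme, the Brovey transform, scales each multispectral band by the ratio of the panchromatic band to the multispectral intensity, preserving color ratios while injecting the pan detail. The sketch below illustrates that idea only; ENVI’s pan sharpening offers several methods, and this is not necessarily the one the SPEAR workflow applies:

```python
import numpy as np

def brovey_pansharpen(ms, pan):
    """Brovey-transform pan sharpening (one common fusion method).

    ms:  (rows, cols, bands) multispectral image, upsampled to pan resolution
    pan: (rows, cols) panchromatic image
    Each band is scaled by pan / mean(ms), so the output keeps the
    multispectral color ratios but takes its brightness from the pan band.
    """
    intensity = ms.mean(axis=-1, keepdims=True)
    return ms * (pan[..., None] / (intensity + 1e-9))

# Toy reddish scene: red band twice as bright as green and blue.
ms = np.full((4, 4, 3), 0.2)
ms[..., 0] = 0.4
pan = np.full((4, 4), 0.5)
sharp = brovey_pansharpen(ms, pan)
print(sharp[0, 0])   # red remains twice green; brightness now matches pan
```

The ratio-preserving property is what gives the fused product “the spectral information from the multispectral data and the spatial information from the panchromatic data.”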

Relative Water Depth

The SPEAR Relative Water Depth tool enables users to quickly generate a product depicting relative water depths. The water depth results are relative because they do not depict absolute depths (the results are scaled from zero to one); the intent is to provide a general feel for the bathymetry of the area. The images of relative water depths can be generated and exported to graphic files for use in reports or briefings.
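Relative depth products of this kind are commonly built from band ratios, since different wavelengths attenuate in water at different rates. The sketch below uses the blue/green log-ratio idea of Stumpf et al. (2003), rescaled to the 0–1 range the tool describes; it is an illustration of one published approach, not necessarily the algorithm SPEAR implements, and the reflectance values are invented:

```python
import numpy as np

def relative_depth(blue, green):
    """Relative (not absolute) water depth from a blue/green log ratio.

    Returns values rescaled so the extremes of the scene map to 0 and 1.
    """
    ratio = np.log(1000.0 * blue) / np.log(1000.0 * green)
    lo, hi = ratio.min(), ratio.max()
    return (ratio - lo) / (hi - lo + 1e-12)

# Toy along-track profile of water-leaving reflectance for four pixels.
blue  = np.array([0.08, 0.06, 0.04, 0.02])
green = np.array([0.07, 0.06, 0.05, 0.045])
depths = relative_depth(blue, green)
print(depths.round(2))   # monotone 1 -> 0 across this toy profile
```

Because the output is only rescaled, it conveys which areas are deeper or shallower relative to one another, exactly the “general feel for the bathymetry” the tool is intended to provide.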

Spectral Analogues

The SPEAR Spectral Analogues workflow finds all instances of a specific material based on its spectral response. The tool uses spectral matching algorithms that compare the spectrum of each pixel to the mean spectrum of user-specified training pixels, and is designed for use with multispectral data.
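Spectral matching of this kind can be illustrated with the spectral angle between each pixel and the mean training spectrum. The threshold below is arbitrary and the code is only a sketch of the general approach, not the SPEAR implementation:

```python
import numpy as np

def spectral_angle(pixels, reference):
    """Angle (radians) between each pixel spectrum and a reference spectrum.

    pixels: (n, bands) array; reference: (bands,). Smaller angles mean
    closer matches, independent of overall brightness.
    """
    dot = pixels @ reference
    norms = np.linalg.norm(pixels, axis=1) * np.linalg.norm(reference)
    return np.arccos(np.clip(dot / (norms + 1e-12), -1.0, 1.0))

# The mean spectrum of user-chosen training pixels defines the material.
training = np.array([[0.20, 0.50, 0.80], [0.22, 0.48, 0.82]])
reference = training.mean(axis=0)

pixels = np.array([
    [0.4, 1.0, 1.6],   # same spectral shape, twice as bright -> tiny angle
    [0.8, 0.5, 0.2],   # reversed shape -> large angle
])
angles = spectral_angle(pixels, reference)
matches = angles < 0.1   # illustrative radian threshold
print(matches)           # first pixel matches, second does not
```

Brightness invariance is the useful property here: a shadowed or sunlit instance of the same material still produces a small angle.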

TERCAT (Terrain Categorization)

The SPEAR TERCAT Workflow tool produces a terrain classification by automatically grouping pixels with similar spectral properties into classes (Figure 12). These classes may be either user-defined or can be automatically generated by the classification algorithm. The TERCAT tool provides all of the standard ENVI classification algorithms, plus an additional algorithm called Winner Takes All. This is a voting method that classifies pixels based on the majority compiled from all of the other classification methods that were conducted.

Figure 12
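The Winner Takes All vote can be sketched directly: stack the class maps from several classifiers and take the per-pixel majority. A minimal NumPy illustration (tie-breaking here favors the lowest class label, which may differ from ENVI’s behavior):

```python
import numpy as np

def winner_takes_all(class_maps):
    """Majority vote across several classification results.

    class_maps: sequence of (rows, cols) integer class-label maps. Each
    pixel receives the label chosen by the most classifiers.
    """
    stack = np.asarray(class_maps)
    n_classes = int(stack.max()) + 1
    # Count votes per class at every pixel, then take the winning class.
    votes = np.stack([(stack == c).sum(axis=0) for c in range(n_classes)])
    return votes.argmax(axis=0)

# Three toy 2x2 classifications of the same scene.
maps = [
    np.array([[0, 1], [2, 1]]),
    np.array([[0, 1], [0, 2]]),
    np.array([[0, 2], [2, 1]]),
]
result = winner_takes_all(maps)
print(result)   # each pixel takes the majority label
```

For example, the lower-left pixel is classed 2 by two of the three maps, so the vote assigns it class 2 despite the dissenting classifier.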

Vegetation Delineation and Stress Detection

The SPEAR Vegetation Delineation workflow enables you to quickly identify the presence of vegetation and to visualize its level of vigor (Figure 13). This workflow also provides tools to facilitate creating graphics for use within reports and briefings.

Figure 13

Vertical Stripe Removal

Images in homogeneous areas may display vertical striping artifacts where pixel brightness varies relative to other nearby pixels. The striping artifacts make images difficult to visualize and they may negatively impact image processing algorithms. The SPEAR Vertical Stripe Removal workflow can remove these striping artifacts, and is best used when the image background is relatively homogenous (consistent brightness level throughout image).
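A simple form of destriping on a homogeneous background is to equalize column means: if the scene itself is uniform, any systematic column-to-column brightness offset can be attributed to the sensor. The sketch below illustrates only that idea; the SPEAR tool’s processing is more involved:

```python
import numpy as np

def remove_vertical_stripes(image):
    """Suppress column-wise striping by equalizing column means.

    Appropriate when the background is roughly homogeneous, so differences
    in column means reflect striping rather than real features.
    """
    col_means = image.mean(axis=0)
    return image - (col_means - image.mean())

# Homogeneous scene with alternating column offsets added as "stripes".
rng = np.random.default_rng(1)
clean = rng.normal(10.0, 0.1, size=(100, 8))
stripe_offsets = np.array([0, 2, 0, -2, 0, 2, 0, -2], dtype=float)
striped = clean + stripe_offsets
fixed = remove_vertical_stripes(striped)
print(fixed.mean(axis=0).round(2))   # column means now nearly identical
```

After correction every column shares the global mean, which is why the method is “best used when the image background is relatively homogeneous.”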

Watercraft Finder

The SPEAR Watercraft Finder workflow streamlines processing to detect the presence of watercraft in open water environments using high-resolution multispectral data (Figure 14). The tool is also applicable for detecting watercraft in littoral zones. The premise of the tool is that watercraft, which reflects in the near infrared wavelengths, will appear as anomalous clusters of pixels in near infrared absorbing water. Several algorithms within the tool make use of this premise to aid users in rapidly detecting watercraft.

Figure 14

In summary, whether users are new to analyzing imagery or simply need to increase the efficiency of an existing image processing and analysis workflow, they will find the new SPEAR tools highly valuable.

Target Detection Wizard

Targets of interest in an image are not always visible to the naked eye. Now, with the ENVI Target Detection Wizard, users can quickly find targets of interest, regardless of their image processing experience. This new guided workflow combines eight advanced algorithms that customers have frequently requested.

The wizard guides the user step-by-step through the target detection process and produces highly accurate results. The workflow has been optimized for hyperspectral or multispectral images. The targets may be a material or mineral of interest (such as alunite) or man-made objects (such as vehicles). If the user knows that the image contains at least one target of interest, the wizard can be used to find other targets like it in the same image. The workflow can also be accessed programmatically, so the user can customize options if needed.

The Wizard guides the user through the following target detection steps (Figure 15):

  1. Select Input/Output Files: Select the input image and the root name for output images.
  2. Perform Atmospheric Correction: Optionally convert the image into reflectance through atmospheric correction.
  3. Select Target Spectra: Select one or more target spectra for the analysis from spectral libraries, individual spectral plots, text files, Regions of Interest (ROIs), or statistics files.
  4. Select Non-Target Spectra: Optionally select one or more spectra to suppress from processing.
  5. Apply MNF Transform: Optionally apply the Minimum Noise Fraction (MNF) transform before target detection analysis.
  6. Select Target Detection Methods: Choose up to eight target detection methods for processing.
  7. Load Rule Images and Preview: Load the output rule images and preview the binary results.
  8. Filter Targets: Select from different filter options and parameters for each target to clean up mis-detected pixels and false positives.
  9. Export Results: Export results to one shapefile and/or ROI per target detection method.
  10. View Statistics and Report: View target detection statistics for each selected method and a report of the settings used in the Wizard.

Figure 15

The following target detection methods are available in the workflow (Figure 16):

  • Matched Filtering (MF): This algorithm finds the abundance of targets using a partial unmixing algorithm. This technique maximizes the response of the known spectra and suppresses the response of the composite unknown background, therefore matching the known signature. It provides a rapid means of detecting specific materials based on matches to target spectra and does not require knowledge of all the endmembers within an image scene.

  • Constrained Energy Minimization (CEM): Similar to ACE and MF, CEM does not require knowledge of all the endmembers within a scene. Using a specific constraint, CEM uses a finite impulse response (FIR) filter to pass through the desired target while minimizing its output energy resulting from backgrounds other than the desired targets. A correlation or covariance matrix is used to characterize the composite unknown background. In a mathematical sense, MF is a mean-centered version of CEM, where the data mean is subtracted from all pixel vectors.

  • Adaptive Coherence Estimator (ACE): Derived from the Generalized Likelihood Ratio (GLR) approach, ACE is invariant to relative scaling of input spectra and has a Constant False Alarm Rate (CFAR) with respect to such scaling. As with CEM and MF, ACE does not require knowledge of all the endmembers within a scene.

  • Spectral Angle Mapper (SAM): Matches image spectra to reference target spectra in n dimensions. SAM compares the angle between the target spectrum (considered an n-dimensional vector, where n is the number of bands) and each pixel vector in n-dimensional space. Smaller angles represent closer matches to the reference spectrum. When used on calibrated data, this technique is relatively insensitive to illumination and albedo effects.

  • Orthogonal Subspace Projection (OSP): OSP first designs an orthogonal subspace projector to eliminate the response of non-targets, and then applies MF to match the desired target from the data. OSP is efficient and effective when target signatures are distinct. When the spectral angle between the target signature and the non-target signature is small, the attenuation of the target signal is dramatic and the performance of OSP could be poor.

  • Target-Constrained Interference-Minimized Filter (TCIMF): TCIMF detects the desired targets, eliminates non-targets, and minimizes interfering effects in one operation. Rather than merely minimizing the energy of non-targets as CEM does with interference, TCIMF is constrained to eliminate their response outright. Previous studies show that if the spectral angle between the target and the non-target is significant, TCIMF can potentially reduce the number of false positives over CEM results.

  • Mixture Tuned Target-Constrained Interference-Minimized Filter (MTTCIMF): This method combines the Mixture Tuned technique and TCIMF target detector. It uses a Minimum Noise Fraction (MNF) transform input file to perform TCIMF, and it adds an infeasibility image to the results. The infeasibility image is used to reduce the number of false positives that are sometimes found when using TCIMF alone. The output of MTTCIMF is a set of rule images corresponding to TCIMF scores and a set of images corresponding to infeasibility values. The infeasibility results are in noise sigma units and indicate the feasibility of the TCIMF result. Correctly mapped pixels have a high TCIMF score and a low infeasibility value. If non-target spectra were specified, MTTCIMF can potentially reduce the number of false positives over MTMF.

  • Mixture Tuned Matched Filtering (MTMF): MTMF uses an MNF transform input file to perform MF, and it adds an infeasibility image to the results. The infeasibility image is used to reduce the number of false positives that are sometimes found when using MF alone. Pixels with a high infeasibility are likely to be MF false positives. Correctly mapped pixels will have an MF score above the background distribution around zero and a low infeasibility value. The infeasibility values are in noise sigma units that vary in DN scale with an MF score.

Figure 16
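Two of the detectors above, MF and CEM, reduce to short closed-form filters, which makes the relationship the text describes easy to see in code: MF whitens with the mean-centered covariance, while CEM uses the non-centered correlation matrix. The sketch below implements the textbook formulas on synthetic data; it is an illustration, not ENVI’s implementation:

```python
import numpy as np

def matched_filter(X, target):
    """Matched Filter scores: mean-centered, covariance-whitened projection
    onto the target spectrum, normalized to unit response at the target."""
    mu = X.mean(axis=0)
    K_inv = np.linalg.inv(np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1]))
    d = target - mu
    w = K_inv @ d / (d @ K_inv @ d)
    return (X - mu) @ w

def cem(X, target):
    """Constrained Energy Minimization: like MF, but built from the
    non-centered correlation matrix R instead of the covariance."""
    R_inv = np.linalg.inv(X.T @ X / len(X) + 1e-6 * np.eye(X.shape[1]))
    w = R_inv @ target / (target @ R_inv @ target)
    return X @ w

rng = np.random.default_rng(2)
background = rng.normal(0.3, 0.05, size=(500, 4))   # bland background pixels
target = np.array([0.9, 0.1, 0.9, 0.1])             # known target spectrum
X = np.vstack([background, target])                 # implant one target pixel
print(matched_filter(X, target)[-1])                # ~1.0: full target response
print(cem(X, target)[-1])                           # ~1.0 by construction
```

Both filters are normalized so a pure target pixel scores 1, while background pixels score near zero (MF) or are suppressed in output energy (CEM); subtracting the data mean from CEM recovers MF, as the text notes.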

Once the methods are selected, the wizard runs all of the algorithms against the target information at once, simplifying the steps necessary for the user to obtain results. The Target Detection Wizard also increases the efficiency of a user’s workflow by allowing the user to preview the results from the various detection methods (Figure 17).

Figure 17

After selecting the desired results, the user can apply some cleanup methods, such as clumping and filtering of results. The final step in the workflow allows the user to cycle through each found target for acceptance or rejection, thus eliminating false positives.

Conclusion

NV5 Geospatial recognizes the growing importance of workflow-based tools that offer quick and accurate results. Workflow tools designed with customers’ solutions and products in mind increase the effectiveness of the software and the efficiency of the operation. ENVI currently provides a number of these tools that can help users of all backgrounds integrate imagery into their daily workflows. Moreover, development plans for ENVI include many new workflow tools that will make image processing even easier.

For more information contact your representative at: GeospatialInfo@NV5.com.