Tutorial: Feature Extraction with Rule-Based Classification

Rule-based classification is the process of taking a segmented image and grouping similar segments into classes. A class contains one or more rules that you can build based on your knowledge of certain features. Each rule contains one or more attributes, such as area, length, or texture, that you constrain to a specific range of values. For example, we know that roads are elongated, some buildings approximate a rectangular shape, and trees are highly textured compared to grass.

You must have an ENVI Feature Extraction license in order to use this tool.

See the following sections:

  • Select Input Files for Feature Extraction
  • Segment Images
  • Build New Rules
  • Select Attributes
  • Preview Classification Results
  • Export Classification Results

From the Toolbox, select Feature Extraction > Rule Based Feature Extraction Workflow. The Data Selection panel appears.

Select Input Files for Feature Extraction


For best results with Feature Extraction, consider preprocessing your imagery to reduce noisy or redundant data, to correct for atmospheric effects, or to suppress vegetation. See Preprocess Imagery for available options.

For hyperspectral imagery, we strongly recommend that you run a principal components analysis or independent components analysis on the dataset before using it in Feature Extraction. Segmentation and merging work best on datasets with only a few bands.

Also consider reducing the spatial resolution of your input image to speed up processing and to remove small, unwanted features. For example, you can down-sample a 10,000 by 10,000 pixel image by a factor of 10 to yield a 1,000 by 1,000 pixel image.
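
As a rough, non-ENVI illustration of these two preprocessing ideas, the sketch below reduces a hyperspectral cube to a few principal-component bands with scikit-learn and then down-samples the result by a factor of 10 with scikit-image. The file name, band count, and component count are assumptions for the example.

    # Illustrative preprocessing sketch (not ENVI's tools or file formats).
    import numpy as np
    from sklearn.decomposition import PCA
    from skimage.transform import downscale_local_mean

    cube = np.load("hyperspectral_cube.npy")   # assumed (lines, samples, bands) array
    lines, samples, bands = cube.shape

    # Reduce many spectral bands to a handful of principal components.
    pca = PCA(n_components=5)
    components = pca.fit_transform(cube.reshape(-1, bands))
    pc_image = components.reshape(lines, samples, 5)

    # Down-sample by a factor of 10 in each spatial dimension, e.g.
    # 10,000 x 10,000 pixels -> 1,000 x 1,000 pixels.
    reduced = downscale_local_mean(pc_image, (10, 10, 1))
    print(reduced.shape)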

  1. Click Browse and select a panchromatic or multispectral image for input using the Data Selection dialog. Feature Extraction accepts any image format listed in Supported Data Types.
  2. To apply a mask, select the Input Mask tab in the File Selection panel. In addition to the mask, any pixel values specified in the Data Ignore Value field of the associated header (for ENVI-format files) will be treated as mask values and will not be processed by Feature Extraction.
  3. You can import ancillary data to help extract features of interest. An example is combining a LiDAR digital surface model (DSM) with a multispectral image to identify rooftops in a residential area, then building a rule using height data from the DSM to more accurately extract the rooftops. (The height data would be in the Spectral Mean attribute for the DSM band.) Multiple datasets often provide more accurate results. The following rules apply:
    • You can only use raster data for ancillary data. Vector data must be converted to raster format prior to import.
    • The ancillary image file must be georeferenced to a standard or Rational Polynomial Coefficient (RPC) spatial reference. If the ancillary data is not in the same map projection as the input image, ENVI will reproject the ancillary data to match the base projection. Images with a pseudo spatial reference cannot be used as ancillary data.
    • The ancillary data and input image must have some geographic overlap.
    • If you spatially subset the input image, the ancillary data will be reprojected to match that spatial extent.
  4. Select the Ancillary Data tab and click Add Data. Select one or more ancillary files for input. You can select spectral subsets from each ancillary data file. ENVI will create new bands for each ancillary file that you import; you can then use these bands for rule-based classification. The ancillary bands are identified by the name of the ancillary file and the respective band number of that file.

  5. Select the Custom Bands tab and enable the following options if desired. The input image must be georeferenced to a standard map projection for these options to be available.
    • Normalized Difference: Select two bands for computing a normalized band ratio as follows:

      (b2 - b1) / (b2 + b1 + eps)

      where "eps" is a very small number to avoid division by zero.

      If b2 is a near-infrared band and b1 is a red band, then Normalized Difference will be a measure of the normalized difference vegetation index (NDVI).

      For example, if you have a QuickBird image with four bands where Band 3 is red and Band 4 is near-infrared and you want to compute NDVI, select Band 3 from the Band 1 drop-down list. Select Band 4 from the Band 2 drop-down list.

      ENVI will create a "Normalized Difference" band that you can use for segmentation or classification.

    • Color Space: Select the Red, Green, and Blue band names from the image. ENVI will perform an RGB to HSI color space transformation and will create new bands for Hue, Saturation, and Intensity that you can use for segmentation or rule-based classification.
      Hue: Often used as a color filter, measured in degrees from 0 to 360. A value of 0 is red, 120 is green, and 240 is blue.

      Saturation: Often used as a color filter, measured in floating-point values that range from 0 to 1.0.

      Intensity: Often provides a better measure of brightness than the Spectral Mean spectral attribute. Intensity is measured in floating-point values that range from 0 to 1.0.

  6. Note: You should not perform segmentation with a combination of custom bands (normalized difference or HSI color space) and visible/NIR bands. You can perform segmentation on the normalized difference or color space bands by themselves, but not in combination with visible and NIR bands.

  7. Click Next.

ENVI will create a single dataset from the combined bands of the input image, ancillary data, normalized difference, hue, saturation, and intensity (if selected). This single dataset will be used throughout the rest of the Feature Extraction workflow.
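
As an informal illustration of how the custom bands above can be computed, the sketch below derives a normalized difference band and an HSI triple from NumPy arrays. The band arrays are placeholders, and the HSI conversion uses a common arccos-based formulation that may differ in detail from ENVI's transform.

    # Illustrative custom-band computations (assumed formulas, not ENVI code).
    import numpy as np

    # Placeholder band arrays, already scaled to floating point.
    red, green, blue, nir = (np.random.rand(512, 512) for _ in range(4))

    eps = 1e-8  # small constant to avoid division by zero

    # Normalized difference: with b2 = NIR and b1 = red, this is NDVI.
    norm_diff = (nir - red) / (nir + red + eps)

    # RGB -> HSI: hue in degrees (0-360), saturation and intensity in 0-1.
    intensity = (red + green + blue) / 3.0
    saturation = 1.0 - np.minimum(np.minimum(red, green), blue) / (intensity + eps)
    num = 0.5 * ((red - green) + (red - blue))
    den = np.sqrt((red - green) ** 2 + (red - blue) * (green - blue)) + eps
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    hue = np.where(blue <= green, theta, 360.0 - theta)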

When file selection is complete, the file opens in a new workflow view. If the selected file is displayed in an active view before you start the workflow, the display bands and image location are retained, as well as any brightness, contrast, stretch, and sharpen settings. The image location is not retained for pixel-based images or those with pseudo or arbitrary projections.

Segment Images


Segmentation is the process of partitioning an image into objects by grouping neighboring pixels with common values. The objects in the image ideally correspond to real-world features. Effective segmentation ensures that classification results are more accurate.

  1. Enable the Preview option in the Object Creation panel. A Preview Window appears with segments outlined in green.
  2. Under Segment Settings, select an Algorithm from the drop-down list provided. The following options are available:
    • Edge: Best for detecting features of interest that have sharp, distinct edges. Set an appropriate Scale Level and Merge Level (see the steps below) to effectively delineate features.
    • Intensity: Best for segmenting images with subtle gradients such as digital elevation models (DEMs) or images of electromagnetic fields. When selecting this method, don't perform any merging; set the Merge Level to 0. Merging is used primarily to combine segments with similar spectral information. Elevation and other related attributes are not appropriate for merging.
  3. See Watershed Algorithm Background for more detailed descriptions of each option.

  4. Adjust the Scale Level slider as needed to effectively delineate the boundaries of features as much as possible without over-segmenting them. Increasing the slider results in fewer segments; decreasing it results in more segments. Also ensure that features of interest are not grouped into the same segments as other, unrelated features. See Watershed Algorithm Background for a more detailed discussion of how the Scale Level is used with respect to gradient and intensity images.
  5. Click the Select Segment Bands button to choose specific bands for applying the segmentation settings. The settings will apply to a grayscale image derived from the average of all selected bands. All available bands are selected by default.
  6. Tip: For best segmentation results, select a combination of bands that have similar spectral ranges such as R, G, B, and NIR bands. You should not perform segmentation with a combination of custom bands (normalized difference or HSI color space) and visible/NIR bands. You can perform segmentation on the normalized difference or color space bands by themselves, but not in combination with visible and NIR bands.

  7. Merging combines adjacent segments with similar spectral attributes. Under Merge Settings, select an Algorithm from the drop-down list provided. The following options are available:
    • Full Lambda Schedule (default): Merges small segments within larger, textured areas such as trees or clouds, where over-segmentation may be a problem.
    • Fast Lambda: Merges adjacent segments with similar colors and border sizes.
  8. See Merge Algorithms Background for more detailed descriptions of each option.

  9. Adjust the Merge Level slider as needed to combine segments with similar colors (Fast Lambda) or to merge over-segmented areas (Full Lambda Schedule). Increasing the slider results in more merging; no merging occurs if you leave the slider value at 0. For example, if a red building consists of three segments, selecting Fast Lambda and increasing the Merge Level should combine them into one segment. To delineate treetops or other highly textured features, select Full Lambda Schedule and increase the Merge Level value.
  10. Click the Select Merge Bands button to choose specific bands for applying the merge settings. Merging is based on the differences in region color across the selected bands. All available bands are selected by default.
  11. Select a Texture Kernel Size value, which is the size (in pixels) of a moving box centered over each pixel in the image. Texture attributes are computed for each kernel. Enter an odd number of 3 or higher; the maximum value is 19 and the default is 3. Select a higher kernel size if you are segmenting large areas with little texture variance, such as fields. Select a lower kernel size if you are segmenting smaller areas with higher variance, such as urban neighborhoods. (A sketch illustrating segmentation, merging, and texture kernels follows this list.)
  12. Click Next. ENVI loads a segmentation image into the display.
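
The steps above are interactive, but the underlying ideas can be sketched outside ENVI. In the Python sketch below, a watershed on a gradient image stands in for Edge segmentation, a percentile threshold on the gradient stands in for the Scale Level, a simple color-similarity cut on a region adjacency graph stands in for merging (it is not the Full Lambda Schedule or Fast Lambda algorithms), and a moving-window variance stands in for a texture kernel. The file name and threshold values are assumptions.

    # Rough analogue of the segment, merge, and texture settings (not ENVI's algorithms).
    # The skimage.graph functions below assume scikit-image >= 0.20.
    import numpy as np
    from scipy import ndimage
    from skimage import filters, graph, segmentation
    from skimage.io import imread

    image = imread("input_image.tif").astype(float)   # assumed 3-band (RGB-like) input
    gray = image.mean(axis=2)

    # "Edge" segmentation: watershed on a gradient image. Raising the scale
    # level suppresses weak edges, which yields fewer, larger segments.
    gradient = filters.sobel(gray)
    scale_level = 40.0                                # assumed percentile, 0-100
    gradient[gradient < np.percentile(gradient, scale_level)] = 0.0
    labels = segmentation.watershed(gradient)

    # Merging: combine adjacent segments whose mean colors are similar,
    # via a threshold cut on a region adjacency graph.
    rag = graph.rag_mean_color(image, labels)
    merged = graph.cut_threshold(labels, rag, thresh=30.0)

    # Texture kernel: local variance in a 3 x 3 moving window.
    kernel_size = 3
    local_mean = ndimage.uniform_filter(gray, size=kernel_size)
    local_var = ndimage.uniform_filter(gray ** 2, size=kernel_size) - local_mean ** 2

    print(int(merged.max()), "segments (approximate) after merging")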

Build New Rules


Each class will contain one or more rules (each a combination of attributes) that best define the class. For example, if you want to extract roads from an image, you could build rules that use a combination of the Elongation, Length, and Area spatial attributes.

The rule logic for this class is as follows:

To meet the conditions of the Roads class, at least one of the two rules must apply (i.e., the OR operator). The first rule has only one attribute definition. To satisfy the conditions of the second rule, both the Length and Area attribute definitions must apply (i.e., the AND operator). Each rule is equally weighted, which works well in most cases.

However, if you discover an attribute that effectively identifies the feature of interest, you could override the default weight and promote that rule above other rules. See Rule Classification Background for details.
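
As a plain illustration of this logic (not ENVI's internal rule representation), the sketch below evaluates a Roads-style class in which rules are combined with OR and the attribute constraints inside each rule are combined with AND. The attribute names and thresholds are invented for the example.

    # Illustrative rule logic: constraints AND-ed within a rule, rules OR-ed
    # within a class. Attribute names and thresholds are assumed examples.

    # Each rule is a list of (attribute_name, minimum, maximum) constraints.
    roads_rules = [
        [("elongation", 4.0, float("inf"))],                       # Rule 1
        [("length", 200.0, float("inf")), ("area", 0.0, 5000.0)],  # Rule 2
    ]

    def matches_class(segment_attributes, rules):
        """True if at least one rule applies (OR), where every constraint
        within that rule must hold (AND)."""
        return any(
            all(low <= segment_attributes.get(name, float("nan")) <= high
                for name, low, high in rule)
            for rule in rules
        )

    segment = {"elongation": 5.2, "length": 310.0, "area": 1800.0}
    print(matches_class(segment, roads_rules))   # True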

If you are building a rule set for the first time, the Rule Classification panel appears with an empty folder called All Classes. This folder will contain all of the feature types (classes) that you define.

  1. Click the Add Class button. A new class appears with one undefined rule and one default attribute.
  2. Select the new class, and edit the name and color as needed within the Class Properties table. The Class Threshold property is an advanced setting; the default value is acceptable in most cases; see Rule Classification Background for details.
  3. Select the new rule name that was created for you (Rule [1.0]), and add attributes to the rule as described in Select Attributes.
  4. To add another rule to a class, select the class name and click the Add Rule to Class button. An OR logical operation is used to combine multiple rules within a class. To remove a rule from a given class, select the rule name and click the Remove Rule button.
  5. To add a new class, select All Classes and click the Add Class button.
  6. To save your rule set to disk (as a .rul file), click the Save Rule Set button. Click the Restore Rule Set File button to later restore the file.
  7. Note: You cannot restore rule set files in XML format that were created in ENVI EX 4.8 and earlier.

Select Attributes


ENVI computes various attributes for each segment in the segmentation image after you click Next in the Segment and Merge panel. See Spatial Attributes, Spectral Attributes, and Texture Attributes for definitions of each attribute.

Each rule can have multiple attributes; an AND logical operator is used to combine multiple attributes within a rule.

  1. Select the default attribute that was added for you (Spectral Mean(0) > 0). You will change this to an attribute of your choice.
  2. The right side of the panel displays several options that let you edit the attribute values.
    • From the Type drop-down list, select Spatial, Spectral, or Texture. If you select Spectral or Texture, also select the Band number that the attribute will apply to.
    • From the Name drop-down list, select an attribute name.
    • The Weight, Tolerance, and Algorithm fields under the Advanced tab are advanced settings that are used in assigning pixels to classes. See Rule Classification Background for further details.

    Tip: If you don't yet know which attributes would best identify your feature of interest, experiment with different attributes and enable the Show Attribute Image option each time, to view an image of attribute values in a Preview Window. Use the Cursor Value tool (available from the main toolbar) to examine the attribute values of your features. You can quickly tell if your feature has high values for that attribute (bright pixels) or low values (dark pixels). This will help you decide the approximate range of attribute values that best identify that feature.

  3. A histogram of attribute values is displayed in the Rule Classification panel. Click the Dock/Undock Histogram Window button to display the histogram in a separate dialog that you can resize to better view the histogram. This option is only available for Windows users.
  4. To constrain the attribute to a subset of values, click and drag the left and right vertical bars on the histogram to define a range, or enter values in the text fields provided. The range of values included in the attribute calculation is shown with a white background. Click Invert to select the values outside the current range. The following examples illustrate different subsets of values (a sketch of range selection and inversion follows the examples):

    Examples of histograms
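
To make the range selection concrete, the sketch below applies a minimum/maximum constraint to an array of per-segment attribute values and shows the effect of Invert. It is a generic NumPy illustration rather than ENVI code, and the values are invented.

    # Constraining an attribute to a range of values, with an optional invert.
    import numpy as np

    attribute_values = np.array([0.8, 2.5, 4.1, 7.3, 9.9])  # one value per segment

    low, high = 2.0, 8.0          # range chosen by dragging the histogram bars
    in_range = (attribute_values >= low) & (attribute_values <= high)

    invert = False                # set True to select values outside the range
    selected = ~in_range if invert else in_range
    print(selected)               # [False  True  True  True False]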

Preview Classification Results


  • Select a class and click Preview to display a Preview Window that shows a class confidence image based on your current classes and settings. The class confidence image shows the likelihood of a class meeting the requirements of the rules that you have defined. The brightest regions represent objects with the highest confidence of meeting the rule requirements, while the darkest regions represent the lowest confidence.
  • Select All Classes and click Preview to show a classification image of all the classes you have defined in your image.
  • Select a rule and click Preview to show a rule confidence image. This type of image shows the relative confidence of each object (in the segmentation image) satisfying the conditions of the rule. The higher the brightness of an object, the higher the confidence that the object satisfies the conditions of the rule. If an object is very dark, it likely does not meet the rule requirements.
  • Select an attribute name and click Preview to show an attribute score image for that attribute. The attribute score image gives you a visual guide of how well objects meet the attribute criteria that you set with the histogram.
  • If you select the Binary classification algorithm under the Advanced tab, objects have pixel values of 0 (no match) or 1 (match). If you select the Linear or Quadratic classification algorithm, objects range in value from 0 to 1, indicating how well they meet the attribute criteria. As you use the histogram to adjust the range of attribute values, the attribute score image automatically includes or excludes objects from consideration. (A sketch of these scoring options follows this list.)

  • Click Show Attribute Image to show an image of attribute values in the Preview Window, rather than an attribute score image. Use the Cursor Value tool to view the pixel values of the attribute image. You can adjust the image transparency to view the underlying image, if needed, using the Transparency slider on the main toolbar.
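
The difference between the Binary, Linear, and Quadratic options can be sketched as simple membership functions that score each object between 0 and 1. The ramp shape and the way the tolerance widens the range below are assumptions for illustration, not ENVI's exact scoring formulas.

    # Assumed membership functions illustrating binary versus soft attribute scores.
    import numpy as np

    def attribute_score(values, low, high, tolerance, algorithm="linear"):
        """Score in [0, 1] for how well each value meets the [low, high] range.
        'binary' gives 0 or 1; 'linear' and 'quadratic' ramp down over
        'tolerance' units outside the range (an assumed fuzzy-edge model)."""
        values = np.asarray(values, dtype=float)
        if algorithm == "binary":
            return ((values >= low) & (values <= high)).astype(float)
        distance = np.maximum(low - values, 0.0) + np.maximum(values - high, 0.0)
        ramp = np.clip(1.0 - distance / max(tolerance, 1e-8), 0.0, 1.0)
        return ramp ** 2 if algorithm == "quadratic" else ramp

    values = [1.0, 2.5, 5.0, 8.5]
    print(attribute_score(values, 2.0, 8.0, tolerance=1.0, algorithm="binary"))
    print(attribute_score(values, 2.0, 8.0, tolerance=1.0, algorithm="linear"))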

If the image has more than 1024 lines or samples, you cannot zoom out further than 50% in the Preview Window, because doing so would significantly increase processing time and delay the display of the previewed data. Zooming out further than 50% results in a black Preview Window.

Export Classification Results


In this step, you will select the types of images, vectors, and statistics to export to various formats. By default, files are saved to the directory that you specify in the Output Directory preference. When the export is complete, the workflow view closes. The original data and the export data display in the Image window view. The available output options vary, depending on your workflow.

Rule-Based and Example-Based Classification

Choose the classification file types you want to save.

Export Vector Tab

  • Export Classification Vectors: Save all classes to a single shapefile with a filename of <prefix>_vectors.shp. Shapefiles have a maximum file size of 2 GB, so if the image has a large number of classes and the resulting shapefile exceeds this file size, it will be broken into smaller shapefiles. Shapefiles larger than 1.5 GB cannot be displayed. To avoid these issues when working with large datasets, you can save the vectors to a geodatabase.
  • Merge Adjacent Features: Select this option if you are confident that adjacent polygons belong to the same class and you want to consolidate them into a single polygon. This results in a smaller file size. This option merges all adjacent segments at once across the entire image; you cannot select specific polygons to merge.
  • Tip: A good example is an angled rooftop that reveals different brightness levels from an aerial sensor, depending on the sun's angle. The segmentation step would typically create multiple regions within the rooftop, each with different spectral values. But if you build a good rule set that identifies rooftops, the classification image (and/or shapefile) will assign these regions to the same class. Since you know that they all belong to one rooftop, you can choose to merge the adjacent segments so that the entire rooftop is one polygon.

  • Export Attributes: Select this option if you want to include the spatial, spectral, and texture attributes that were computed for each region in the output shapefile.

Export Raster Tab

  • Export Classification Image: Create an image in ENVI raster format whose pixel values represent different classes. The filename will be <prefix>_class.dat.
  • Export Segmentation Image: Create a multispectral image in ENVI raster format that shows the regions defined by segmentation; each region is assigned the mean spectral values of all the pixels that belong to that region. The filename will be <prefix>_segmentation.dat.

Advanced Export Tab

  • Export Attributes Image: Create a multi-layer image in ENVI raster format where each layer represents the values of a specific attribute. The filename will be <prefix>_attributes.dat. When you select this option, a new dialog appears that lets you select which attributes to export to an attribute image. The Selected Attributes column is initially populated with the attributes you used in your rule set. If an attribute does not contain any valid values, then that attribute band will be assigned pixel values of 0. See Selecting Attributes for Classification for further details on using this dialog. (A sketch of how such per-segment layers are built appears after this tab's options.)
  • Export Confidence Image: Create an image in ENVI raster format that shows the relative confidence of each object belonging to a class. The higher the brightness of an object, the higher the confidence that the object belongs to the class. If an object is very dark, it likely does not belong to the class. This is a multi-layer file, with each layer representing one class. The filename will be <prefix>_confidence.dat.
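
Conceptually, both of these outputs paint one value per segment back onto that segment's pixels, one layer per attribute or per class. The sketch below shows that rasterization with NumPy indexing; the label image and per-segment values are assumed inputs, and no ENVI raster format handling is included.

    # Rasterizing per-segment values into an image layer, as an illustration of
    # what a single layer of an attribute image or confidence image contains.
    import numpy as np

    labels = np.array([[1, 1, 2],
                       [1, 2, 2],
                       [3, 3, 2]])            # assumed segment label image

    # One value per segment ID (index 0 unused here), e.g. an area attribute
    # or a class-confidence score.
    segment_values = np.array([0.0, 12.0, 30.0, 7.0])

    layer = segment_values[labels]            # each pixel gets its segment's value
    print(layer)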

Auxiliary Export Tab

  • Export Feature Examples: Example-based classification only. Save the training samples that you selected for all classes to an XML file named <prefix>_trainingset.ftr.
  • Export Feature Ruleset: Rule-based classification only. Save the rules that you defined to a file named <prefix>_ruleset.rul.
  • Export Processing Report: Create a text report that summarizes the segmentation options, rules, and attributes that you used to classify the image. The filename will be <prefix>_report.txt.

Segment-Only

Export Vector Tab

  • Export Segmentation Vectors: Save the regions in the segmentation image to a shapefile. Shapefiles have a maximum file size of 2 GB, so if your image has a large number of segments and the resulting shapefile exceeds this file size, it will be broken into smaller shapefiles. Shapefiles larger than 1.5 GB cannot be displayed.
  • Export Attributes: Select this option if you want to include the spatial, spectral, and texture attributes that were computed for each region in the output shapefile.

Export Raster Tab

  • Export Segmentation Image: Create a multispectral image in ENVI raster format that shows the regions defined by segmentation; each region is assigned the mean spectral values of all the pixels that belong to that region.

Advanced Export Tab

  • Export Attributes Image: Create a multi-layer image in ENVI raster format where each layer represents the values of a specific attribute. When you select this option, a new dialog appears that lets you select which attributes to export to an attribute image. The Selected Attributes column is initially empty. Select the attribute name(s) that you want to export in the Available Attributes column, and click the Add Selected Attribute button. If an attribute does not contain any valid values, then that attribute band will be assigned pixel values of 0. See Selecting Attributes for Classification for further details on using this dialog.

Auxiliary Export Tab

  • Export Processing Report: Create a report that summarizes the segmentation options, rules, and attributes that you used to classify the image.