This task uses a trained deep learning ONNX model to perform inference on a raster, restricted to the regions that a grid model (patent pending) identifies as containing features of interest. The result is a classification image and a grid output vector. An optional output is a class activation raster whose pixel values represent the probability (0 to 1) of matching the feature of interest.

Example


; Start the application
e = ENVI()
 
; Open a raster for classification
; Update the following line with a valid raster path
RasterURI = 'RasterToClassify.dat'
Raster = e.OpenRaster(RasterURI)
 
 
; Select a trained pixel segmentation model
; Update the following line with a valid pixel segmentation model
PixelModelURI = 'pixelModel.envi.onnx'
PixelModel = ENVIDeepLearningOnnxModel(PixelModelURI)
 
; Select a trained Grid model
; Update the following line with a valid grid model
GridModelURI = 'gridModel.envi.onnx'
GridModel = ENVIDeepLearningOnnxModel(GridModelURI)
 
; Get the task from the catalog of ENVITasks
Task = ENVITask('DeepLearningOptimizedPixelClassification')
 
; Select task inputs
Task.INPUT_RASTER = Raster
Task.INPUT_PIXEL_MODEL = PixelModel
Task.INPUT_GRID_MODEL = GridModel
; Update based on model accuracy
Task.CONFIDENCE_THRESHOLD = 0.8
 
; Set task outputs
Task.OUTPUT_CLASSIFICATION_RASTER_URI = e.GetTemporaryFilename('.dat', /CLEANUP_ON_EXIT)
Task.OUTPUT_CLASS_ACTIVATION_RASTER_URI = e.GetTemporaryFilename('.dat', /CLEANUP_ON_EXIT)
Task.OUTPUT_VECTOR_URI = e.GetTemporaryFilename('.shp', /CLEANUP_ON_EXIT)
 
; Run the task
Task.Execute
 
; Add the outputs to the Data Manager
e.Data.Add, Task.OUTPUT_CLASSIFICATION_RASTER
e.Data.Add, Task.OUTPUT_CLASS_ACTIVATION_RASTER
e.Data.Add, Task.OUTPUT_VECTOR
 
; Display the result
View = e.GetView()
Layer1 = View.CreateLayer(Raster)
Layer2 = View.CreateLayer(Task.OUTPUT_CLASSIFICATION_RASTER)
Layer3 = View.CreateLayer(Task.OUTPUT_CLASS_ACTIVATION_RASTER)
Layer4 = View.CreateLayer(Task.OUTPUT_VECTOR)

Syntax


Result = ENVITask('DeepLearningOptimizedPixelClassification')

Input parameters (Set, Get): CONFIDENCE_THRESHOLD, CUDA_DEVICE_ID, ENHANCE_DISPLAY, INPUT_GRID_MODEL, INPUT_METADATA, INPUT_PIXEL_MODEL, INPUT_RASTER, OUTPUT_CLASS_ACTIVATION_RASTER_URI, OUTPUT_CLASSIFICATION_RASTER_URI, OUTPUT_VECTOR_URI, RUNTIME, VISUAL_RGB

Output parameters (Get only): OUTPUT_CLASS_ACTIVATION_RASTER, OUTPUT_CLASSIFICATION_RASTER, OUTPUT_VECTOR

Properties marked as "Set" are those that you can set to specific values. You can also retrieve their current values any time. Properties marked as "Get" are those whose values you can retrieve but not set.

Input Parameters


CONFIDENCE_THRESHOLD (optional)

Specify a floating-point threshold value between 0 and 1.0. Detected regions with a confidence score less than this value will be discarded. The default value is 0.2. Decreasing this value generally results in more classified regions throughout the scene. Increasing it results in fewer.
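For example, with a model pair that scores well on validation data, the threshold can be raised above the default to suppress weak detections (the 0.8 value is illustrative; tune it against your model's observed accuracy):

```idl
; Discard detections the grid model scores below 0.8 confidence
Task.CONFIDENCE_THRESHOLD = 0.8
```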

CUDA_DEVICE_ID (optional)

If the RUNTIME parameter is set to CUDA, specify the target GPU device ID. If a valid ID is provided, the classification task will execute on the specified CUDA-enabled GPU. If the ID is omitted or invalid, the system defaults to GPU device 0. Use this parameter to explicitly control GPU selection in multi-GPU environments.
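As a sketch, on a machine with two GPUs the task can be pinned to the second device (device IDs are zero-based):

```idl
; Run on the second CUDA-enabled GPU
Task.RUNTIME = 'CUDA'
Task.CUDA_DEVICE_ID = 1
```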

ENHANCE_DISPLAY (optional)

Specify whether to apply an additional small stretch to the processed data to suppress noise and enhance feature visibility. The optional stretch is effective for improving visual clarity in imagery acquired from aerial platforms or sensors with higher noise profiles.
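A minimal sketch, enabling the stretch for noisier aerial imagery:

```idl
; Apply the optional display stretch to suppress noise
Task.ENHANCE_DISPLAY = 1
```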

INPUT_GRID_MODEL (required)

Specify the trained ONNX model (.envi.onnx) that was designed for grid-based analysis to classify the INPUT_RASTER.

INPUT_METADATA (optional)

Specify an optional hash containing metadata that will be passed on and accessible to ONNX preprocessor and postprocessor functions.
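For illustration, key/value pairs can be passed through with an IDL HASH (the keys shown here are hypothetical; use whatever keys your preprocessor and postprocessor functions expect):

```idl
; Metadata made available to the ONNX pre/postprocessor functions
Task.INPUT_METADATA = HASH('sensor', 'example_sensor', 'tile_overlap', 32)
```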

INPUT_PIXEL_MODEL (required)

Specify the trained ONNX model (.envi.onnx) to use for pixel-level classification in the grid-detected cells.

INPUT_RASTER (required)

Specify the raster to classify.

OUTPUT_CLASS_ACTIVATION_RASTER_URI (optional)

Specify a string with the fully qualified filename and path of the associated OUTPUT_CLASS_ACTIVATION_RASTER. If you do not set this parameter, the class activation raster will not be created. You must set this parameter or the OUTPUT_CLASSIFICATION_RASTER_URI parameter. You can also set both.

OUTPUT_CLASSIFICATION_RASTER_URI (optional)

Specify a string with the fully qualified filename and path of the associated OUTPUT_CLASSIFICATION_RASTER. If you do not set this parameter, the classification raster will not be created. You must set this parameter or the OUTPUT_CLASS_ACTIVATION_RASTER_URI parameter. You can also set both.
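For example, to produce only the classification raster and skip the class activation raster, set just this parameter:

```idl
; Request the classification raster only
Task.OUTPUT_CLASSIFICATION_RASTER_URI = e.GetTemporaryFilename('.dat', /CLEANUP_ON_EXIT)
```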

OUTPUT_VECTOR_URI (optional)

Specify a string with the fully qualified path and filename for OUTPUT_VECTOR.

RUNTIME (optional)

Specify the execution environment for the classification task with one of these options:

  • CUDA: (Default) Uses NVIDIA GPU acceleration for optimal performance and faster processing. See also CUDA_DEVICE_ID for details on providing a device ID.
  • CPU: Ensures compatibility on systems without GPU support, but with reduced processing speeds.
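For example, to force CPU execution on a system without an NVIDIA GPU:

```idl
; Fall back to CPU processing
Task.RUNTIME = 'CPU'
```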

VISUAL_RGB (optional)

Specify whether to encode the output raster as a three-band RGB composite (red, green, blue) for color image processing. This ensures consistent band selection from ENVI display types (such as RGB, CIR, and pan) and supports integration of diverse data sources (such as MSI, panchromatic, and VNIR) without band mismatch.
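A minimal sketch enabling the RGB composite output:

```idl
; Encode the output as a three-band RGB composite
Task.VISUAL_RGB = 1
```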

Output Parameters


OUTPUT_CLASS_ACTIVATION_RASTER

This is a reference to the output class activation raster of filetype ENVI. It is a float raster with one band for each class, including the background class, whose values range from 0 to 1.
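As a sketch of working with the result, the per-class probabilities can be read into an array with ENVIRaster::GetData and thresholded (this assumes the background class occupies band 0, which this description does not guarantee; check your model's class order):

```idl
; Read the activation band for the first non-background class
Activation = Task.OUTPUT_CLASS_ACTIVATION_RASTER
ClassProb = Activation.GetData(BANDS=1)
; Mask of pixels matching the feature with probability > 0.5
Mask = ClassProb GT 0.5
```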

OUTPUT_CLASSIFICATION_RASTER

This is a reference to the output classification raster of filetype ENVI. It is a single-band byte raster whose values range from 0 to the number of classes.
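For example, the per-class pixel counts can be tallied with IDL's HISTOGRAM function:

```idl
; Count pixels assigned to each class value (0 through number of classes)
ClassData = Task.OUTPUT_CLASSIFICATION_RASTER.GetData()
Counts = HISTOGRAM(ClassData, MIN=0)
PRINT, Counts
```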

OUTPUT_VECTOR

This is a reference to the output vector.

Methods


Execute

Parameter

ParameterNames

See ENVI Help for details on these ENVITask methods.

Properties


DESCRIPTION

DISPLAY_NAME

NAME

REVISION

TAGS

See the ENVITask topic in ENVI Help for details.

Version History


Deep Learning 3.0

Introduced

Deep Learning 4.0

Renamed from TensorFlowOptimizedPixelClassification task.

Added parameters: CUDA_DEVICE_ID, ENHANCE_DISPLAY, INPUT_METADATA, RUNTIME, and VISUAL_RGB.

See Also


DeepLearningPixelClassification Task, TrainDeepLearningGridModel Task, TrainDeepLearningPixelModel Task