This task uses a trained Deep Learning ONNX model to perform object detection classification on a raster. The output is a shapefile of bounding boxes for each class.

Example


; Start the application
e = ENVI()
 
; Select a raster to classify.
; Update the following line with a valid raster path
RasterFile = 'RasterToClassify.dat'
Raster = e.OpenRaster(RasterFile)
 
; Select a trained model.
; Update the following line with a valid model path
ModelFile = 'objectModel.envi.onnx'
Model = ENVIDeepLearningOnnxModel(ModelFile)
 
; Get the task from the catalog of ENVITasks
Task = ENVITask('DeepLearningObjectClassification')
 
; Select task inputs
Task.INPUT_RASTER = Raster
Task.INPUT_MODEL = Model
 
; Adjust based on model accuracy
Task.CONFIDENCE_THRESHOLD = 0.7
 
; Run the task
Task.Execute
 
; Get the classification vector output
Result = Task.OUTPUT_VECTOR
 
; Add data to the Data Manager
e.Data.Add, Raster
e.Data.Add, Result
 
; Access the view
View = e.GetView()
 
; Create the layers
Layer1 = View.CreateLayer(Raster)
Layer2 = View.CreateLayer(Result)

Syntax


Result = ENVITask('DeepLearningObjectClassification')

Input parameters (Set, Get): CLASS_FILTER, CONFIDENCE_THRESHOLD, CUDA_DEVICE_ID, ENHANCE_DISPLAY, INPUT_METADATA, INPUT_MODEL, INPUT_RASTER, IOU_THRESHOLD, OUTPUT_VECTOR_URI, RUNTIME, VISUAL_RGB

Output parameters (Get only): OUTPUT_VECTOR

Properties marked as "Set" are those that you can set to specific values. You can also retrieve their current values at any time. Properties marked as "Get" are those whose values you can retrieve but not set.

Input Parameters


CLASS_FILTER (optional)

Specify the class labels to exclude from the output classification results. Detections belonging to the listed classes are filtered out, producing a more targeted output.
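
For example, continuing from the Example above, a short sketch like the following excludes two classes from the output vector. The class names are placeholders; they must match labels defined in the trained model.

; Exclude these classes from the output
; (class names shown here are placeholders)
Task.CLASS_FILTER = ['Vehicle', 'Building']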

CONFIDENCE_THRESHOLD (optional)

Specify a floating-point threshold value between 0 and 1.0. Bounding boxes with a confidence score less than this value will be discarded before applying the IOU_THRESHOLD. The default value is 0.2. Decreasing this value generally results in more classification bounding boxes throughout the scene. Increasing it results in fewer classification bounding boxes.

CUDA_DEVICE_ID (optional)

If the RUNTIME parameter is set to CUDA, specify the target GPU device ID. If a valid ID is provided, the classification task will execute on the specified CUDA-enabled GPU. If the ID is omitted or invalid, the system defaults to GPU device 0. Use this parameter to explicitly control GPU selection in multi-GPU environments.
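
Continuing from the Example above, the sketch below targets the second GPU in a multi-GPU system. It assumes the runtime is given as the string 'CUDA' and the device ID as a zero-based integer index; check the task's Parameter metadata for the exact data types.

; Run on the second CUDA-enabled GPU
; (assumes zero-based integer device IDs)
Task.RUNTIME = 'CUDA'
Task.CUDA_DEVICE_ID = 1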

ENHANCE_DISPLAY (optional)

Specify whether to apply an additional small stretch to the processed data to suppress noise and enhance feature visibility. The optional stretch is effective for improving visual clarity in imagery acquired from aerial platforms or sensors with higher noise profiles.
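
For example, assuming the parameter accepts a standard boolean value, the optional stretch can be enabled like this:

; Apply the additional stretch to suppress noise
Task.ENHANCE_DISPLAY = 1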

INPUT_METADATA (optional)

Specify an optional hash containing metadata that will be passed on and accessible to ONNX preprocessor and postprocessor functions.
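
For example, a sketch like the following passes custom key/value pairs through to the preprocessor and postprocessor functions. The keys and values shown are hypothetical and only meaningful to your own functions.

; Pass custom metadata to the ONNX pre/postprocessor functions
; (keys and values shown here are hypothetical)
Task.INPUT_METADATA = HASH('sensor', 'aerial', 'gsd_meters', 0.3)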

INPUT_MODEL (required)

Specify the trained ONNX model (.envi.onnx) to use for object detection classification of the INPUT_RASTER.

INPUT_RASTER (required)

Specify the raster to classify.

IOU_THRESHOLD (optional)

Specify a floating-point value between 0 and 1.0, indicating the Non-Maximum Suppression Intersection over Union (NMS IOU) value. This is a Deep Learning object detection parameter that reduces detection clustering by pruning predicted bounding boxes that have high IOU overlap with previously selected boxes. The default value is 0.5. Increasing this value results in more bounding boxes around identified features. Decreasing the value results in fewer bounding boxes.
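
For example, to prune overlapping detections more aggressively than the default (the value below is illustrative):

; Suppress boxes whose IOU with a selected box exceeds 0.3
Task.IOU_THRESHOLD = 0.3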

OUTPUT_VECTOR_URI (optional)

Specify a string with the fully qualified filename and path for OUTPUT_VECTOR.
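
For example, one option is to let ENVI generate a temporary filename for the shapefile; alternatively, provide any fully qualified path as a string.

; Write the output shapefile to a temporary file
Task.OUTPUT_VECTOR_URI = e.GetTemporaryFilename('shp')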

RUNTIME (optional)

Specify the execution environment for the classification task with one of these options (a brief example follows the list):

  • CUDA: (Default) Uses NVIDIA GPU acceleration for optimal performance and faster processing. See also CUDA_DEVICE_ID for details on providing a device ID.
  • CPU: Ensures compatibility on systems without GPU support, but with reduced processing speeds.
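
For example, to force CPU execution on a system without a supported GPU (the option is assumed to be passed as a string):

; Fall back to CPU processing
Task.RUNTIME = 'CPU'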

VISUAL_RGB (optional)

Specify whether to encode the output raster as a three-band RGB composite (red, green, blue) for color image processing. This ensures consistent band selection from ENVI display types (such as RGB, CIR, and pan) and supports integration of diverse data sources (such as MSI, panchromatic, and VNIR) without band mismatch.
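
For example, assuming the parameter accepts a standard boolean value:

; Encode the processed data as a three-band RGB composite
Task.VISUAL_RGB = 1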

Output Parameters


OUTPUT_VECTOR

This is the output shapefile with the classified features.
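
Once the task has run, the location of the shapefile on disk can be read from the vector's URI property, as in this short sketch that continues the Example above:

; Report where the output shapefile was written
Print, Result.URI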

Methods


Execute

Parameter

ParameterNames

See ENVI Help for details on these ENVITask methods.

Properties


DESCRIPTION

DISPLAY_NAME

NAME

REVISION

TAGS

See the ENVITask topic in ENVI Help for details.

Version History


Deep Learning 1.2

Introduced

Deep Learning 4.0

Renamed from the TensorFlowObjectClassification task.

Added parameters: CLASS_FILTER, CUDA_DEVICE_ID, ENHANCE_DISPLAY, INPUT_METADATA, RUNTIME, and VISUAL_RGB.

See Also


TrainDeepLearningObjectModel Task, ENVIDeepLearningObjectDetectionRaster, BuildObjectDetectionRasterFromAnnotation Task, DeepLearningGridClassification Task, DeepLearningPixelClassification Task, DeepLearningOptimizedObjectClassification Task, BuildDeepLearningRaster Task