ENVI Deep Learning 1.2 Release Notes

Refer to the Deep Learning Help for instructions on using the tools and API. Access the help by selecting Help > Contents from the ENVI menu bar. Then click "ENVI Deep Learning" in the table of contents on the left side of the help page.

System Requirements

ENVI Deep Learning 1.2 uses TensorFlow 2.4 and CUDA 11, both of which are included in the installation. System requirements are as follows:

  • Base software: ENVI 5.6.1 and the ENVI Deep Learning 1.2 module
  • Operating systems:
    • Windows 10 (Intel/AMD 64-bit)
    • Linux (Intel/AMD 64-bit, kernel 3.10.0 or higher, glibc 2.17 or higher)
  • Hardware:
    • NVIDIA graphics card with CUDA Compute Capability version 3.5 to 8.6. See the list of CUDA-enabled GPU cards. A minimum of 8 GB of GPU memory is recommended for optimal performance, particularly when training deep learning models.
    • NVIDIA GPU driver version 460.x or higher
    • A CPU with the Advanced Vector Extensions (AVX) instruction set. In general, CPUs released after 2011 support this instruction set.

To determine if your system meets the requirements for ENVI Deep Learning, start the Deep Learning Guide Map in the ENVI Toolbox. From the Deep Learning Guide Map menu bar, select Tools > Test Installation and Configuration.

What's New

ENVI Deep Learning 1.2 provides the capability to train and classify object detection models. Object detection can be used to locate features with similar spatial, spectral, and textural characteristics. This differs from previous versions of ENVI Deep Learning, which located features only on a pixel-by-pixel basis (referred to as pixel segmentation). That method is still available, but object detection is new to this release. Unlike pixel segmentation, object detection can extract objects that touch or overlap. ENVI uses the RetinaNet convolutional neural network (CNN) for object detection.

The following image shows an example of using object detection to locate ships. Red bounding boxes mark the locations of ships.

Object detection involves these steps (an API sketch follows the list):

  1. Label features in one or more images by drawing bounding boxes around them. Bounding boxes can be in the form of rectangle annotations or polygon regions of interest (ROIs).
  2. Pass the labeled images (called object detection rasters) to a trainer. The result is a trained model in HDF5 format.
  3. Classify the same or other rasters using the trained model. The result is a set of bounding boxes drawn around identified features. Each box belongs to a particular class.
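
If you prefer to script this workflow, a minimal sketch using ENVITasks might look like the following. The task names are those listed under Programming below; the property names, file names, and values are assumptions for illustration only, so check each task's help topic for the exact properties.

  ; Start the application
  e = ENVI()

  ; Step 1: build an object detection raster from a scene and its labeled ROIs
  buildTask = ENVITask('BuildObjectDetectionRasterFromROI')
  buildTask.INPUT_RASTER = e.OpenRaster('training_scene.dat')    ; assumed property name
  buildTask.INPUT_ROI = e.OpenROI('ship_labels.xml')             ; assumed property name
  buildTask.Execute

  ; Step 2: train a RetinaNet-based object detection model (the output is an HDF5 model)
  trainTask = ENVITask('TrainTensorFlowObjectModel')
  trainTask.TRAINING_RASTERS = [buildTask.OUTPUT_RASTER]         ; assumed property names
  trainTask.Execute

  ; Step 3: classify another raster with the trained model
  classifyTask = ENVITask('TensorFlowObjectClassification')
  classifyTask.INPUT_RASTER = e.OpenRaster('target_scene.dat')   ; assumed property name
  classifyTask.INPUT_MODEL = trainTask.OUTPUT_MODEL              ; assumed property name
  classifyTask.Execute

The classification output is the object classification vector (a set of class-labeled bounding boxes) that the post-processing task described under Programming can refine.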

A number of changes were implemented to support object detection:

  • The Deep Learning Guide Map provides a guided path to label, train, and classify object detection models.
  • The Deep Learning Labeling Tool has a new Project Type drop-down list that lets you set up an Object Detection or Pixel Segmentation project. If you select Object Detection, the Labeling Tool lets you draw rectangle annotations around features of interest. You can also import existing annotation files for labeling. The Labeling Tool automatically creates object detection rasters when you are finished with labeling. You can then proceed with training within the Labeling Tool.
  • Mask-based pixel segmentation models (ENVINet5 and ENVINetMulti) had to be explicitly initialized before training could occur, except when using the Labeling Tool to train the models. With object detection, model initialization occurs automatically without any user input.

New standalone tools are also available for object detection. Access them from the ENVI Toolbox or the Tools menu in the Guide Map:

  • Build Object Detection Raster From Annotation
  • Build Object Detection Raster From ROI
  • Postprocess Classification Vector
  • Train TensorFlow Object Model
  • TensorFlow Object Classification
  • View Object Detection Raster Labels

See the following topics in ENVI Deep Learning Help to learn more about object detection:

  • Object Detection
  • Object Detection Tutorial
  • Annotate Images for Object Detection

Programming

The following objects are new:

  • ENVIBoundingBoxSet: Add and manage bounding box information in object detection rasters, instead of drawing rectangle annotations on training rasters.
  • ENVIDeepLearningGeoJSONToROI: Convert GeoJSON code with bounding box information to polygon regions of interest (ROIs).
  • ENVIDeepLearningROIToGeoJSON: Convert polygon ROIs with bounding box information to GeoJSON code.
  • ENVIDeepLearningObjectDetectionRaster: Construct a lightweight ENVIDeepLearningRaster subclass from a file that can be used with ENVITasks in ENVI Deep Learning. It stores additional GeoJSON bounding-box information in its metadata; the bounding boxes are used for object detection. (See the sketch after this list.)
  • ENVITensorFlowObjectModel: Create a TensorFlow object detection model.
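
As a short, hedged example of how these objects fit together, the sketch below opens an existing object detection raster so that it can be passed to the tasks listed next. The constructor takes a file, as described above; exposing the GeoJSON bounding-box information through the standard METADATA property is an assumption.

  e = ENVI()

  ; Open a previously built object detection raster (imagery plus labels)
  labelRaster = ENVIDeepLearningObjectDetectionRaster('ships_labels.dat')

  ; The bounding boxes are stored as GeoJSON in the raster metadata
  print, labelRaster.METADATA   ; assumed property name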

The following tasks are new:

  • BuildObjectDetectionRasterFromAnnotation: Build an object detection raster from an input raster and an annotation file indicating features of interest.
  • BuildObjectDetectionRasterFromROI: Build an object detection raster from an input raster and an ROI file indicating features of interest.
  • PostProcessObjectClassification: Refine an object classification shapefile produced by the TensorFlowObjectClassification task (see the sketch after this list).
  • TensorFlowObjectClassification: Classify a raster using a trained object detection model.
  • TrainTensorFlowObjectModel: Train a TensorFlow model for object detection.
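
For example, the post-processing step could be chained onto a classification result as shown below; the task name comes from the list above, while the property name and input file are hypothetical placeholders.

  e = ENVI()

  ; Refine the object classification shapefile written by TensorFlowObjectClassification
  ppTask = ENVITask('PostProcessObjectClassification')
  ppTask.INPUT_VECTOR = e.OpenVector('ships_objects.shp')   ; assumed property name
  ppTask.Execute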

Fixed Issues

  • LEARN-741: The OK button disappeared when selecting a model in the "Deep Learning Randomize Training Parameters" tool.
  • LEARN-697: An "ENVI Deep Learning 1.1.3 is already installed" message was incorrectly displayed when trying to install 1.1.3.
  • LEARN-699: The number of training samples was limited to 32,767.
  • LEARN-820: A "Failed to execute script dlengine" Traceback error message was issued with supported graphics cards and driver versions.