
Deploy, Share, Repeat: AI Meets the Analytics Repository

Zachary Norman

The upcoming release of ENVI® Deep Learning 4.0 makes it easier than ever to import, deploy, and share AI models, including industry-standard ONNX models, using the integrated Analytics Repository.

Whether you're building deep learning models in PyTorch or TensorFlow, or using ENVI's native model creation tools, ENVI Deep Learning 4.0 streamlines the path from training to operational deployment. Even better: you can now publish models to a central repository, so your team or organization can easily discover and reuse them without emailing ZIP files or copying folders.

This blog walks you through what ONNX is, how ENVI supports it, and how you can turn a trained model into a shareable, repeatable, deployable AI asset inside the ENVI environment.

Cloud detection in ENVI being done by ONNX models behind the scenes.

ONNX in a Nutshell

ONNX (Open Neural Network Exchange) is an open-source format for representing machine learning and deep learning models. Think of it as the PDF for AI models, a universal file format that lets you move models seamlessly between frameworks.

Instead of being locked into a specific framework, ONNX acts as a bridge between training and deployment. Train in the framework of your choice, export to ONNX format, and deploy in ENVI or other environments.
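For instance, a model trained in PyTorch can be exported to ONNX in just a few lines. Here is a minimal sketch with a stand-in network; the layer stack, input shape, and file name are placeholders for your own model, not anything ENVI provides.

  import torch
  import torch.nn as nn

  # Stand-in for your trained network; swap in your real model
  model = nn.Sequential(
      nn.Conv2d(3, 16, kernel_size=3, padding=1),
      nn.ReLU(),
      nn.Conv2d(16, 2, kernel_size=1),   # e.g. two classes: cloud / not cloud
  )
  model.eval()

  # Dummy input matching the training shape: (batch, channels, height, width)
  dummy_input = torch.randn(1, 3, 512, 512)

  # Write an .onnx file that can then be imported into ENVI Deep Learning 4.0
  torch.onnx.export(
      model,
      dummy_input,
      "cloud_mask_model.onnx",           # placeholder file name
      input_names=["image"],
      output_names=["scores"],
      opset_version=17,
  )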

With ENVI Deep Learning 4.0, you can import ONNX models directly. This means:

  • There's no need to retrain inside ENVI.
  • You can operationalize models built by data science teams or external collaborators.
  • You can integrate ONNX models into your image processing workflows just like native ENVI-trained models.

The result? Flexibility, collaboration, and usability across teams, all powered by the Analytics Repository.

 

ONNX, Meet ENVI

Bringing ONNX models into ENVI is straightforward with ENVI’s interface and configuration tools.

 

The power of ONNX lies in flexibility: train in any framework (like PyTorch or TensorFlow) and bring your model into ENVI. But with flexibility comes variation. For example, your model might:

  • Expect channels-first inputs instead of channels-last
  • Use z-score normalization instead of simple 0-1 scaling (dividing byte values by 255)

To make sure your model behaves as expected, ENVI gives you the ability to customize input and output handling. All you need are two lightweight Python scripts: one to prepare input data and another to map the model outputs to ENVI’s expectations. Here’s an example:

 

Example pre and post processing scripts to plug in a custom ONNX model to ENVI.
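The exact hook names and signatures ENVI expects aren't reproduced here, but as a rough illustration of the kind of logic those two scripts contain, a channels-first, z-scored model might be handled like this (preprocess, postprocess, and the band statistics below are hypothetical):

  import numpy as np

  def preprocess(image):
      # Hypothetical input hook: a channels-last tile comes in,
      # a channels-first, z-scored batch goes out to the model.
      means = np.array([412.0, 506.0, 398.0], dtype=np.float32)  # from training data
      stds = np.array([210.0, 225.0, 198.0], dtype=np.float32)
      image = (image.astype(np.float32) - means) / stds
      # (rows, cols, bands) -> (1, bands, rows, cols)
      return np.expand_dims(np.transpose(image, (2, 0, 1)), axis=0)

  def postprocess(scores):
      # Hypothetical output hook: collapse per-class scores into a
      # single class-index raster that ENVI can display.
      # scores: (batch, classes, rows, cols) -> (rows, cols)
      return np.argmax(scores[0], axis=0).astype(np.uint8)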

 

Other details like input size, number of bands, classes, and class names/colors are simple to include as part of the setup.

Don’t Let Bad Inputs Ruin a Good Model

Importing an ONNX model into ENVI is a big step toward operational AI, but your results will only be as good as the data you feed in.

Many deep learning models are trained on very specific data formats. That might mean byte-scaled imagery (values from 0 to 255) or surface reflectance data (values from 0 to 10,000 or 0.0 to 1.0). Some models expect z-score normalization; others assume certain bands or a specific band order. Things get complicated when you pair those expectations with real-world satellite imagery, which can vary widely.

That’s why aligning your inputs with training data is critical, and ENVI gives you the tools to make it easy. Using the ENVI Modeler, you can do the following (a plain-Python sketch of a few of these steps appears after the list):

  • Normalize or scale data to match training inputs
    • Example: Apply a stretch, perform atmospheric correction
  • Select and reorder bands as needed
  • Resize or resample imagery to the expected dimensions
    • Example: Match resolution of training data
  • Handle no data values or apply clipping thresholds
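Outside the Modeler, a couple of those steps look something like the following in plain numpy. The no-data flag, scale factor, and function name are purely illustrative, not part of any ENVI API.

  import numpy as np

  def match_training_inputs(reflectance, nodata=-9999):
      # Illustrative versions of a few of the steps above: handle
      # no-data pixels and scale to the range the model was trained on.
      data = reflectance.astype(np.float32)

      # Remember which pixels carry the no-data value
      mask = data == nodata

      # Scale surface reflectance (0-10,000) to the 0.0-1.0 range used
      # in training, clipping anything outside it
      data = np.clip(data / 10000.0, 0.0, 1.0)

      # Give no-data pixels a neutral value so the model sees finite numbers
      data[mask] = 0.0
      return data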

For example, here’s how to prepare a 4-band, surface reflectance SkySat image for an aircraft detection ONNX model (a rough Python sketch of the steps follows the list):

  1. Extract RGB bands from the 4-band input image
  2. Apply a stretch for a clean, byte-scaled representation
  3. Run the preprocessed imagery through the AI model and return results.
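In the Modeler this is a visual workflow (shown below). As a rough plain-Python equivalent of the same three steps, here the open-source onnxruntime package stands in for the inference that ENVI performs once the model is imported; the file name, chip size, and input layout are placeholders.

  import numpy as np
  import onnxruntime as ort

  def prepare_skysat(image):
      # Steps 1-2: pull RGB from a 4-band (blue, green, red, NIR) SkySat
      # array and apply a simple 2% linear stretch to byte values.
      rgb = image[..., [2, 1, 0]].astype(np.float32)     # step 1: RGB bands
      lo, hi = np.percentile(rgb, (2, 98))               # step 2: 2% stretch
      scaled = np.clip((rgb - lo) / (hi - lo), 0.0, 1.0) * 255.0
      return scaled.astype(np.uint8)

  # Step 3: run the prepared imagery through the ONNX model. The batch and
  # channel layout your model expects depends on how it was trained (see
  # the pre-processing discussion above); channels-last is assumed here.
  session = ort.InferenceSession("aircraft_detector.onnx")   # placeholder file
  chip = prepare_skysat(np.random.randint(0, 10000, (512, 512, 4)))
  feed = {session.get_inputs()[0].name: chip[np.newaxis].astype(np.float32)}
  detections = session.run(None, feed)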

ENVI Modeler workflow that prepares SkySat imagery for analysis.

These preprocessing steps bridge the gap between raw imagery and AI-ready input, setting your ONNX model up for success. Pair them with basic guides, documentation, or other information about supported data types, and you’ll empower anyone in your organization to get reliable results.

Publish and Share with the Analytics Repository

Once your model and preprocessing workflow are ready, publishing them to the Analytics Repository takes just a few clicks. From there, your models and workflows are:

  • Discoverable by colleagues and teams
  • Reusable across projects without duplication
  • Repeatable for consistent AI-driven results

Here’s a look at the publish dialog in the ENVI Modeler, where both the processing workflow and ONNX model are ready to share.

The publish dialog in the ENVI Modeler, where you can publish ONNX models that have been configured to run in ENVI.

Wrapping Up

With ENVI Deep Learning 4.0, you can:

  • Import ONNX models with ease
  • Customize data handling to match model expectations
  • Preprocess inputs with ENVI’s powerful image tools
  • Publish and share AI workflows in the Analytics Repository

It’s never been easier to go from a trained model to a deployed, shareable AI asset.

Got questions or want to learn more? Feel free to reach out to us.

 

Happy AI-ing!