
Using Deep Learning for Feature Extraction

Anonym

In August, I talked about how to pull features out of images using an object's known spatial properties. Specifically, in that post I used rule-based feature extraction to pull stoplights out of an image.

Today, I’d like to look into a new way of doing feature extraction using deep learning technology. With our deep learning tools developed in-house, we can use examples of target data to find similar objects in other images.

To train the system, the deep learning network needs three different kinds of examples to learn what to look for: targets, non-targets, and confusers. These examples are patches cut out of similar images, and the patches are all the same size. For this exercise, I've picked a size of 50 by 50 pixels.
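To make the patch idea concrete, here is a minimal patch-cutting sketch in Python. The image path, the labeled center coordinates, and the helper name are hypothetical stand-ins for whatever labeling workflow you use; only the 50 by 50 patch size comes from the text above.

```python
# A minimal patch-cutting sketch (hypothetical paths and coordinates),
# assuming labeled object centers are already known.
import numpy as np
from PIL import Image

PATCH = 50  # patch edge length in pixels, matching the 50 x 50 choice above

def cut_patch(image: np.ndarray, cx: int, cy: int, size: int = PATCH) -> np.ndarray:
    """Cut a size x size patch centered on (cx, cy); raise if it falls off the edge."""
    half = size // 2
    top, left = cy - half, cx - half
    if top < 0 or left < 0 or top + size > image.shape[0] or left + size > image.shape[1]:
        raise ValueError("patch extends past the image border")
    return image[top:top + size, left:left + size]

# Example usage with a hypothetical street scene and labeled light center:
img = np.asarray(Image.open("street_scene.jpg"))   # H x W x 3 array
target_patch = cut_patch(img, cx=412, cy=156)      # one 50 x 50 "target" example
```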

The first patch type is actual target data – I’ll be looking for illuminated traffic lights. For the model to work well, we’ll need examples with different kinds of traffic signals, lighting conditions, and camera angles. This will help the network generalize what the object looks like.

Next, we’ll need negative data, or data that does not contain the object. This includes the areas surrounding the target and other features likely to appear in the image background. For traffic lights, that means cars, streetlights, road signs, foliage, and so on.

For the final patch type, I went through some images and marked things that may confuse the system. These are called confusers: objects with a similar size, color, and/or shape to the target. In our case, this could be other signals like red arrows or a “don’t walk” hand. I’ve also included some bright road signs and a distant stop sign.

Once we have all of these patches, we can use our machine learning tool, MEGA, to train a neural network that identifies similar objects in other images.
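MEGA itself is a proprietary NV5 tool, so its API isn't shown here. As a rough stand-in for what this training step looks like, the sketch below fits a small three-class convolutional network on 50 by 50 patches with Keras; the patches/ directory layout and every hyperparameter are assumptions for illustration, not MEGA's behavior.

```python
# Illustrative stand-in only: MEGA's API is proprietary, so this sketch
# trains a comparable three-class CNN on 50 x 50 patches with Keras.
# Directory layout and hyperparameters are assumptions, not MEGA defaults.
import tensorflow as tf

# Patches sorted into confuser/, non_target/, and target/ subfolders
# (assumed layout); Keras infers integer labels in alphabetical folder order.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "patches/", image_size=(50, 50), batch_size=32)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(50, 50, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(3),  # one logit per class: confuser, non_target, target
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```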

Do note that I created many more patches than the ones shown here. With more, and more diverse, examples, MEGA has a better chance of accurately distinguishing target from non-target in an image.

In our case, there are only three possible outcomes as we look through the image: light, not-light, and looks-like-a-light. If you have many different objects in your scene, you can produce something closer to a classification image, since MEGA can identify as many object classes as you like. To extend this idea, we could look for red lights, green lights, street lights, lane markers, or other cars. (This is a simple example of how deep learning could be used in autonomous cars!)
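As a rough illustration of that scanning step, the sketch below slides a 50 by 50 window across a frame and keeps the windows the stand-in classifier calls targets. It assumes the `model` and `img` from the earlier sketches; the stride is an arbitrary choice.

```python
# Sliding-window scoring sketch, assuming `model` and `img` from the
# earlier sketches; the stride value is an assumption for illustration.
import numpy as np

STRIDE = 25
# Class order matches the alphabetical folder order Keras inferred above
CLASSES = ["confuser", "non_target", "target"]

detections = []
for top in range(0, img.shape[0] - 50 + 1, STRIDE):
    for left in range(0, img.shape[1] - 50 + 1, STRIDE):
        window = img[top:top + 50, left:left + 50]
        logits = model.predict(window[np.newaxis].astype("float32"), verbose=0)
        if CLASSES[int(np.argmax(logits))] == "target":
            detections.append((left, top))  # window corner flagged as a light

print(f"{len(detections)} candidate light windows found")
```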

To learn more about MEGA and what it can do in your analysis stack, contact our Custom Solutions Group! In my next post, we’ll look at the output from the trained neural network and analyze the results.
