
Finding Red Lights with Harris Deep Learning


It’s been almost a year, but I’ve finally found the time to train a neural network to find red traffic lights in imagery using our machine learning technology, which we call MEGA. In a post in late 2016 I covered example labeling and patch harvesting for the training process; today I’d like to discuss the training itself and the results!

After labeling images with examples of red lights, and many more examples of what is not a red light, the MEGA neural network is trained on patches clipped out of the imagery. During training, the convolutional network guesses which class each training patch belongs to; whether the guess is correct or incorrect, the weights of the network are then adjusted so that the model becomes more accurate as training continues. It can be thought of as a really fancy guess-and-check.
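The MEGA trainer itself isn’t something I can show here, but the guess-and-check idea can be sketched with a small patch classifier. The example below is a hypothetical PyTorch sketch (the class names, layer sizes, and optimizer settings are my own assumptions, not the actual MEGA implementation): the network guesses a label for each training patch, the loss measures how wrong the guess was, and the weights are nudged so the next guess is better.

```python
# Hypothetical sketch of patch-based training (not the actual MEGA trainer).
import torch
import torch.nn as nn

class PatchNet(nn.Module):
    """Tiny convolutional classifier for 31x31 RGB patches: red light vs. not-light."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 31 -> 15
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 15 -> 7
        )
        self.classifier = nn.Linear(32 * 7 * 7, 2)        # two classes: not light / red light

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

def train_step(model, optimizer, patches, labels):
    """One 'guess and check' pass: predict, compare to the label, adjust the weights."""
    optimizer.zero_grad()
    logits = model(patches)                               # the network's guess
    loss = nn.functional.cross_entropy(logits, labels)    # how wrong the guess was
    loss.backward()                                       # work out how to adjust the weights
    optimizer.step()                                      # apply the adjustment
    return loss.item()

model = PatchNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# patches: (batch, 3, 31, 31) float tensor; labels: (batch,) with 0 = not light, 1 = red light
```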

Once the network has been trained, you can give it completely new imagery that it has never seen before for classification. Below is the test image that we’ll use, which was NOT used in the training (that would be cheating).

[Image: the test scene used for classification]

This is the classification image for red lights, in which each pixel value is the probability (0.0–1.0) that the pixel belongs to a red light.

[Image: red-light classification heatmap, values 0.0–1.0]

And here is the heatmap overlaid on the image. You’ll notice that the edges are not classified: each step in the convolution over the image requires a full patch, so a border half a patch wide around the edge of the image is left unclassified. For this model, I used a square patch size of 31 pixels.
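Here is a hypothetical sketch of that sliding-window classification, assuming the 31-pixel patch size and the PatchNet model from the training sketch above (the function and variable names are mine, not the MEGA API). Each interior pixel gets the patch centred on it scored, and the 15-pixel border stays at zero because a full patch does not fit there.

```python
# Hypothetical sliding-window classification producing a per-pixel heatmap.
import numpy as np
import torch

def classify_image(model, image, patch=31):
    """Slide a patch-sized window over the image and record the red-light
    probability at each patch centre. Pixels within half a patch of the
    edge are left at 0 because a full patch does not fit around them."""
    half = patch // 2                                     # 15 pixels for a 31-pixel patch
    h, w, _ = image.shape
    heatmap = np.zeros((h, w), dtype=np.float32)
    model.eval()
    with torch.no_grad():
        for row in range(half, h - half):
            for col in range(half, w - half):
                window = image[row - half:row + half + 1,
                               col - half:col + half + 1]  # 31x31x3 patch
                x = torch.from_numpy(window).permute(2, 0, 1).unsqueeze(0).float()
                probs = torch.softmax(model(x), dim=1)
                heatmap[row, col] = probs[0, 1].item()     # probability of "red light"
    return heatmap
```

In practice a pass like this would be batched or run fully convolutionally rather than pixel by pixel, but the edge behavior is the same: the unclassified border is half the patch width.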

[Image: heatmap overlaid on the test scene]

The model correctly identifies all of the red lights in the scene, but it also picks up a couple of other features in the image that look similar. Specifically, it is seeing the red tail light on the black vehicle and the corner of an orange construction sign. To improve the model, these examples could be added to the “not light” class and the network re-trained.

In my experience, it is much less time-consuming to create a model that finds a target than a model that finds ONLY that target. It takes some time to weed out what we’ve been calling “confusers” from the classification.
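One common way to weed out confusers is to harvest the false positives from a classification run and feed them back in as extra “not light” training examples, a step often called hard-negative mining. The sketch below assumes the heatmap and patch size from the examples above; the threshold, the truth mask, and the function name are illustrative assumptions, not the MEGA workflow.

```python
# Hypothetical harvesting of "confuser" patches for re-training.
import numpy as np

def harvest_confusers(image, heatmap, truth_mask, patch=31, threshold=0.9):
    """Clip patches around high-scoring pixels that are NOT real red lights,
    so they can be added to the 'not light' class before re-training."""
    half = patch // 2
    confusers = []
    rows, cols = np.where((heatmap >= threshold) & (~truth_mask))
    for row, col in zip(rows, cols):
        window = image[row - half:row + half + 1, col - half:col + half + 1]
        if window.shape[:2] == (patch, patch):            # skip anything too close to the edge
            confusers.append(window)
    return confusers  # append these to the negative training patches, then re-train
```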

As far as accuracy goes, deep learning has an advantage over basic feature extraction here, because basic feature extraction will also pick up the red circle on the sign that reads "NO TURN ON RED".

If you are interested in finding out more about Harris Deep Learning technologies contact us at GeospatialInfo@NV5.com.