
Optimizing Max Kernel Operation in IDL

Anonym

I found this optimization question on the comp.lang.idl-pvwave newsgroup and decided to give it a try. The question was how to implement an algorithm that replaces every element of a 2-D array with its neighborhood maximum. In this case the neighborhood size was 101x101 (i.e., offsets -50 to +50) and the array size was 3200x3248. The nested FOR loop approach looks like the following code snippet.

  ; Generate a random test array and note its dimensions
  data = randomu(seed,3200,3248)
  dim = size(data,/dimension)
  nx = dim[0]
  ny = dim[1]
 
  t0 = tic('Nested FOR')
  result2 = data
  for i=0,nx-1 do begin
    for j=0,ny-1 do begin
      ; The > and < operators clamp the window indices to the array
      ; bounds, so the neighborhood shrinks near the edges
      result2[i,j] = max(data[(i-50)>0:(i+50)<(nx-1),(j-50)>0:(j+50)<(ny-1)])
    endfor
  endfor
  toc,t0
 

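As an aside, the subscript expression above already leans on IDL's maximum (>) and minimum (<) operators to clamp the window to the array bounds. A quick illustration of that clamping (the index value here is purely for demonstration):

  ; With i = 10, the window start i-50 is clamped up to 0, while the
  ; window end i+50 is left alone because it is well below nx-1
  i = 10
  print, (i-50)>0        ; prints 0 rather than -40
  print, (i+50)<(nx-1)   ; prints 60
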
My first thought was to use that same > operator, which returns the maximum of its two arguments. It operates on whole arrays, so in conjunction with the SHIFT function it returns the larger of two neighbors at every pixel. The other trick is that, since we are looking for a 101x101 neighborhood maximum, we can combine smaller neighborhood maxima in a structured way to build up to exactly 101x101. The code I ended up with after some trial and error is the following.

  t0 = tic('Iterative >')
  ; Using SHIFT and > in an iterative way.
  ; Pad the array by 50 on each side with its minimum value so the
  ; padded border can never win a neighborhood maximum.
  padded = replicate(min(data),size(data,/dimension)+100)
  padded[50,50] = data
  ; Build up the running maximum along X; the trailing comments give the
  ; offsets covered relative to each output pixel.
  tmp3 = shift(padded,1,0) > padded > shift(padded,-1,0)                        ; -1..+1
  tmp9 = shift(tmp3,3,0) > tmp3 > shift(tmp3,-3,0)                              ; -4..+4
  tmp27 = shift(tmp9,9,0) > tmp9 > shift(tmp9,-9,0)                             ; -13..+13
  tmp81 = shift(tmp27,27,0) > tmp27 > shift(temporary(tmp27),-27,0)             ; -40..+40
  tmp99 = shift(tmp9,44,0) > temporary(tmp81) > shift(temporary(tmp9),-44,0)    ; -48..+48
  tmp101 = shift(tmp3,49,0) > temporary(tmp99) > shift(temporary(tmp3),-49,0)   ; -50..+50
 
  ; Same for Y-dim
  tmp3 = shift(tmp101,0,1) > tmp101 > shift(temporary(tmp101),0,-1)
  tmp9 = shift(tmp3,0,3) > tmp3 > shift(tmp3,0,-3)
  tmp27 = shift(tmp9,0,9) > tmp9 > shift(tmp9,0,-9)
  tmp81 = shift(tmp27,0,27) > tmp27 > shift(temporary(tmp27),0,-27)
  tmp99 = shift(tmp9,0,44) > temporary(tmp81) > shift(temporary(tmp9),0,-44)
  tmp101 = shift(tmp3,0,49) > temporary(tmp99) > shift(temporary(tmp3),0,-49)
 
  ; Crop the padding off to get back to the original array extent
  result1 = (temporary(tmp101))[50:50+nx-1,50:50+ny-1]
  toc,t0
 

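To convince yourself that the cascade of shifted > operations really is a plain running maximum, here is a minimal 1-D sketch (not from the original post; a, t3 and t9 are illustrative names) that checks the first two steps against a brute-force window maximum at an interior sample, away from SHIFT's wrap-around edges:

  ; t3 is a 3-wide running max (offsets -1..+1), and t9 reuses it to
  ; become a 9-wide running max (offsets -4..+4)
  a = randomu(seed, 20)
  t3 = shift(a,1) > a > shift(a,-1)
  t9 = shift(t3,3) > t3 > shift(t3,-3)
  ; Compare an interior sample against the brute-force 9-point maximum
  print, t9[10] eq max(a[6:14])   ; prints 1

In the full version the same idea is applied with shift steps of 1, 3, 9, 27, 44 and 49, first along X and then along Y, which builds the 101x101 window, and the 50-pixel pad of min(data) takes care of the edges: windows that extend past the original data only see values that can never win the maximum, which is why the result matches the clamped FOR loop exactly.
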
I didn’t say that optimized code always looks pretty; the goal here is to run fast. Let's add a quick comparison to make sure the two results are equivalent.

 
  print, array_equal(result1,result2) ? 'Results are matching' : 'SOMETHING went wrong'
 

Finally, here are the timings. The speed-up is impressive: the execution time drops from 189.4 seconds to 1.4 seconds, and the results are identical:

  % Time elapsed Nested FOR: 189.39470 seconds.
  % Time elapsed Iterative >: 1.4241931 seconds.
  Results are matching
 
 
 