
Optimizing Max Kernel Operation in IDL

Anonym

I found this optimization question on the comp.lang.idl-pvwave newsgroup and decided to give it a try. The question was to implement an algorithm that replaces every element of a 2-D array with the maximum value in its neighborhood. In this case the neighborhood size was 101x101 (i.e., -50 to +50 in each dimension), and the array size was 3200x3248. The straightforward nested FOR loop approach looks like the following code snippet.

  ; Generate test data and get its dimensions
  data = randomu(seed,3200,3248)
  dim = size(data,/dimension)
  nx = dim[0]
  ny = dim[1]
 
  t0 = tic('Nested FOR')
  result2 = data
  for i=0,nx-1 do begin
    for j=0,ny-1 do begin
      ; MAX over the 101x101 neighborhood, clipped to the array bounds with > and <
      result2[i,j] = max(data[(i-50)>0:(i+50)<(nx-1),(j-50)>0:(j+50)<(ny-1)])
    endfor
  endfor
  toc,t0
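
To get a sense of why this is slow: ignoring the clipping at the array edges, the inner MAX call visits roughly 3200 x 3248 x 101 x 101 ≈ 1.06 x 10^11 array elements over the course of the two loops.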
 

My first thought was to use the > operator, which returns the larger of its two operands. It works element-wise on arrays (for example, [1, 5, 2] > [4, 3, 0] yields [4, 5, 2]), so in conjunction with the SHIFT function it returns the larger of two neighbors. The other trick is that, since we are looking for a 101x101 neighborhood maximum, we can combine smaller neighborhood maxima in a structured way to build up the exact 101x101 neighborhood size. The code that I ended up with after some trial and error was the following.

  t0 = tic('Iterative >')
  ; Using SHIFT and > in an iterative way.
  ; Pad by 50 on every side with the global minimum: pad values never win a
  ; > comparison, and the 50-element border keeps SHIFT's wrap-around out of
  ; the data region.
  padded = replicate(min(data),size(data,/dimension)+100)
  padded[50,50] = data
  ; Build up the window in X; each line combines three shifted copies.
  tmp3 = shift(padded,1,0) > padded > shift(padded,-1,0)                        ; offsets -1..+1
  tmp9 = shift(tmp3,3,0) > tmp3 > shift(tmp3,-3,0)                              ; offsets -4..+4
  tmp27 = shift(tmp9,9,0) > tmp9 > shift(tmp9,-9,0)                             ; offsets -13..+13
  tmp81 = shift(tmp27,27,0) > tmp27 > shift(temporary(tmp27),-27,0)             ; offsets -40..+40
  tmp99 = shift(tmp9,44,0) > temporary(tmp81) > shift(temporary(tmp9),-44,0)    ; offsets -48..+48
  tmp101 = shift(tmp3,49,0) > temporary(tmp99) > shift(temporary(tmp3),-49,0)   ; offsets -50..+50
 
  ; Same for Y-dim
  tmp3 = shift(tmp101,0,1) > tmp101 > shift(temporary(tmp101),0,-1)
  tmp9 = shift(tmp3,0,3) > tmp3 > shift(tmp3,0,-3)
  tmp27 = shift(tmp9,0,9) > tmp9 > shift(tmp9,0,-9)
  tmp81 = shift(tmp27,0,27) > tmp27 > shift(temporary(tmp27),0,-27)
  tmp99 = shift(tmp9,0,44) > temporary(tmp81) > shift(temporary(tmp9),0,-44)
  tmp101 = shift(tmp3,0,49) > temporary(tmp99) > shift(temporary(tmp3),0,-49)
 
  ; Trim the 50-pixel padding back off to recover the original 3200x3248 extent
  result1 = (temporary(tmp101))[50:50+nx-1,50:50+ny-1]
  toc,t0
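
To see how these particular shift distances add up to an exact 101-point window, here is a minimal 1-D sketch of the same composition. It is only an illustration: the 1000-element test array and the m* variable names are mine, but the offsets (1, 3, 9, 27, 44, 49) match the code above.

  ; 1-D illustration: same staged composition, checked against brute force
  n = 1000
  x = randomu(seed, n)
  pad = replicate(min(x), n+100)   ; pad by 50 on each side with the minimum
  pad[50] = x
 
  m3   = shift(pad,  1) > pad > shift(pad,  -1)    ; offsets -1..+1   (width 3)
  m9   = shift(m3,   3) > m3  > shift(m3,   -3)    ; offsets -4..+4   (width 9)
  m27  = shift(m9,   9) > m9  > shift(m9,   -9)    ; offsets -13..+13 (width 27)
  m81  = shift(m27, 27) > m27 > shift(m27, -27)    ; offsets -40..+40 (width 81)
  m97  = shift(m9,  44) > m81 > shift(m9,  -44)    ; offsets -48..+48 (width 97)
  m101 = shift(m3,  49) > m97 > shift(m3,  -49)    ; offsets -50..+50 (width 101)
  fast = (temporary(m101))[50:50+n-1]
 
  ; Brute-force 101-point running maximum for comparison
  slow = x
  for i=0,n-1 do slow[i] = max(x[(i-50)>0:(i+50)<(n-1)])
  print, array_equal(fast, slow) ? '1-D sketch matches' : 'SOMETHING went wrong'

Each stage roughly triples the reach of the previous one (3, 9, 27, 81 points), and the final two stages reuse the width-9 and width-3 results at offsets of 44 and 49 to land on exactly ±50.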
 

I never said that optimized code always looks pretty, but the goal here is to run fast. Let's add a quick comparison to make sure the two approaches produce equivalent results.

 
  print, array_equal(result1,result2) ? 'Results are matching' : 'SOMETHING went wrong'
 

Finally, here are the timings. The speed-up is impressive: execution time dropped from 189.4 seconds to 1.4 seconds, and the results are identical:

  % Time elapsed Nested FOR: 189.39470 seconds.
  % Time elapsed Iterative >: 1.4241931 seconds.
  Results are matching
 
 
 