
NV5 Geospatial Blog


Bridge to Bridge (R to Python to IDL)

Anonym

Within ENVI and IDL there are many different classification schemes and functions that can be used to aid in analyzing your data (see the ENVI classification documentation). However, on occasion you may notice that we have not had a chance to code in your favorite one yet, or perhaps the method you wish to use is so cutting edge that it is a little ahead of our development cycle. If that is the case, there is a chance that the method you desire may be found within the ever-expanding R statistical package. As you may have read last week on the IDL Data Point blog, we are now able to call R through the use of the IDL to Python Bridge! This week, I wanted to show you another example of how this can be done and some of the tricks to get you going.

If you have not followed the steps on setting up your IDL, Python, and R environments yet, it may be a good idea to do that now. Once you are all up and running, we can get into the meat of this post. The purpose of this post is not to go into extensive detail on all of the nuances of these bridges, but rather to show a fun example of how to visualize a classification tree and point out a few pitfalls along the way.

To begin with, I simply pulled up some data, subsetted it, and did a manual classification. I chose four different classes: NPV, Veg, Urban, and Water.

; Start ENVI
e = ENVI(/CURRENT)
if ~OBJ_VALID(e) then e = ENVI()

; Open the test image
file1 = FILEPATH('qb_boulder_msi', ROOT_DIR=e.ROOT_DIR, $
  SUBDIRECTORY=['data'])
oRaster = e.OpenRaster(file1)

; Subset the image
oSubset = oRaster.Subset(SUB_RECT=[207,705,420,838])

; Get the offset for the imagery
xo = oSubset.METADATA['X START']
yo = oSubset.METADATA['Y START']

; Pull out the ROI data
Water  = oSubset.GetData(SUB_RECT=[300-xo,800-yo,312-xo,808-yo], INTERLEAVE='bip')
NPV    = oSubset.GetData(SUB_RECT=[242-xo,758-yo,249-xo,765-yo], INTERLEAVE='bip')
Urban1 = oSubset.GetData(SUB_RECT=[321-xo,736-yo,326-xo,740-yo], INTERLEAVE='bip')
Urban2 = oSubset.GetData(SUB_RECT=[363-xo,713-yo,366-xo,718-yo], INTERLEAVE='bip')
Veg    = oSubset.GetData(SUB_RECT=[369-xo,738-yo,372-xo,742-yo], INTERLEAVE='bip')
roi_data = LIST(Water, NPV, Urban1, Urban2, Veg)
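One easy place to slip up here is the Sub_Rect arithmetic: the ROI corners are specified in full-scene pixel coordinates, so the subset's starting offsets (xo, yo) must be subtracted before indexing into the subset. A minimal sketch of that same shift in plain Python (to_subset_coords is a hypothetical helper for illustration, not an ENVI routine):

```python
# The ROI rectangles are given in full-image pixel coordinates; subtract
# the subset origin (xo, yo) to get coordinates relative to the subset.
def to_subset_coords(rect, xo, yo):
    """Convert [x1, y1, x2, y2] from full-image to subset-relative pixels."""
    x1, y1, x2, y2 = rect
    return [x1 - xo, y1 - yo, x2 - xo, y2 - yo]

# The subset above starts at column 207, row 705 of the full scene.
print(to_subset_coords([300, 800, 312, 808], 207, 705))  # [93, 95, 105, 103]
```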

I then structured the data in a way that allows for easier ingestion into the R environment.

; Get the size of the output array
npts = 0
for i = 0, N_ELEMENTS(roi_data)-1 do $
  npts = npts + PRODUCT((SIZE(roi_data[i], /DIMENSIONS))[1:2])
training_data = INTARR(oSubset.NBANDS, npts)
class = STRARR(npts)
classes = ['Water', 'NPV', 'Urban', 'Urban', 'Veg']

; Build the training data
count = 0
for i = 0, N_ELEMENTS(roi_data)-1 do begin
  nSamples = PRODUCT((SIZE(roi_data[i], /DIMENSIONS))[1:2])
  training_data[*, count : count + nSamples - 1] = $
    REFORM(roi_data[i], oSubset.NBANDS, nSamples)
  class[count : count + nSamples - 1] = classes[i]
  count = count + nSamples
endfor

; Collect the input info
classes = STRJOIN(class, ',', /SINGLE)
band1 = training_data[0,*]
band2 = training_data[1,*]
band3 = training_data[2,*]
band4 = training_data[3,*]
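Conceptually, the loop above just stacks every ROI's pixels into one bands-by-samples table with a parallel vector of class labels. The same bookkeeping, sketched in plain Python (the pixel values here are made-up stand-ins, not the qb_boulder_msi data):

```python
# Each ROI holds per-pixel band vectors; concatenate them all into one
# sample table and keep a parallel list of class labels, much as the
# IDL loop fills training_data and class.
rois = {
    'Water': [[10, 12, 9, 7], [11, 13, 9, 8]],
    'Urban': [[80, 75, 70, 60]],
    'Veg':   [[40, 52, 33, 90]],
}

samples, labels = [], []
for name, pixels in rois.items():
    samples.extend(pixels)
    labels.extend([name] * len(pixels))

# Per-band columns, like band1..band4 in the IDL code
band1 = [s[0] for s in samples]
print(len(samples), band1)  # 4 [10, 11, 80, 40]
```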

Next comes the tricky part. I called the rpy2 module from inside Python and returned the robjects handle to IDL.

; Get Python ready for the new R DataFrame
!null = Python.Run('import rpy2.robjects as robjects')
robjects = Python.robjects

Once I had R all ready to go within Python, I wrote some simple R code on the fly within IDL that would allow me to visualize the different band thresholds that made up my manual image classification.

; Define the R function
train_rf = "robjects.r('''" + $
  "train_rf <- function(band1, band2, band3, band4, classes) { \n" + $
  "require(rpart) \n" + $
  "classes = c(unlist(strsplit(classes,','))) \n" + $
  "train_df = data.frame(band1, band2, band3, band4, classes) \n" + $
  "fit = rpart(classes ~ band1 + band2 + band3 + band4, method='class', data=train_df) \n" + $
  "plot(fit, uniform = TRUE, main = 'Classification Tree for Training Data') \n" + $
  "text(fit, use.n=TRUE, all=TRUE, cex=.8) \n" + $
  "}" + $
  "''')"

One possible hang-up here is that R must have the rpart package installed in order for this code to run. I installed it in the main R library directory (i.e. C:\Program Files\R\R-3.1.3\library) to make sure that any user on my machine could find the package. As long as rpart has been installed, the "require(rpart)" portion of the code should not give you any trouble.
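Note that classes crosses the bridge as a single comma-joined string (the STRJOIN in the IDL code), and the R function rebuilds the vector with strsplit, presumably because a scalar string travels through the IDL-Python-R hand-offs more simply than a string array. The round trip itself is easy to verify, here in plain Python:

```python
# IDL joins the label array with commas (STRJOIN); the R function splits
# it back apart (strsplit). The same round trip in Python:
labels = ['Water', 'Water', 'NPV', 'Urban', 'Veg']
joined = ','.join(labels)        # the single string handed across the bridge
restored = joined.split(',')     # what strsplit recovers on the R side

print(joined)               # Water,Water,NPV,Urban,Veg
print(restored == labels)   # True
```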

Finally, you will need to publish the newly defined train_rf function up to Python.

!null = Python.run(train_rf)

 

Once it has been recognized by Python, you can bring it into IDL and run it as if it were a native function.

train_rf = robjects.globalenv['train_rf']

 

; Call the R function from within IDL
result = train_rf(band1, band2, band3, band4, classes)

When you run the code, it should generate a plot of the classification tree, with each split labeled by the band threshold that separates the classes.
