
Advanced Analytics for Rooftop Assessment with LiDAR


Over the past several decades, remote sensing processes have been refined into the modern analysis tools we use today to create information products from satellite, airborne, and UAS collection platforms. Historically, the data coming off these platforms have been 2D imagery, and our current processing tools provide the analysis capabilities needed to solve universal geospatial problems.

Moving beyond traditional 2D image collects, modern platforms also enable the collection of LiDAR (Light Detection and Ranging) data. LiDAR is a remote sensing technology that uses light pulses to measure the distance between a sensor and reflecting objects such as the Earth’s surface, buildings, and trees. The result is a collection of data points called a “point cloud” that is used to precisely render detailed geographic products, including accurate surface and bare-earth elevation models, and to identify the locations and characteristics of 3D objects.

LiDAR technology has made it possible to map large geographic areas with a level of detail that was previously only possible with time-consuming and expensive ground surveys. These benefits have led organizations to use LiDAR data as an additional data layer when mapping and making decisions critical to their business.


Identifying characteristics of building rooftops, including orientation, slope, and size, is a problem not easily solved using traditional 2D image collects. Traditional imagery will at best enable the user to identify the locations of buildings within a scene, and the accuracy of extracting features such as buildings depends largely on the spatial and spectral resolution of the data.

At best, when sufficient spectra are available, one can easily remove vegetation and create a very accurate impervious surface map (ISM). However, in a suburban scene, for example, it can be very difficult to delineate where a rooftop stops and a driveway starts.

Similarly, traditional 2D image sources may not have the spatial resolution to capture building edges, so the quality of footprint extraction is compromised, and oftentimes organizations rely on costly and time-consuming manual footprint delineation. Additionally, 2D images lack the elevation information required to perform the topographic analyses needed to characterize the scene.


Utilizing LiDAR data introduces a 3D source of information that can be exploited to enhance the analysis workflow, extracting building footprints and elevation products to create the orientation, slope, and size information products described above. ENVI LiDAR enables ingest of many different data sources, including LAS, LAZ, NITF, and simple ASCII XYZ. Once the LiDAR data are ingested, production processes including automated feature extraction and bare-earth and surface modeling are performed.

ENVI LiDAR Workflow with High Point Density

The following workflow describes the process of using LiDAR data with ENVI LiDAR and ENVI to find rooftops suitable for solar panel placement. The dataset used is a sample dataset shipped with ENVI LiDAR consisting of 6,436,045 points with an average point density of 21.726 points/m2. Point density is important for automated feature extraction, and it is recommended that data have a minimum of 6 points/m2 for successful automated building footprint extraction (roof gable extraction requires a slightly higher point density). That said, ENVI LiDAR can be used in combination with IDL to find building locations for datasets with point densities as sparse as less than 1 point/m2, as described in the following section.
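Before committing to automated extraction, it is worth checking whether a dataset meets the recommended density. A minimal sketch of that check in Python with NumPy, estimating average density from the point cloud's horizontal bounding box (the coordinates here are synthetic stand-ins, not the sample dataset):

```python
import numpy as np

# Synthetic stand-in for a point cloud's x/y coordinates (metres).
rng = np.random.default_rng(0)
x = rng.uniform(0, 100, 50_000)
y = rng.uniform(0, 50, 50_000)

# Average density = point count / area of the horizontal bounding box.
area_m2 = (x.max() - x.min()) * (y.max() - y.min())
density = x.size / area_m2        # points per square metre

# The article recommends >= 6 points/m^2 for automated footprint extraction.
dense_enough = density >= 6.0
```

A real check would read the extents from the LAS header rather than scanning every point.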

Step 1: Automated Processing for Building Extraction, Digital Elevation Model (DEM) extraction, and Digital Surface Model (DSM) extraction:

The ENVI LiDAR user interface (UI) is used to ingest any number of LiDAR tiles and create a project. This project is then processed automatically to extract any number of features, including those used for this project (buildings, DSM, DEM), as well as other products such as trees, power lines, TIN surfaces, contours, LAS output, orthoTIFF rasters, canopy density models, and viewshed analysis products. In Step 1 of this workflow, building footprints, a DSM, and a DEM are extracted.

Figure 1: Results from rooftop extraction. Building footprint vectors (red) are overlaid onto the LiDAR-derived digital surface model (DSM) and displayed in ENVI.

Step 2: Push products to ENVI:

The ENVI LiDAR UI enables single-click capability to push derived LiDAR products to ENVI or ArcMap for downstream processing.

Figure 2

Step 3: Create a Height Model

A height model is an accurate description of objects on top of the surface in the form of a raster product. Height models are created by subtracting the bare earth from the surface:

DSM – DEM = Height Model

Using band math in ENVI, a height model is created for further analysis:

Figure 3: Height model calculated as DSM - DEM using Band Math in ENVI.
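The band-math operation in Step 3 is a simple per-pixel subtraction. A minimal sketch in Python with NumPy, using tiny arrays as stand-ins for the rasters pushed over from ENVI LiDAR:

```python
import numpy as np

# Toy stand-ins for the LiDAR-derived rasters (elevations in metres).
dsm = np.array([[105.0, 108.0],
                [104.0, 112.0]])   # digital surface model (top of objects)
dem = np.array([[100.0, 100.0],
                [101.0, 101.0]])   # digital elevation model (bare earth)

# Height model: per-pixel object height above ground.
height_model = dsm - dem
```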

Step 4: Create Topographic Modeling Products including Slope and Aspect

Areas suitable for solar panel placement are described as surfaces that are South-facing (180 degrees +/- 30 degrees) and also have a slope of 20-40 degrees. These inputs are generated using ENVI’s Topographic Modeling Tools with the height model as input.

Figure 4: Aspect map generated using the Topographic Modeling tools in ENVI.
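The slope and aspect rasters produced in Step 4 come from the partial derivatives of the height surface. A hedged sketch of that math using NumPy finite differences; ENVI's Topographic Modeling tools compute this internally, and the aspect convention below (clockwise from north, rows assumed to run north to south) is one common choice:

```python
import numpy as np

# Toy height raster: rises 1 m per row, with 1 m pixels.
height = np.array([[0.0, 0.0, 0.0],
                   [1.0, 1.0, 1.0],
                   [2.0, 2.0, 2.0]])

dz_dr, dz_dc = np.gradient(height)   # derivatives along rows, columns
gx, gy = dz_dc, -dz_dr               # east and north components (rows run N->S)

# Slope in degrees from horizontal; aspect in degrees clockwise from north.
slope = np.degrees(np.arctan(np.hypot(gx, gy)))
aspect = np.degrees(np.arctan2(-gx, -gy)) % 360
```

Here the surface rises toward the south, so every pixel slopes at 45 degrees and faces due north (aspect 0).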

Step 5: Subset Topographic Products by Building Footprints

Using ENVI, the data for further processing can be constrained to only the portions that are relevant to the analysis. This step greatly reduces processing time for large datasets. To accomplish this, the Subset Data by ROI (region of interest) tool is used in ENVI. In the resulting products, only the building pixels remain, while the background pixels are set to NaN; see Figure 5.

Figure 5: Aspect map subset by building footprints.
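The effect of the ROI subset in Step 5 amounts to masking the raster with the footprint polygons. A minimal NumPy sketch, with a synthetic boolean mask standing in for the rasterized building footprints:

```python
import numpy as np

# Toy aspect raster and a stand-in building-footprint mask.
aspect = np.array([[170.0, 200.0],
                   [ 90.0, 300.0]])
building_mask = np.array([[True,  True],
                          [False, False]])

# Keep building pixels; set everything else to NaN, as the
# Subset Data by ROI tool does.
aspect_subset = np.where(building_mask, aspect, np.nan)
```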

Step 6: Determine Areas Appropriate for Solar Panel Placement

Areas appropriate for solar panel placement are those that have aspect from 150-210 degrees and also have a slope of 20-40 degrees. There is more than one way to find pixels that meet both these criteria. In this example, thresholds were applied to the data histograms using the Raster Color Slice Tool in ENVI.

Figure 6: Find areas where slope is between 20 and 40 degrees.

Figure 7: Find areas where aspect is between 150 and 210 degrees.

Once the data are constrained to the processing parameters, band math is used to find where both criteria are met. The following expression adds the results from above together and where the sum equals 2 the pixel is turned on. Where the sum is not equal to 2 (does not meet both criteria) the pixel is turned off:

((b1+b2) eq 2) * 1 + ((b1+b2) ne 2) * 0

Figure 8: Red areas indicate where rooftops meet both criteria: aspect from 150-210 degrees and slope from 20-40 degrees.
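The band-math expression above reduces to a logical AND of the two thresholded rasters, where b1 and b2 are the 0/1 slope and aspect criteria. A NumPy sketch of the same logic on illustrative values:

```python
import numpy as np

# Toy slope and aspect rasters (degrees).
slope  = np.array([[ 25.0,  45.0],
                   [ 30.0,  35.0]])
aspect = np.array([[180.0, 180.0],
                   [100.0, 200.0]])

# b1, b2: the thresholded 0/1 rasters from the Raster Color Slice step.
b1 = ((slope >= 20) & (slope <= 40)).astype(int)
b2 = ((aspect >= 150) & (aspect <= 210)).astype(int)

# ((b1+b2) eq 2) in the IDL band math: the pixel is on only where
# both criteria hold.
suitable = ((b1 + b2) == 2).astype(int)
```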

Further processing to eliminate regions by area can be addressed using post-classification techniques, or accomplished by utilizing the information in the area column of the associated vector attributes.

ENVI LiDAR Workflow with Low Point Density

While it is important to have a high point density for automated extraction of building features from LiDAR, it is entirely possible to extract building footprints using ENVI LiDAR to classify the point cloud, and then exploit the LAS classifications using IDL (the Interactive Data Language) to separate surface features from the ground. False positives such as trees are removed either by utilizing NIR information from imagery, or by merging address points with surface feature polygons within a GIS.

The following workflow demonstrates this process using a dataset courtesy of OpenTopography: http://www.opentopography.org/. This dataset contains 1,887,574 points with an average point density of 1.918 points/m2.

Step 1: Classify low density LAS dataset with ENVI LiDAR

The first step is to classify the point cloud using ENVI LiDAR. This will separate points into terrain and non-terrain classes.

Figure 9: Low density point cloud open in ENVI LiDAR.

Once the data are imported to ENVI LiDAR, a DSM and DEM are generated. In the process, all points are classified according to the LAS 1.4 specification and the parameter settings within ENVI LiDAR. The figure below indicates points that were classified as terrain. Note that the buildings and other surface features were not classified as terrain (displayed below as data voids). This observation will be exploited to find the building and surface feature polygons.

Figure 10: Points classified as terrain in ENVI LiDAR.

Step 2: Process Classified Dataset Using IDL

The classified LAS dataset is opened in IDL and a script is run to find all the points within the dataset that are not classified as terrain. As seen in Figure 10, these points represent the surface features. A raster product is output to create a DSM in which anything that was not classified as terrain is assigned a value of 0. Once processed, a new LAS dataset is exported in which all surface features have an elevation of 0. The DSM and DEM are then regenerated in ENVI LiDAR and pushed to ENVI for further analysis.
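The core of the IDL script can be sketched as follows in Python with NumPy: select points whose LAS classification is not ground (class code 2 in the LAS 1.4 specification) and flatten their elevations to 0. Real LAS file I/O is omitted; the structured array below is a synthetic stand-in for a classified point cloud:

```python
import numpy as np

# Synthetic stand-in for classified LAS point records.
points = np.array(
    [(1.0, 1.0, 101.2, 2),    # ground
     (2.0, 1.0, 109.8, 6),    # building
     (3.0, 1.0, 114.5, 5)],   # high vegetation
    dtype=[("x", "f8"), ("y", "f8"), ("z", "f8"), ("classification", "u1")],
)

GROUND = 2                                # LAS 1.4 class code for ground
non_terrain = points["classification"] != GROUND
points["z"][non_terrain] = 0.0            # flatten surface features to 0
```

Regenerating a DSM from these modified points leaves surface features standing out as zero-elevation voids against the terrain.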

Step 3: Extract Surface Features

The first step to extracting surface features is to generate a raster from the LAS output that was generated in step 2. To accomplish that, the ENVI Convert LAS File to Raster/Vector tool is used.

Figure 11

The raster derived from the classified LAS can be thresholded to include only the above-ground points. The Raster Color Slice tool in ENVI is used, and the result is saved as a vector layer. Once saved, the vector layer can be cross-referenced with the address information in the GIS to include only those polygons that intersect with address points from the database.

If address or building information is not available as a GIS layer, post-cleanup is still achievable with many different approaches. For example, surface features such as vegetation can be removed using a corresponding image with NIR (near infrared) information. Small features such as cars can be filtered based on size and other false positives or noise can be further reduced using post-classification tools such as sieve and majority/minority analysis. Object-based feature extraction is another option to exclude features based on size and shape. If the dataset is small, manual editing can also be employed.
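The size-based filtering mentioned above (removing cars and noise) can be sketched with SciPy's connected-component labeling: label the blobs in a binary surface-feature mask and keep only those above a minimum pixel area. This is only an illustration of the idea behind a sieve-style cleanup, not ENVI's implementation:

```python
import numpy as np
from scipy import ndimage

# Toy binary mask of above-ground pixels: one 2x2 "building" blob
# and one isolated noise pixel.
mask = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 1],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]], dtype=bool)

labels, n = ndimage.label(mask)                    # connected components
sizes = ndimage.sum(mask, labels, range(1, n + 1)) # pixel area per blob

# Keep only blobs of at least 3 pixels (threshold chosen for illustration).
keep = np.isin(labels, 1 + np.flatnonzero(sizes >= 3))
```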

Once the vectors are created and false positives are removed, the result is a layer containing building footprint polygons. The DSM from the original dataset is then subset by the vector layer, and Steps 4 through 6 in the previous example are followed to perform the final analysis.