
How many frames are in a movie?

Anonym
In IDL 8.2.3, we introduced video read capabilities, through the IDLffVideoRead class and the READ_VIDEO function, to complement the video write capabilities available since IDL 8.1. I'm still new to using video as a data format, so I thought I'd post an example of something interesting that I learned recently. We've included a video of a coronal mass ejection viewed from NASA's SDO and SOHO spacecraft in the IDL distribution:
IDL> video_file = file_which('CME.mp4')
How many frames are in this video file? The answer isn't as simple as I'd expected. Start with QUERY_VIDEO, which can return a structure of information about a video file:
IDL> !null = query_video(video_file, video_info)
IDL> print, video_info.num_frames
         574
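QUERY_VIDEO returns more metadata than just the frame count in that structure. If you're curious what else is in there, HELP will list the fields; this is just a quick aside, nothing specific to this file:
IDL> help, video_info, /structures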
There are 574 frames in the file. But wait, let's try to read the entire file into IDL with READ_VIDEO:
IDL> movie = read_video(video_file, /all)
IDL> help, movie
MOVIE           BYTE      = Array[3, 512, 288, 564]
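As an aside, a single frame can be pulled out of that array and written to an image file. Here's a minimal sketch; the output filename is just an example:
IDL> frame = movie[*, *, *, 0]   ; first frame, a 3 x 512 x 288 byte array
IDL> write_png, 'cme_frame_000.png', frame   ; WRITE_PNG accepts RGB arrays in this layout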
READ_VIDEO returns frames as pixel-interleaved RGB images. So I guess there are only 564 frames in the file? Let's turn to the lower-level API exposed in IDLffVideoRead to check whether it provides different information. The GetStreams method gives information about the single video stream in the file:
IDL> v = idlffvideoread(video_file)
IDL> print, (v.getstreams()).count
         574
If you look at the source code for QUERY_VIDEO, you'll see that it uses this technique for returning the frame count. But what about iterating through the file, reading frame by frame, until the end is reached? This code block does exactly that:
   i = 0
   repeat begin
      data = v.getnext(type=t)   ; TYPE is set to -1 when the end of the file is reached
      ++i
   endrep until t eq -1
Note that i is incremented one last time on the final GetNext call, the one that reaches the end of the file and returns no frame, which is why the pre-decrement below is needed. The result:
IDL> print, --i
         564
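If you find yourself doing this often, the loop is easy to wrap in a small routine. Here's a minimal sketch; the function name and the cleanup at the end are my own additions, not part of IDL:
   function count_decoded_frames, file
      compile_opt idl2
      v = idlffvideoread(file)        ; open the video file
      i = 0
      repeat begin
         !null = v.getnext(type=t)    ; TYPE is -1 once the end of the file is reached
         ++i
      endrep until t eq -1
      obj_destroy, v                  ; release the file
      return, i - 1                   ; the final GetNext call read no frame
   end
Calling it on CME.mp4 should reproduce the 564 above.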
OK, why are there two different values for the number of frames in the video file? I asked Andrew Magill, an engineer on the IDL team who is far more knowledgeable about video than I am. He offered a pair of possibilities:
It's possible FFmpeg doesn't actually know ahead of time how many frames there are. The number out of ::GetStreams might be an estimate based on video length, framerate, file size, etc. Or maybe there are actually 574 frames, but the last 10 can't be decoded.
Andrew also gave some technical details that I haven't included, and suggested that these may not be the only possibilities. Further, I thought his summary was enlightening:
Unfortunately, video technology is full of these little technical gotchas, and seems to be full of questions that can only be answered with "well, it depends".  FFmpeg can seem really inconsistent sometimes, but I think they've done a heroic job of making all these different standards work almost exactly the same.
I hope that through Andrew's work, and the power of FFmpeg, we can make video processing a straightforward task in IDL. I'll post other examples of working with video as I learn more about it!