
NV5 Geospatial Blog






The Promise of Big Data

Big Challenges Mean Big Opportunities

Anonym

The exciting thing about big data is that it is big. At the same time, that is the very challenge big data presents. By definition, big data is too large, flows at too high a velocity, and is too complex and unstructured to be processed in an acceptable amount of time using traditional data management and data processing technologies. To extract value from such data, we must employ novel, alternative means of processing it. It is in that challenge of having to follow, or create, a new way of doing things that true opportunity presents itself.

Discussions about big data often refer to the "three Vs" model, which characterizes big data as extreme in one or more of volume, velocity, and variety. Being big in volume means data sets whose size exceeds the capacity of conventional database infrastructures and software tools to capture, curate, and process them. Questions of volume usually present the most immediate challenge to traditional IT practices, requiring dynamically scalable storage architectures and distributed querying and analytic capabilities.
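The distributed querying and analytics mentioned above typically follow a split-apply-combine pattern: each node processes its own shard of the data and the partial results are merged. A minimal single-machine sketch of that pattern (the shards, corpus, and word-count workload are invented for illustration):

```python
from collections import Counter
from functools import reduce

def map_shard(lines):
    # Per-shard work: count words locally, as a node would on its own data.
    return Counter(word for line in lines for word in line.split())

def merge(acc, partial):
    # Combine step: fold each node's partial counts into the running total.
    acc.update(partial)
    return acc

corpus = [
    "big data is big",
    "data flows at high velocity",
    "big volume big variety",
]
# Pretend each slice lives on a separate node.
shards = [corpus[0:1], corpus[1:2], corpus[2:3]]
partials = [map_shard(s) for s in shards]
totals = reduce(merge, partials, Counter())
print(totals["big"])  # -> 4
```

In a real distributed system the map step would run in parallel across machines, but the structure of the computation is the same.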

"

Growth of and Digitization of Global Information Storage Capacity" by Myworkforwiki is licensed under CC BY-SA 3.0

 

Data velocity, the rate at which data flows in and out of an organization, is following its counterpart, volume, along a curve of exponential growth. The driving force behind both is clearly our increasingly instrumented and sensor-infused world. Online and embedded systems can capture and compile voluminous logs and histories of every transaction and data collection point, far beyond current capabilities to process them effectively. The ubiquity of smartphones and mobile devices has already created a reality in which every individual can be an autonomous source of streaming image, audio, and geospatial data.

It is not just the rate at which data arrives that is crucial when considering data velocity. What may be more important is the speed at which a calculated or derived data product can be returned, taking data from input through to decision in a feedback loop. The value of some data is intrinsically linked to its currency, rapidly diminishing with each passing moment. To make use of such data, a solution may need to return results in near real time. Such requirements have been a key motivation in the growing adoption of NoSQL databases.
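The idea that data loses value with age, so only recent observations should feed a decision, can be sketched with a time-windowed aggregate. Everything here (class name, window size, sample values) is hypothetical:

```python
from collections import deque
import time

class SlidingWindowAverage:
    """Average over only the most recent observations, so the result
    reflects current conditions rather than stale history."""

    def __init__(self, window_seconds):
        self.window = window_seconds
        self.samples = deque()  # (timestamp, value) pairs, oldest first

    def add(self, value, timestamp=None):
        ts = time.time() if timestamp is None else timestamp
        self.samples.append((ts, value))
        self._evict(ts)

    def current(self, now=None):
        ts = time.time() if now is None else now
        self._evict(ts)
        if not self.samples:
            return None  # everything has aged out
        return sum(v for _, v in self.samples) / len(self.samples)

    def _evict(self, now):
        # Drop samples older than the window; their value has "expired".
        while self.samples and now - self.samples[0][0] > self.window:
            self.samples.popleft()

win = SlidingWindowAverage(window_seconds=10)
win.add(100, timestamp=0)   # old reading
win.add(20, timestamp=12)   # recent reading; the first has aged out
print(win.current(now=12))  # -> 20.0, only the recent sample contributes
```

Streaming systems and NoSQL stores apply the same principle at scale, often with explicit time-to-live settings on records.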

The notion of data variety reflects the tendency of big data systems to deal with diverse, unstructured source data. Unlike traditional architectures built on highly structured data relationships, big data processing seeks to extract order and meaning from dissimilar, heterogeneous, and disparate data streams. Text feeds from social networks, imagery data, raw signal information, and emails are just a few examples of the sources a big data application might draw information from.
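A common first step in handling such variety is to map each heterogeneous record onto one common shape before analysis. The record types and field names below are invented for illustration:

```python
import json

def normalize(record):
    """Map dissimilar source records (a social post, an email, an image
    capture log) onto a common {source, when, text} shape."""
    if record.get("type") == "tweet":
        return {"source": "social", "when": record["created_at"],
                "text": record["text"]}
    if record.get("type") == "email":
        return {"source": "email", "when": record["date"],
                "text": record["subject"] + " " + record["body"]}
    if record.get("type") == "image":
        # No free text available; keep the geospatial tag as the payload.
        return {"source": "imagery", "when": record["captured"],
                "text": "lat=%s lon=%s" % (record["lat"], record["lon"])}
    raise ValueError("unknown record type")

feeds = [
    {"type": "tweet", "created_at": "2013-05-01T12:00:00Z",
     "text": "flooding downtown"},
    {"type": "email", "date": "2013-05-01T12:05:00Z",
     "subject": "Alert", "body": "river rising"},
    {"type": "image", "captured": "2013-05-01T12:07:00Z",
     "lat": 40.0, "lon": -105.3},
]
unified = [normalize(r) for r in feeds]
print(json.dumps(unified[0]))
```

Once every stream is in the same shape, downstream querying and analysis can treat all sources uniformly.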

Essentially, big data uses statistical inference and nonlinear system identification methods to infer relationships, effects, and dependencies from large data sets, and to perform inductive predictions of outcomes and behaviors. We can expect big data processing to continue moving into the IT mainstream, benefiting from the economies and efficiencies of commodity hardware, cloud architectures, and open-source software. Along the way there will certainly be no shortage of challenges to overcome, and doubtless many opportunities with potential for reward.
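As a toy illustration of inductive prediction, the sketch below fits a straight line to observed pairs by ordinary least squares and predicts an unseen outcome; the data are made up for the example:

```python
def fit_line(xs, ys):
    """Ordinary least squares for a single predictor:
    slope = cov(x, y) / var(x), intercept = mean_y - slope * mean_x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# e.g. observed sensor load (x) vs. response latency (y)
xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]
slope, intercept = fit_line(xs, ys)
predicted = slope * 6 + intercept  # inductive prediction for x = 6
print(round(predicted, 1))  # -> 12.0
```

Real big data systems apply far richer models over far larger inputs, but the pattern is the same: learn a relationship from observed data, then generalize it to cases not yet seen.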
