The continuing integration of Earth imagery and GIS is revolutionizing the geospatial industry and empowering users.
By Matteo Luccio, founder and president of Pale Blue Dot.
Geospatial technology is changing. Now geographic information system (GIS) software is ingesting, organizing and visualizing data from a rapidly growing assortment of remote sensing platforms and sensors; 2D is transitioning to 3D; and still images are giving way to video.
“The map of the future isn’t a map,” predicted Lawrie Jordan, now Esri’s director of imagery, 20 years ago to a skeptical audience. “It’s a photorealistic, intelligent 3D image that I can fly through and analyze, and hopefully one day it’ll be right beside me as a wearable appliance.”
Now the geospatial industry is getting close to realizing that vision.
Imagery Is the Foundation of GIS
“Virtually every one of the foundation layers on which GIS is built and relies … comes exclusively from remote sensing and imagery,” explains Jordan. “Imagery really is the foundation upon which GIS is built.”
Conversely, he points out, combining multiple types of data yields unique views that can’t be obtained from any one of them individually. “Frequently, we’re seeing users combine multiple types of data to get a new view.”
According to David Glenn, product line director for GIS at Hexagon Geospatial, his customers need some remote sensing capability when they’re primarily performing GIS activities and some GIS capability when their primary work involves remote sensing.
With products like GeoMedia 3D, traditional workflows, such as for asset management or urban planning, are beginning to embrace 3D modeling and visualization as enabling technologies.
In the past couple of years, there has been an explosion of remote sensing platforms, including swarms of unmanned aircraft systems (UASs), dozens of nanosatellites, and DigitalGlobe’s superspectral WorldView-3 satellite. The flood of new data also includes oblique imagery, point clouds from light detection and ranging (LiDAR) technology and other devices, and terrestrial imagery from streets and inside buildings. Besides optical imagery, remote sensing platforms also provide hyperspectral and synthetic aperture radar imagery.
“GIS is the logical framework to organize and manage these data,” claims Jordan. “But it’s also the primary beneficiary of them, because they are what keep GIS current and accurate. Remote sensing—and 3D imagery in particular—is becoming the new face of GIS.”
Says Tim Lemmon, Trimble’s marketing director for geospatial office software and applications, “We’ve been focused on collecting data from a variety of sources, whether it’s aerial imagery from a UAS or satellite remote sensing imagery, to process and georeference the imagery into high-quality data that can be used to create surface models, orthomosaics, etc.”
Todd Steiner, Trimble’s marketing director in the imaging business area, adds that microsats and UASs are making it much easier and cheaper “to capture various types of data at various accuracies for various applications. Pre-planning for a job out in the field can now be done using free satellite data or Google Earth imagery. This is freeing up the expertise to focus on the data and the deliverables.”
Soon, predicts Craig Brower, National Geospatial-Intelligence Agency account manager with BAE Systems, today’s flood of remote sensing data will make it possible, in an unclassified setting, to “pull all these pieces together” and “start seeing persistence.”
Microsatellite providers, such as Skybox Imaging and UrtheCast, provide rapid revisit from space and the ability to view areas denied to UASs and other aerial platforms.
“This will flood the industry with data,” says Beau Legeer, manager of U.S. sales and services at Exelis Visual Information Solutions. “There’s no way for everyone to consume all the data locally, so this is going to drive cloud-based data processing and different types of analytics, such as monitoring parking lots and sporting events.”
From Static Images to Dynamic Mosaics and Video
According to Jordan, the geospatial industry is progressing beyond static remote sensing images toward dynamic image services in which the imagery is mosaicked, processed, rectified, enhanced on the fly and delivered in near real time to users. Additionally, now it’s practical to perform analytics on the fly.
“The benefit to the user is that it greatly reduces the latency, because you don’t have to download and process individual files separately,” he explains. “We can organize and manage massive collections using these advanced formats we call mosaic datasets.”
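The core idea behind a mosaic dataset, many tiles indexed by their georeferencing and assembled into a single view on demand, can be illustrated in a few lines. This is a minimal NumPy sketch with hypothetical pixel offsets, not Esri's mosaic-dataset API:

```python
import numpy as np

def mosaic(tiles):
    """Assemble georeferenced tiles into one scene on the fly.

    Each tile is (array, (row_offset, col_offset)) in a shared pixel
    grid; later tiles overwrite earlier ones where they overlap.
    """
    rows = max(r + t.shape[0] for t, (r, c) in tiles)
    cols = max(c + t.shape[1] for t, (r, c) in tiles)
    out = np.zeros((rows, cols), dtype=tiles[0][0].dtype)
    for t, (r, c) in tiles:
        out[r:r + t.shape[0], c:c + t.shape[1]] = t
    return out

# Two 2x2 tiles placed side by side in the shared grid
a = np.ones((2, 2))
b = np.full((2, 2), 2.0)
scene = mosaic([(a, (0, 0)), (b, (0, 2))])
print(scene.shape)  # (2, 4)
```

The point of the pattern is that no intermediate files are written; the assembled scene exists only when a user requests it, which is what keeps latency low.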
The Drawing Toolbox in BAE’s SOCET GXP allowed users to extract structures under construction for the 2012 Summer Olympics in London.
Hexagon Geospatial is gathering data from European countries to mosaic each country as a single LiDAR dataset.
“The trick is to make that dataset work and perform as a single tile,” explains Steve du Plessis, the company’s product line director for remote sensing. “We’re at a point where we can compress and mosaic an entire country and have the performance you’d expect from a tiled dataset. Then we’re going to go for the world. We just need a big enough computer!”
With georeferenced video from UASs, says Glenn, “there’s a real need to mosaic the frames together to produce output images that can feed into the image analysis tools downstream.”
Full-motion, wide-area video, once a defense niche, is the latest data type to be integrated into GIS. According to Jordan, in Esri’s new desktop ArcGIS Pro app, users can import full-motion video and oblique imagery and see both in near real time.
“So now we support not only map space, but image space,” he explains. “We can bring the map to the image, rather than only the image to the map.”
Despite the tremendous worldwide growth in the number of UASs and sensors, a standard hasn’t yet emerged. “That needs to improve before the true value of georeferenced video is realized,” says Glenn.
Increasingly, remote sensing specialists are using the time features of mosaic datasets, which can hold multi-temporal data sets. “This is where the cloud and mosaic datasets and image services really shine,” points out Legeer, “because they give the user access to what can appear to be almost decades of data.”
Feature Extraction and Change Detection
Feature extraction enables users to create rules to automatically extract objects, features or layers from imagery, which they can then compare over time to identify changes. Such capability is now standard in image analysis software such as Trimble’s eCognition. However, most automatic feature extraction still requires some human intervention.
“Some things can be automated, like bare ground and trees and water bodies and buildings and a few larger, standardized features,” explains du Plessis. “But as soon as you get into complex structures, such as power lines, it becomes more difficult to reduce the human intervention.”
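A rule for one of the “automatable” feature classes du Plessis mentions, such as water bodies, can be written directly against band arrays. The sketch below uses a normalized difference water index (NDWI) threshold; the threshold value and the toy scene are illustrative, not eCognition’s rule syntax:

```python
import numpy as np

def extract_water(green, nir, threshold=0.2):
    """Rule-based extraction: label a pixel 'water' where the
    normalized difference water index (NDWI) exceeds a threshold."""
    ndwi = (green - nir) / (green + nir + 1e-9)  # guard divide-by-zero
    return ndwi > threshold

# Toy 2x2 scene: left column reflects strongly in green (water-like),
# right column reflects strongly in NIR (vegetation-like)
green = np.array([[0.8, 0.1], [0.8, 0.1]])
nir   = np.array([[0.2, 0.7], [0.2, 0.7]])
mask = extract_water(green, nir)
print(mask)  # [[ True False], [ True False]]
```

Complex features such as power lines resist this kind of one-line rule precisely because no single spectral index separates them from their background.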
However, according to Lemmon, it’s getting much easier to turn LiDAR data types or aerial imagery into computer-aided design models or plan views for civil engineers and other users. “The semiautomated workflows, or workflows where different software systems or data formats are linked together in an intuitive way, have been progressing rapidly during the last few years,” he says.
For Hexagon Geospatial, automatic feature extraction from point clouds is the newest technology driver. “The demand for appropriate processing of point cloud data from scanners is really high in the industry now,” says Glenn, “and there’s the belief we should be able to extract the information from these point clouds and integrate that information into GIS and update our GIS from it.”
Brower links advances in feature extraction to those in 3D: “Once you have a 3D reference of the world, it’s much easier to tell where the changes are,” he explains.
“We’re looking at sensor fusion and combining elevation data with spectral bands and trying to find new ways to extract features,” relates du Plessis.
With regard to change detection, Hexagon Geospatial’s aim is to reduce the noise.
“When you compare two dates of imagery, you get all kinds of uninteresting changes, like vehicles moving up and down the streets, shadows and building leans, and sometimes seasonal changes that don’t interest you,” says du Plessis. “So the algorithms need to get more sophisticated to find those changes that are valuable to you. Our recent software release makes tremendous strides in this direction.”
To improve change detection, Hexagon Geospatial is deriving dense point clouds from stereo imagery from traditional aerial photography and UAS imagery and fusing the elevation points into the imagery as an nth band in a layered image stack.
“Then we can use the spectral information, as well as the elevation information, to derive new data products,” explains du Plessis. An example of an application is identifying buildings that collapsed vertically (“pancaked”) in an earthquake without changing their footprint.
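The earthquake example can be sketched by treating elevation as an extra band and differencing it within a building’s footprint: the footprint is unchanged, but the mean elevation drops sharply. A minimal NumPy illustration, assuming co-registered before/after surface models with values in meters:

```python
import numpy as np

def pancaked(footprint, elev_before, elev_after, min_drop=3.0):
    """Flag a building whose footprint is unchanged but whose mean
    roof elevation dropped sharply (vertical collapse)."""
    drop = (elev_before - elev_after)[footprint].mean()
    return drop > min_drop

footprint = np.ones((2, 2), dtype=bool)  # building outline, unchanged
before = np.full((2, 2), 12.0)           # roof surface at 12 m
after  = np.full((2, 2), 2.0)            # rubble at 2 m, same footprint
print(pancaked(footprint, before, after))  # True
```

A purely spectral comparison would miss this change entirely, which is the motivation for fusing the elevation points into the image stack as an nth band.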
Says Legeer, “We’re going toward automatically extracting key features, such as roads, buildings, rooftops and vehicles, using some complex techniques that involve advanced learning algorithms, but we’re also looking to move that into video. We have a product called Jagwire that manages video assets, and we’re looking to bring automatic feature extraction and tracking into that system so we can automatically identify features of interest in a video and highlight them.”
Exelis can extract features on the ground from terrestrial mobile scanning data and inventory them. In the past, this was a time-consuming, manual process, according to Legeer. Now a user can drive through a city and then have an algorithm count the number of park benches, fire hydrants or street signs without sending anyone into the field.
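Once candidate points for a given asset class have been pulled out of the scan, the inventory step reduces to counting distinct clusters. One common approach, sketched here with a toy occupancy grid and SciPy’s connected-component labeling (not Exelis’s algorithm), is to rasterize the candidate points and count connected groups of occupied cells:

```python
import numpy as np
from scipy import ndimage

def count_objects(occupancy):
    """Count distinct objects as connected clusters of occupied cells
    in a rasterized occupancy grid."""
    labeled, n = ndimage.label(occupancy)
    return n

# Toy occupancy grid from a mobile scan: two separate clusters,
# e.g. two park benches along a street
grid = np.array([
    [1, 1, 0, 0, 0],
    [1, 1, 0, 0, 1],
    [0, 0, 0, 0, 1],
])
print(count_objects(grid))  # 2
```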
3D Is Becoming Mainstream
According to Glenn, the ultimate aim is to fuse the information in the GIS with data from remote sensing and from building information modeling systems into a single view, so that the 3D view becomes just as much an index into the GIS database as the 2D map is today.
A key trend currently is “using rendering engines and large data handling out of the gaming industry and applying it to geospatial data types,” says Lemmon. “We’re enhancing our software so our customers can create and visualize 3D data sets.”
“With our GXP software suite, we help people put their information into a 3D environment in a precise way,” explains Brower. “Google and Apple also have come on the market with 3D solutions in their mapping platforms. We recently partnered with Airbus and, as part of that partnership, we’ll be developing several elevation products that will end up being the foundation for that 3D environment. We can take (Airbus) radar imagery and turn it into 3D products.”
Says Legeer, “We recently increased the capabilities to go into the 3D world when we introduced our ENVI LiDAR product. Now we’re working on bringing the 3D visualization components, specifically LiDAR, to the Web. That’s a major advance, because 3D data, whether they’re point cloud data or digital elevation models, present a storage challenge.”
Photogrammetry advances include using new algorithms that derive point clouds from stereo pairs. For example, using a new algorithm called semi-global matching, “You get very dense point clouds in which every pixel has an elevation point that’s exactly correlated to the image,” explains du Plessis. “Then you can use these point clouds, just like you’d use terrestrial, mobile or airborne LiDAR point clouds, to create 3D models.”
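Semi-global matching aggregates matching costs along many image paths, which is what makes its point clouds so dense. The core idea it builds on, assigning each pixel the horizontal shift with the lowest matching cost against the other image, can be shown with a deliberately simplified winner-take-all matcher. This toy sketch is a stand-in for illustration, not the SGM algorithm itself:

```python
import numpy as np

def disparity_wta(left, right, max_disp=3):
    """Winner-take-all stereo matching: for each left-image pixel,
    pick the horizontal shift with the smallest absolute difference.
    (Real semi-global matching also aggregates costs along paths
    to enforce smoothness.)"""
    h, w = left.shape
    cost = np.full((max_disp + 1, h, w), np.inf)
    for d in range(max_disp + 1):
        cost[d, :, d:] = np.abs(left[:, d:] - right[:, :w - d])
    return cost.argmin(axis=0)  # per-pixel disparity

# Toy pair: the right view is the left view shifted 2 px,
# i.e. a flat surface at a single depth
left  = np.tile(np.arange(8, dtype=float), (4, 1))
right = np.roll(left, -2, axis=1)
disp = disparity_wta(left, right)
# interior columns recover the 2-px shift
```

Each recovered disparity maps to a depth, so every pixel ends up with an elevation point exactly correlated to the image, which is what du Plessis describes.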
Using a geospatial analysis product like ENVI, users can fuse data from multispectral imagery and LiDAR point clouds to get a better picture of a scene.
As different data types become more easily accessible, users increasingly need to combine them.
“For example,” says Lemmon, “a city council wants to combine aerial imagery with mobile vehicle imagery with terrestrial imagery in point clouds. Then they want to run analytics on the combined result to visualize where everything is located, take measurements, or generate profiles or asset maps. They need to be able to seamlessly combine different data types to generate the desired result.”
“We’ll continue to see manufacturers, including Trimble, combining these technologies in unique ways for specialized applications,” predicts Steiner. “For example, if you combine the Trimble V10 Imaging Rover with the Trimble R10 GNSS receiver while working with a total station, you’re using these unique, separate sensors in a combined way to be more efficient in the field. If you throw a UAS on top of that, the advancements are fairly astounding.”
To Brower, fusion is more than just combining different geospatial data types.
“We have an advanced analytics lab that focuses on how to use social media to help you do tipping and cueing in imagery, both satellite and aerial,” he explains. “We also do a lot of work in activity-based intelligence. That’s taking multiple sources of information and pulling them together to start putting the whole image together. You’ll see a lot more of that kind of integration, so we can allow the analysts to stay in one environment but get everything they need to get their job done.”
“Many years ago, fusing a panchromatic image with a multispectral image was success,” recalls Legeer. “That’s now the norm. Now we have bigger challenges—different data sources, flying at different altitudes and at different angles. It used to be all data sources were stills, such as imagery and radar. Now we’re being asked to fuse together motion-based data.”
One exciting new capability is photogrammetric point cloud generation using techniques such as structure from motion, thereby deriving new data sources from existing ones: for example, 3D point clouds from imagery and video.
“It’s a kind of fusion that we wouldn’t have thought about a few years ago,” says Legeer. “It allows us to create point clouds that open exploitation possibilities that aren’t available in still images.”
Esri Production Mapping supports enterprise cartographic workflows for national mapping.
Last year, Trimble released InSphere, a cloud-based platform for geospatial organizations to manage data, applications and services. Says Lemmon, “One of the key components built into InSphere is the Data Marketplace, which gives customers the ability to access aerial or remote sensing imagery and then use the imagery for analysis or import it into eCognition software for feature extraction or comparisons over time.”
“Two years ago,” Brower points out, “if you ordered a commercial satellite image, you expected a DVD to show up. Now we can assist that delivery by just putting it in the cloud and then, by using our GXP WebView solution, anybody with a Web browser can exploit that image.”
One rapidly expanding market for cloud services is UAS users. “Typically they’re not sophisticated photogrammetrists,” says du Plessis. “They haven’t invested hundreds of thousands or even millions of dollars in a camera. They want a quick, easy solution. We’ll be introducing cloud-hosted UAS solutions in which you simply upload your data, push a button and get back orthorectified image mosaics, point clouds and elevation models, plus hundreds of downstream analytical workflows.”
Photogrammetry with Consumer-grade Cameras
Advances in photogrammetry have made it possible to extract measurements from images taken with consumer-grade cameras. Esri’s native platform has the basic adjustments necessary for “nonrigorous photogrammetric solutions,” says Jordan. “Then we work with our partners to plug in a rigorous solution if that’s necessary.”
Hexagon Geospatial’s IMAGINE Photogrammetry software was designed to work with standard digital cameras. However, according to du Plessis, the cameras aren’t the issue.
“What we wrestle with is handling the stability of the new platforms to which these consumer-grade cameras are typically strapped,” he explains. “You aren’t going to buy an airplane to hang your $50 camera out the window. You’re typically going to strap it to an unstable UAS platform. The traditional photogrammetric math doesn’t really work so well, so we’re looking into the fields of computer vision and techniques like structure from motion, which are far more applicable to an unstable platform without the camera’s position and orientation information.”
“We’re using inexpensive, consumer-grade cameras in our Trimble V10 Imaging Rover,” says Steiner. “The industry trend of really cheap high-resolution cameras has been driven a lot by the tablet and smartphone industry. We’ve been able to take advantage of some of those consumer trends to add value into professional solutions.”
“We’re seeing the rapid consumerization of the data collection market,” says Ahmed Abukhater, marketing director for Trimble’s GIS business area. “More field workers now have access to positioning technology for field geospatial data collection. We recently released the Trimble R1 GNSS receiver, which allows users, particularly in the GIS field, to leverage their own hardware—whether iPhones, other smartphones or tablets—and still be able to obtain sub-meter accuracy.”
“In our GXP software suite,” relates Brower, “we have something called InMotion Video that allows users to bring UAS videos, as well as GoPro-captured videos, into their environment. Such capability allows users to combine data from many sensors and put the whole picture together.”
Using a Trimble V10 Imaging Rover with an R10 GNSS receiver and a total station lets users combine separate sensors to be more efficient in the field.
The plethora of new remote sensing data sources and analytic capabilities has given rise to new markets for remote sensing and GIS, such as asset management and location analytics. According to Jordan, users of such applications are interested in information products rather than raw pixels.
“If I just want a report on the health of my crops or need to know what percentage of my tax parcel maps are impervious or want to do some nontraditional information collection or data mining, I don’t necessarily want to see the raw data,” he explains. “I’d really like to have the information product derived from the pixels. So, increasingly, these things will become dynamic services, rather than static images or static classifications.”
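An information product of the kind Jordan describes, say, the impervious percentage of a tax parcel, is ultimately a reduction over classified pixels. A minimal sketch, with hypothetical class codes and a toy classified raster:

```python
import numpy as np

IMPERVIOUS = {2, 3}  # hypothetical class codes: 2=pavement, 3=rooftop

def percent_impervious(classified, parcel_mask):
    """Reduce the classified pixels inside a parcel to one number:
    the share of impervious surface, as a percentage."""
    cells = classified[parcel_mask]
    return 100.0 * np.isin(cells, list(IMPERVIOUS)).mean()

classes = np.array([[1, 2], [3, 1]])       # 1 = vegetation
parcel  = np.ones((2, 2), dtype=bool)      # whole tile is the parcel
print(percent_impervious(classes, parcel))  # 50.0
```

Delivered as a dynamic service, only this derived number reaches the user; the raw pixels and the classification stay server-side.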
“Remote sensing data or imagery, in terms of its use, is horizontal in nature,” says Lemmon. “Such imagery is in widespread use for oil and gas applications, environmental mapping, land cover mapping and things like that. As the resolution improves, people are looking at using it for higher accuracy applications, particularly terrain modeling.”
“The technologies that are becoming available are enabling new approaches to existing markets,” claims Steiner. “Some examples of this are applications of land, mobile and UAS technologies that now allow rapid data collection and deliverable creation for markets such as asset inspection and mapping, utilities, environmental solutions, etc.”
“The biggest emerging markets for us are precision agriculture, solar, oil and gas, and wind,” says Legeer. “They all require geospatial expertise to execute on their plans. These are emerging opportunities.”
DigitalGlobe’s addition of shortwave infrared in its WorldView-3 satellite greatly increases the usefulness of satellite imagery for markets such as mining, hydrology and bathymetry, because it allows users to identify materials with capabilities that traditionally were reserved for airborne multispectral and hyperspectral systems.
“We aren’t quite to hyperspectral in space yet,” says Legeer, “but the bands that are on DigitalGlobe’s WorldView-3 satellite are getting us closer.”
What are the current bottlenecks in integrating remote sensing and GIS?
“From a GIS standpoint, customers need an end-to-end solution rather than having to deal with disparate software solutions that don’t talk to each other,” says Abukhater.
From the hardware side, adds Steiner, “one of the bottlenecks that will continue for some time is the ability to move data around. So we’re developing cloud-based solutions that allow users to access data live all the time everywhere.”
Legeer and du Plessis agree the amount of data is still a bottleneck. “It’s being solved by the cloud,” says Legeer, “but we still need to make sure that where we process on the cloud is close to the data.”
The movement to the cloud will reduce many of the hardware bottlenecks, claims du Plessis, “because you don’t have to have hundreds of gigabytes of RAM on your machine any more.” The bottleneck, he says, “is making the software decide it needs to run a new routine based on sensor input it got automatically.”
For Brower, the bottleneck is content management. “When you pull up a spot on Earth, how do you know you’re looking at all of the resources available there for you? One of our commercial solutions, GXP Xplorer, is a knowledge management solution for GIS and remote sensing professionals that allows you to pull together all the parts. If you have an image from DigitalGlobe, Planet Labs, Skybox or whoever, you can pull the image into one central view to work with the data.”
As they continue to advance, hardware and software often leapfrog one another. Currently, when it comes to remote sensing and GIS, it’s software’s turn to leap over hardware.
“When I was working with low-resolution Landsat data in the 1970s,” recalls Jordan, “you could calculate maximum likelihood with traditional algorithms. With high-resolution data like that from WorldView-3, you need smarter algorithms.”
Original post can be found at: http://eijournal.com/print/articles/developing-the-map-of-the-future