Making Sense of Post-Disaster Environments
September 28, 2012 By: Melanie Martella, Sensors
I got my first exposure to LIDAR (Light Detection and Ranging) at a trade show, where one of the manufacturer demos was a glorious false-color point cloud generated from high-accuracy LIDAR measurements of the ornate front face of an ancient cathedral. LIDAR is an extremely useful tool for many types of surveying, including (as it turns out) mapping post-disaster environments and supplying valuable information about conditions on the ground to those responsible for disaster response.
As explained in Susan Parks' article in Imaging Notes, "Disaster Management using LiDAR", LIDAR enables very accurate surveys of large areas and takes less time than traditional survey methods. If you happen to have before and after LIDAR surveys, you can generate maps showing exactly what's changed. For instance, researchers at the University of California, Davis used pre- and post-earthquake LIDAR surveys to visualize a northern Mexico earthquake zone and understand how a series of small faults resulted in a major earthquake. Or, you can read the report (PDF) issued by New Zealand's National Institute of Water and Atmospheric Research (NIWA) on the effects of both the September 2010 and February 2011 earthquakes on the Avon-Heathcote estuary in Canterbury, New Zealand. In this case, the researchers used aerial photography, LIDAR, and RTK GPS survey data to figure out how the estuary had changed. Or, for a slightly less academic discussion (and prettier pictures), you can read the USGS article "Start with Science: Hurricane Isaac, Weathering the Storm & Understanding Isaac's Impacts", which talks about the various tools, including LIDAR, used to assess how Hurricane Isaac affected the Gulf Coast states.
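The before-and-after comparison boils down to differencing two gridded elevation models. Here's a minimal sketch of the idea, assuming both LIDAR surveys have already been gridded to a common datum and resolution (the grids, values, and the 0.15 m accuracy threshold are all invented for illustration):

```python
import numpy as np

# Hypothetical pre- and post-event digital elevation models (DEMs),
# each a small 2-D grid of elevation values (in metres) derived from
# LIDAR point clouds gridded to the same cells.
pre_dem = np.array([[10.0, 10.2, 10.1],
                    [10.3, 10.5, 10.4],
                    [10.6, 10.8, 10.7]])
post_dem = np.array([[10.0, 10.2,  9.6],
                     [10.3, 10.0,  9.9],
                     [10.6, 10.8, 10.7]])

# The change map is the cell-by-cell elevation difference;
# negative values indicate subsidence or material loss.
change_map = post_dem - pre_dem

# Flag cells whose change exceeds the survey's assumed vertical
# accuracy of +/- 0.15 m, so noise isn't reported as change.
significant = np.abs(change_map) > 0.15
print(int(significant.sum()), "cells changed significantly")
```

On real surveys the same cell-wise subtraction runs over millions of cells, which is where those "very large data sets" come from.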
This is pretty nifty, right? Sure, you end up with very large data sets, but those very large data sets give you lots of meaty information about what's moved and how much. So far, though, these stories have involved either airborne LIDAR or LIDAR on a fixed platform. What happens when you attach LIDAR (and a bunch of other sensors) to a person who is trying to map an area? That's the current problem before the MIT researchers discussed in Jack Clark's ZDNet article, "MIT employs Kinect and lasers in real-time mapping gear for firefighters". I'd suggest taking the time to read their research paper (PDF). For this particular application, the LIDAR data is combined with information from a stripped-down Kinect RGB-D camera and inertial and barometric sensor data to create a map of an indoor environment. The technique they're adapting, Simultaneous Localization and Mapping (SLAM), is used by autonomous robots to map their environments and to track their location within the mapped environment. This is tricky to do when the sensor package is on a flat surface and moving in a controlled manner; it's even harder to get the mapping to work if the sensor package is attached to someone climbing over a pile of rubble or a sloping floor, or if that someone moves between floors. Did I mention that it also provides the data in real time?
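To give a flavor of what the "mapping" half of SLAM involves, here's a toy sketch that folds a set of LIDAR range/bearing returns into a 2-D occupancy grid at a known pose. This is not the MIT team's method; a real SLAM system must also estimate the pose itself (e.g. by matching each scan against the map built so far), and the grid size, cell size, and probability nudges below are all assumptions for illustration:

```python
import math

GRID_SIZE = 20          # cells per side (assumed)
CELL = 0.5              # metres per cell (assumed)
# Each cell holds P(occupied); 0.5 means "unknown".
grid = [[0.5] * GRID_SIZE for _ in range(GRID_SIZE)]

def update_map(x, y, heading, ranges, angle_step):
    """Mark the cell at each LIDAR return as more likely occupied."""
    for i, r in enumerate(ranges):
        theta = heading + i * angle_step
        hx = x + r * math.cos(theta)   # beam hit point, world frame
        hy = y + r * math.sin(theta)
        cx, cy = int(hx / CELL), int(hy / CELL)
        if 0 <= cx < GRID_SIZE and 0 <= cy < GRID_SIZE:
            # Crude stand-in for a Bayesian log-odds update
            grid[cy][cx] = min(0.95, grid[cy][cx] + 0.3)

# One scan: walker at (2 m, 2 m) facing +x, three beams 0.1 rad apart
update_map(2.0, 2.0, 0.0, [3.0, 3.1, 2.9], 0.1)
occupied = sum(p > 0.6 for row in grid for p in row)
print(occupied, "cells marked occupied")
```

The hard part the article describes is exactly what this sketch assumes away: on a person scrambling over rubble, the pose (x, y, heading, plus tilt and floor level) is unknown and must be recovered from the inertial and barometric data at the same time.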
In the consumer device arena, a great deal of work has gone into fusing data from a smartphone's sensors to track user movement more accurately, providing improved pedestrian navigation when GPS signals are lost. This application is far more challenging (let's take something hard and make it harder!) but it promises to be a powerful tool for first responders, allowing them to understand and navigate complex environments.
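The smartphone version of this fusion is often pedestrian dead reckoning: detect each footstep from the accelerometer, take a heading from the gyroscope or magnetometer, and advance the position by an assumed stride length. A minimal sketch of that position update (the 0.7 m stride and the step sequence are invented for illustration):

```python
import math

STRIDE = 0.7  # metres per step (assumed average stride length)

def dead_reckon(start_xy, step_headings):
    """Advance the position one stride per detected footstep.

    step_headings: one heading (radians) per step, as a real system
    would derive from gyroscope/magnetometer data at each step event.
    """
    x, y = start_xy
    for heading in step_headings:
        x += STRIDE * math.cos(heading)
        y += STRIDE * math.sin(heading)
    return x, y

# Walk four steps east, then four steps north
path = [0.0] * 4 + [math.pi / 2] * 4
x, y = dead_reckon((0.0, 0.0), path)
print(f"estimated position: ({x:.2f}, {y:.2f}) m")
```

The catch is that these estimates drift with every step, which is why losing GPS for long stretches (or working indoors, as the firefighter system must) makes the problem so much harder.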