In recent environmental monitoring news, IBM and the Beacon Institute are designing and implementing an extensive collection of sensors and monitoring equipment in the Hudson River in New York state. And they aren't just instrumenting part of it: the plan is to monitor all 315 miles. No mean feat!
As far as I know, this will be the largest distributed sensing network for environmental monitoring yet created. We've seen reports of smaller sensor networks being used to monitor wildlife habitats, watch for and track sharks in Australia, and keep an eye on the snowpack in the Sierras, but creating a distributed network over an entire river? That's a pretty impressive new twist.
"Networking the Hudson River" from Technology Review gives more details of the River and Estuary Observatory Network project and its plan to use both tethered sensors and autonomous underwater robots to gather data. The part I consider most important is that the project will require a much larger data-acquisition (DA) system, with more advanced data-gathering and processing capabilities, to amass and assess the variety of data produced.
IBM is developing both the distributed DA hardware and the analytical software to extract meaning from the disparate data streams in real time. That's an important new distinction: for years we've had software designed to gather and manipulate data, but we've always relied on humans to analyze the graphs or results and assign meaning. The Hudson River project is unique in that it implements some decision-making in the system itself: if a monitored parameter changes, the system shifts more resources to that particular data stream.
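To make that idea concrete, here's a minimal sketch of that kind of in-system decision-making: watch each stream's recent variability against its baseline, and boost the sampling rate of any stream whose behavior has changed. The stream names, rates, and threshold are my own illustrative inventions, not details of the actual REON system.

```python
import statistics

BASE_RATE_HZ = 1.0
BOOSTED_RATE_HZ = 10.0
CHANGE_THRESHOLD = 2.0  # boost when recent variability doubles vs. baseline

def allocate_rates(streams):
    """streams: dict of name -> list of recent readings (newest last).
    Returns dict of name -> sampling rate in Hz."""
    rates = {}
    for name, readings in streams.items():
        if len(readings) < 8:
            # Not enough history to judge; sample at the base rate.
            rates[name] = BASE_RATE_HZ
            continue
        half = len(readings) // 2
        baseline = statistics.pstdev(readings[:half])
        recent = statistics.pstdev(readings[half:])
        # Shift resources toward streams whose behavior has changed.
        if baseline > 0 and recent / baseline >= CHANGE_THRESHOLD:
            rates[name] = BOOSTED_RATE_HZ
        else:
            rates[name] = BASE_RATE_HZ
    return rates

# Hypothetical readings: salinity is steady, turbidity suddenly gets noisy.
streams = {
    "salinity":  [30.1, 30.0, 30.2, 30.1, 30.0, 30.1, 30.2, 30.1],
    "turbidity": [5.0, 5.1, 5.0, 5.1, 9.8, 2.1, 11.5, 1.9],
}
print(allocate_rates(streams))
```

In a real deployment the "resources" shifted would be bandwidth, power, or processing time rather than just a sampling rate, but the decision loop is the same shape.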
At first, sensor networks existed simply to move data from the measurement point to some other location for storage and analysis. As these systems have matured (we now know how to build them and make them work well, and as a result we're seeing more and larger deployments), we'll see a shift from data transport to in-situ data analysis. In many situations we don't need the details of day-to-day operations unless something changes and we need to take corrective action. Why not have the system monitor conditions and alert us only when necessary?
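The "alert us when necessary" pattern can be sketched in a few lines: the system checks each reading against a normal band in situ and only surfaces the exceptions. The parameter (dissolved oxygen) and its limits are assumed for illustration.

```python
def monitor(readings, low, high, alert):
    """Call alert(index, value) for each reading outside [low, high];
    return the list of out-of-band (index, value) pairs."""
    out_of_band = []
    for i, value in enumerate(readings):
        if not (low <= value <= high):
            alert(i, value)  # e.g. send a message instead of printing
            out_of_band.append((i, value))
    return out_of_band

# Example: dissolved-oxygen readings in mg/L, normal band assumed 5-12.
readings = [8.2, 8.1, 7.9, 4.3, 8.0]
monitor(readings, 5.0, 12.0,
        lambda i, v: print(f"reading {i}: {v} mg/L is out of band"))
```

Only the fourth reading triggers the alert; the routine day-to-day data never has to leave the node at all.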
This project also highlights the importance of software that integrates data from multiple, very different sources to create a coherent, useful big-picture view of what's happening. We're starting to see this in industry with SOA-based enterprise data-integration products. Good luck to the people working on this project; I can't wait to see the results.