I've been doing a bunch of reading on sensor-enabled contextual awareness recently, and it's easy to get all ooo-shinied by the kinds of neat things achievable with contextually aware mobile devices for consumers: better pedestrian navigation, better location-aware services, and a slew of productivity and health-management tools. But I'm also interested in how people are enabling contextual awareness more generally.
Before I get there, however, a little bit of commentary on the topic of making smartphones smarter for consumers. Thomas Husson of Forrester Research wrote an interesting post at mocoNews.net about the future of mobile devices. In "A Sensor In Your Pocket: The Future Of Mobile Is User Context" he spells out why he thinks (based on a recent research report on the topic) that mobile devices such as smartphones will move beyond acting as very small but powerful PCs to enable behavior that a PC is incapable of. Powered by the various sensors built into them, which let them learn different types of information about the environments in which they operate, these devices can provide their users with context-aware functions and services.
I don't disagree with any of this, by the way. Sensors act as artificial nerve endings, allowing electronics to detect and react to changes in the environment. That's the same general idea whether you're talking about the navigation function on a smartphone, the engine control or safety systems of a high-end automobile, or the SCADA system for a refinery. The differences lie in the environment in which the system exists, the types of sensors used, and the types of information we want to know. It's also a matter of how difficult it is to correlate the acquired sensor data with the information about the environment you're most interested in. For instance, if you're doing condition monitoring, you're probably going to be most interested in vibration signatures that reveal early signs of bearing wear or imbalance in moving parts. If you're doing sensor fusion for personal navigation, you'll be incorporating data from motion sensors of various types, maybe an altimeter reading, and probably GPS information as well, to generate an accurate idea of where the person is and how the person is moving, correlated with the position on a map. Neither task is easy, but the degree of difficulty depends on what you're trying to do and how well you need to do it.
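To make the navigation case a bit more concrete, here's a deliberately simplified sketch of one common fusion approach: dead reckoning from an accelerometer, with occasional GPS fixes pulling the drifting inertial estimate back toward reality via a complementary filter. The function name, the 1-D setup, and all sensor values are invented for illustration; a real pedestrian navigation system works in three dimensions with far more sophisticated filtering.

```python
# A minimal 1-D sketch of sensor fusion for navigation: dead reckoning
# from a (noisy) accelerometer, corrected by occasional GPS fixes using
# a complementary filter. All names and values here are hypothetical.

def fuse_position(samples, dt=0.1, alpha=0.98):
    """Blend integrated accelerometer motion with GPS position fixes.

    samples: list of (accel_m_s2, gps_pos_m_or_None) tuples
    alpha:   weight given to the inertial estimate when a GPS fix arrives
    """
    pos, vel = 0.0, 0.0
    track = []
    for accel, gps in samples:
        vel += accel * dt      # integrate acceleration -> velocity
        pos += vel * dt        # integrate velocity -> position (drifts!)
        if gps is not None:    # GPS fix available: nudge estimate back
            pos = alpha * pos + (1 - alpha) * gps
        track.append(pos)
    return track

# Constant 1 m/s^2 acceleration, with a GPS fix every tenth sample:
readings = [(1.0, 5.0 if i % 10 == 9 else None) for i in range(50)]
path = fuse_position(readings)
```

The point of the blend is that the accelerometer is responsive but drifts as errors accumulate through double integration, while GPS is absolute but infrequent and jumpy; weighting the two lets each cover the other's weakness.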
When we're talking about contextual awareness for smartphones, mostly we're talking about using the sensors within the device itself. What kinds of functionality do you get if the mobile device interacts with other devices built into the environment around it? What kinds of functionality do you get by adding accessories? If you want your mind blown, check out Hal Abelson's Educause Quarterly article, "Medical Labs in Our Pockets," which lists smartphone-based medical projects, some of which convert a smartphone into a useful medical diagnostic tool—such as a microscope or a device to assess eyesight problems—through the addition of low-cost accessories.
If we take contextual awareness to mean a computer system that amends its behavior based on its environment, then we're getting some basic contextual awareness in the more advanced building automation systems that adjust the lights, heat, and cooling based on whether there are people present and what the ambient environment is like. Our cars are obtaining a measure of contextual awareness, too. Their climate control systems can adjust the cabin conditions based on how many people are present, whether it's sunny or dim, and whether we're driving through lots of smog; the safety systems can adjust airbag activation based on whether a passenger is present or not; and that's before we get to driver assistive technologies such as lane departure warnings and stability controls that adjust to road conditions. With sensors making such inroads into so many of our environments, it's no wonder that sensor-enabled contextual awareness is such a vibrant research topic.