In the past, the devices making up sensor networks received power through the hard-wired infrastructure; managing this essential operational resource meant simply guaranteeing clean, constant electrical energy. If the electrical system was in good working order, the network’s needs were met. Wireless installations, however, are challenging traditional solutions. Although an estimated 80% of current wireless systems still draw power from the electrical grid, the industry is increasingly moving to truly untethered sensors, which rely on batteries as the energy supply. As a result, power management has become a critical issue, and the methods and technologies involved have grown much more complex. In the broad sense, power management means optimizing performance to minimize energy consumption. Ultimately, this approach can extend the battery life of a sensor system to be as long as the end user requires, enabling autonomous sensing systems that don’t rely on main power distribution.
A Holistic Approach
To attain the best system performance, therefore, you must optimize the performance of every component—the sensors, microcontrollers, and radios—to consume as little power as possible while still meeting the application’s requirements for data throughput, latency, and reliability.
“Power management touches the process almost everywhere,” says Rob Conant, co-founder and vice president of marketing and business development for Dust Networks. “It is incredibly important for wireless sensor networks because in many of these applications the battery-operated devices are supposed to last for years without user intervention. Traditionally, that has been a huge challenge. There aren’t many devices that are battery operated and function for years at a time. So power management techniques are incredibly important, and they have an impact on almost every aspect of the design.”
Each component manufacturer contributes to power management by providing the most efficient component possible. But optimal power management is implemented by taking a holistic view of the wireless sensor network, and the person responsible is often the system integrator, who has knowledge of individual components, as well as the applications and working environment. “You have to consider all these factors to put together a complete power management system,” says John Suh, senior application engineer for Crossbow Technology. “The sum is greater than its parts.”
Power management in sensors is achieved through hardware design and software functionality.
Hardware specifications define the current required to operate the sensor and the efficiency with which it performs certain processes. For example, when you turn on a sensor, there is a transient state during which time it has to stabilize before you get an accurate reading. The quicker a device stabilizes, the faster it can take a measurement and return to sleep mode, consequently using less power (Figure 1).
Figure 1. A timeline of events showing one of Crossbow Technology’s motes programmed with sensor firmware that uses the company’s XMesh low-power network routing protocol
On the other hand, software enables power management in the sensor by controlling duty cycling: turning on the sensor, taking a measurement, and putting the sensor back into a low-power sleep mode. Controlling the duty cycle is the most common method of implementing power management in a sensor. Setup and configuration of the process are usually done through a graphical user interface (GUI).
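As a rough sketch of why duty cycling saves power, a node’s average current is the time-weighted blend of its active and sleep currents. The numbers below are illustrative only, not drawn from any particular device’s datasheet:

```python
def average_current_ua(active_ua, sleep_ua, active_s, period_s):
    """Time-weighted average current over one wake/measure/sleep cycle."""
    duty = active_s / period_s          # fraction of time spent awake
    return duty * active_ua + (1 - duty) * sleep_ua

# Hypothetical node: 10 mA while measuring, awake 20 ms per cycle,
# 5 uA asleep, one reading every 10 s.
avg = average_current_ua(10_000, 5, 0.020, 10.0)
print(f"average draw: {avg:.2f} uA")
```

At a 0.2% duty cycle the average draw is dominated by the sleep current, which is why shaving the sleep-mode figure matters so much.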
Choosing the most appropriate sensor can also play a big role in managing power consumption. Are you using low-power sensors? Can you sense the property of interest in a different way, using a type of sensor that consumes the least amount of power? For instance, you could use either an accelerometer or a tilt switch to monitor tilt. But the switch is a simpler device that uses less power than the accelerometer.
Your choice of microcontroller is as important as your sensor choice. Microcontrollers are integrated circuits that incorporate a central processing unit, memory, a timing reference (clock), and I/O peripherals, all on the same chip. Because the microcontroller makes the decisions and calculations in sensing and networking applications, power management is implemented both in its silicon and by the software that it runs.
To achieve low-power operation of a microcontroller, the unit’s design must allow software to turn on or off various parts of the silicon as they are needed so that it uses the minimum amount of energy. The silicon must have the capability to respond to the commands of the software.
The microcontroller also reduces power consumption by using multiple clocks to control the varying operations it performs. “A lot of companies compartmentalize circuitry, so you have separate clocks for the things that have to run at high speed and those that can run at slow speeds,” says Wayne Manges, co-chair of ISA’s SP100 industrial wireless standards committee and director of Oak Ridge National Laboratory’s Extreme Measurement Communications Center. “You try to compartmentalize the high-speed functions. Rather than run a whole chip at high speed, you have multiple clocks because high-speed switching uses power. This is on-silicon power management. It controls the computational speeds, the switching speeds.”
You can also enhance microcontroller power management by optimizing the software the unit runs—the operating system and the decision-making logic. By minimizing the code size, the device’s manufacturer reduces the number of cycles it takes to execute operations. This increases efficiency and reduces energy consumption.
“What I preach when I talk about low power is that you always want to spend as little time as possible with your CPU executing code and as much time as possible in low-power modes, with the system powered down,” says Scott Pape, Microcontroller Div. systems engineer for Freescale Semiconductor. “In that respect, minimizing the amount of code that you execute every time you are awake allows you to spend as little time as possible with the CPU active.”
In addition, you can squeeze more energy efficiency from the microcontroller by using a number of strategies.
Duty Cycles. For example, the microcontroller controls and regulates the duty cycles of the sensors and radios. After determining how often measurements have to be updated and transmitted, you can set the wake-up rate of the sensor and radio to the minimum allowed by the application, prolonging the period in which the respective devices are in sleep mode, thus reducing power consumption.
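A back-of-the-envelope estimate shows how strongly the wake-up rate drives battery life. All figures here are hypothetical, and the model ignores battery self-discharge:

```python
def battery_life_years(capacity_mah, active_ma, sleep_ma,
                       wakeups_per_hour, awake_s):
    """Estimated lifetime from a simple duty-cycle model."""
    active_frac = wakeups_per_hour * awake_s / 3600.0  # share of each hour awake
    avg_ma = active_frac * active_ma + (1 - active_frac) * sleep_ma
    return capacity_mah / avg_ma / (24 * 365)

# Hypothetical node: 2400 mAh of AA cells, 20 mA awake, 5 uA asleep,
# awake 50 ms per wake-up.
fast = battery_life_years(2400, 20, 0.005, wakeups_per_hour=3600, awake_s=0.05)
slow = battery_life_years(2400, 20, 0.005, wakeups_per_hour=60, awake_s=0.05)
print(f"every second: {fast:.2f} y, every minute: {slow:.2f} y")
```

Stretching the reporting interval from one second to one minute takes this hypothetical node from months of life to over a decade.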
A Nonserial Approach. You might think that the logical sequence of events would be to take a sensor reading, get your measurement, turn on the radio, and wirelessly transmit your data. But many radios require a certain amount of time to warm up. Therefore, a better approach would be to turn on the radio at the same time you turn on the sensor. That way both can warm up in parallel. You get your measurement, take your results and do any calculations that are required, and then pass on the data. By the time you are ready to send data back to the router and the main host, the radio is ready to go. “It’s all about reducing the amount of time the whole system is active and getting it back into a passive, low-power state,” says Pape.
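The time saved by warming up in parallel can be sketched with a pair of simple timing models; the warm-up figures below are invented for illustration:

```python
def active_time_serial(sensor_warm, measure, radio_warm, transmit):
    """Sensor first, then radio: every step waits on the previous one."""
    return sensor_warm + measure + radio_warm + transmit

def active_time_parallel(sensor_warm, measure, radio_warm, transmit):
    """Radio warms up while the sensor stabilizes and is read."""
    return max(sensor_warm + measure, radio_warm) + transmit

# Hypothetical timings in ms: 5 ms sensor settling, 2 ms read,
# 4 ms radio warm-up, 3 ms transmit.
serial = active_time_serial(5, 2, 4, 3)
parallel = active_time_parallel(5, 2, 4, 3)
print(f"serial: {serial} ms, parallel: {parallel} ms")
```

The radio’s warm-up is hidden entirely behind the sensor’s, so the system returns to its low-power state sooner on every cycle.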
In Situ Processing. Here is another scheme in which the microcontroller enables the overall reduction of energy consumption. In the in situ approach, the microcontroller performs a high level of processing so that the radio can transmit less data. Consider the following example:
“A professor from Yale University is doing research on camera sensor networks,” says Crossbow’s Suh. “When people think of camera networks, they think of digital camera or camcorder raw images. And that’s a lot of data to transmit. But he is looking at the image itself as a source of data and then using a microprocessor to reduce the data set to something very simple, such as whether a person is in the room or not. So you take all the data from the camera and reduce [them] to 1 bit—yes or no.”
The advantage gained by using this approach is based on the fact that the microcontroller’s processing consumes less energy than the radio does in transmitting data. “For the most part, that is generally true,” says Suh.
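A rough comparison makes the point concrete. All energy figures below are hypothetical; the scheme pays off whenever the on-node processing costs less than radioing the raw data:

```python
def tx_energy_uj(bits, uj_per_bit):
    """Radio energy to move a payload, at an assumed cost per bit."""
    return bits * uj_per_bit

UJ_PER_BIT = 1.0       # hypothetical radio cost per bit
PROCESS_UJ = 50_000.0  # hypothetical cost to run detection on the node

raw_frame = tx_energy_uj(320 * 240 * 8, UJ_PER_BIT)  # ship the whole image
in_situ = PROCESS_UJ + tx_energy_uj(1, UJ_PER_BIT)   # ship one yes/no bit
print(f"raw: {raw_frame:.0f} uJ, in situ: {in_situ:.0f} uJ")
```

Under these assumed numbers, transmitting the whole frame costs more than ten times as much as processing it locally and sending a single bit.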
The primary embodiment of the radio’s power management is the protocol. From a system design perspective, a lot of effort has gone into developing protocols (e.g., mesh standards) that provide networking capability, measured in terms of reliability, latency, and routing, while sipping as little power from the battery as possible. But each protocol is the result of trade-offs, balancing application needs, such as reliability, against the desire to hold down energy consumption.
For instance, if a radio is going to transmit a message, you have to choose between the reliability of message transfer—how many times the radio will transmit the message to ensure it gets through—and the energy consumed. In general, the more the radio transmits or receives, the more energy it uses. Given that, when does the radio switch to a new neighbor to forward its packet? Should it store the data in its memory buffer or simply drop it? When you select a protocol, you decide on the balance struck among these elements. It is a tradeoff between message delivery reliability and battery life. If you want high reliability, the radio has to use more battery power to achieve that.
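The trade-off can be quantified with a simple retry model, assuming independent attempts with a fixed per-try success rate (an idealization, not any protocol’s actual behavior): allowing more retransmissions raises delivery probability, but each extra try costs transmit energy.

```python
def delivery_prob(p_try, max_tries):
    """Chance the packet gets through within max_tries attempts."""
    return 1 - (1 - p_try) ** max_tries

def expected_sends(p_try, max_tries):
    """Expected number of transmissions, counting the case where all tries fail."""
    q = 1 - p_try
    hits = sum(k * q ** (k - 1) * p_try for k in range(1, max_tries + 1))
    return hits + max_tries * q ** max_tries

# On a link with an 80% per-try success rate, one attempt delivers 80%
# of packets; allowing three attempts delivers 99.2% but costs about
# 1.24 transmissions on average.
print(delivery_prob(0.8, 3), expected_sends(0.8, 3))
```

Choosing a protocol fixes where on this curve the network sits.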
Another question arises: Does the radio’s protocol call for the minimum amount of energy to transmit the packets? “If you look at the spec sheets of different manufacturers, the energy to transmit and receive can vary 5% or 10%,” says Suh. “That is a number they need to do better at. The second thing is their product’s sleep mode. When the radio is asleep, they should be able to bring the energy consumption down to 1 µA or less. It boils down to how much energy does it take to transmit or receive packets.”
Protocols also provide low-power listening strategies: How many times does the radio wake up and listen to the RF environment to determine if a message is coming in? If there is nothing, it goes back to sleep. If there is a message, the radio quickly shifts to the receive mode, accepts the packet, processes the data, turns to the transmit side, and forwards the data back to the network. But even here trade-offs are made. “You need to balance the overall power usage of the radio with the quick-start and warm-up periods,” says Freescale’s Pape.
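The listening strategy can be sketched as a loop against a simulated channel; none of this corresponds to a real radio API, and the timings are invented:

```python
def run_listener(msg_arrivals_ms, check_interval_ms, sample_ms,
                 receive_ms, total_ms):
    """Return total receiver on-time over the simulation window."""
    on_time = 0.0
    pending = sorted(msg_arrivals_ms)
    t = 0.0
    while t < total_ms:
        on_time += sample_ms                # brief channel sample
        while pending and pending[0] <= t:  # activity heard: stay on to receive
            on_time += receive_ms
            pending.pop(0)
        t += check_interval_ms              # back to sleep until the next check
    return on_time

# Two messages over 10 s, checking the channel for 1 ms every 500 ms:
on = run_listener([1200, 7300], 500, 1, 20, 10_000)
print(f"receiver on-time: {on} ms of 10000 ms")
```

Lengthening the check interval cuts the idle listening cost, but delays how quickly an incoming message is noticed, which is exactly the balance Pape describes.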
To take the listening strategies one step further, several companies have developed time-synchronization protocols for the network level. The idea is to give each device on the network a window within which it can communicate.
Figure 2. Dust Networks’ Time Synchronized Mesh Protocol, which enables low power consumption by allowing nodes to turn on just milliseconds before a scheduled transmission and then go back to sleep
“In our product, the entire protocol has been developed so that the radios can be off the majority of the time—not listening, not talking—to make it so they can effectively keep the radio off the majority of the time,” says Dust’s Conant (Figure 2).
Figure 3. Power management module in a Freescale RF data modem, which optimizes power consumption in the radio
“They need to know when their neighbor is going to want to communicate. So they have very accurate time-synchronized clocks that allow the radio to turn on, listen to its neighbor, and go back to sleep again within a few milliseconds. Because the timing has to be so tight, that puts some constraints on the overall system, which is what allows us to get down to extremely low power consumption.”
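The time-synchronized idea can be sketched as a slotted frame. The slot sizes here are invented, and this is inspired by, not an implementation of, Dust Networks’ protocol:

```python
SLOT_MS = 10       # each scheduled communication window
FRAME_SLOTS = 100  # the schedule repeats every 100 slots (1 s here)

def radio_on_fraction(slots_owned, guard_ms=1.0):
    """Fraction of the frame a node's radio is on, including a small
    guard time before each of its slots to absorb clock drift."""
    on_ms = slots_owned * (SLOT_MS + guard_ms)
    return on_ms / (FRAME_SLOTS * SLOT_MS)

# A node with two slots per frame keeps its radio on ~2.2% of the time;
# outside those windows it can sleep completely.
frac = radio_on_fraction(2)
print(f"radio on {frac:.1%} of the time")
```

The guard time captures the quote’s point: the tighter the clock synchronization, the smaller the guard band needed and the lower the power consumption.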
It is also the sort of creative design approach needed to extend the battery life of a wireless sensor system. The more manufacturers innovate, the better you can balance your system and the closer the industry draws to breaking the traditional reliance on main power distribution.
Tom Kevan is a freelance writer specializing in technology. He can be reached at [email protected]