It's Time for Sensors to Go Wireless; Part 1: Technological Underpinnings
April 1, 1999
By Wayne W. Manges, Stephen F. Smith, Glenn O. Allgood
Trends are moving us toward the integration of wireless communications with sensors. The important question is when should you begin the transition to these systems.
Many forces are drawing sensor manufacturers to wireless technology. The explosive growth of the personal communications market has driven the cost of radio devices down and the quality up. At the same time, the expenses associated with installing, terminating, testing, maintaining, troubleshooting, and upgrading wiring continue to escalate faster than facility managers would like. The Federal Aviation Administration recently announced that aircraft wiring would no longer be considered a lifetime, fault-free technology. Not surprisingly, the details of the cost equation depend on the application. With wire in some specialized installations approaching $2000 per foot, the attractiveness of a wireless system needs little reinforcement.
In this article-the first of a two-part series-we'll provide information that will enable you to make sound build/buy decisions and to choose appropriate architectures and implementations. This article presents the background and technological underpinnings of wireless systems.
Photo 1. The nose-on-a-chip is a MEMS-based sensor that can detect 400 species of gases and transmit a signal indicating the level to a central control station. The wireless sensor was developed at Oak Ridge National Laboratory.
Point-to-point wireless systems have been available in the instrumentation world for many years. The first architectures that supported wireless interconnects involved simple wireless modems capable of transmitting packets of data across a room (or a field) using standard RS-232 interfaces on both ends. Many of these early systems used digital modulation techniques across standard narrow-band frequency modulation channels. But restricted band allocations severely limited the number of devices that could operate simultaneously in a common area. More sophisticated modulation techniques and emerging standards are beginning to address these and other problems common to early systems. True wireless networks are becoming feasible through the use of advanced techniques.
Over the last 50 years, innovations have provided the basis for new thinking in the industrial application of radio telemetry. But innovations alone cannot bring wireless solutions to the industrial marketplace. The economics of Moore's law continue to teach us lessons about the relationships among technology, marketing, and quality. The market forces driving wireless telephony technologies continue to offer components that exploit new technology at astoundingly low prices and high quality. The second article in this series will discuss Moore's law in greater detail and explain how it applies to sensors, measurements, and controls.
Why Wireless Sensors?
The industry is moving toward the implementation of networks of wireless sensors that can operate in demanding environments and provide clear advantages in cost, size, power, flexibility, and distributed intelligence (see Photo 1). Architectures for sensor networks have been changing greatly over the last 50 years. We've all worked with measurement schemes in which individually wired sensors output a 4–20 mA signal that represented the parameter being measured. The cost and complexity of such wiring prompted many to embrace bus architectures when they became available in the 1970s.
|Figure 1. The long runs of wires necessary for 4–20 mA designs have given way to the bus and network architectures of today. The distributed intelligence supported by the concentrators in these architectures not only reduces the wiring but also the required communication bandwidth. Intelligent wireless sensors will bring a new reduction in wiring; more opportunities for distributed intelligence; and with peer-to-peer sensor networks, improved reliability and robustness.|
Bus and network topologies significantly reduced the required wiring and provided an opportunity for hierarchical architectures that supported distributed intelligence at the unit and factory floor levels. Smart sensor standards-by supporting higher level interfaces and allowing information abstraction and object-oriented approaches-are also providing new opportunities for distributed intelligence architectures. Many of the standards allow individual sensors to interface directly with a digital bus.
But these standards haven't eliminated the need to wire individual sensors to a concentrator. And this architecture introduces a single point of failure-the digital bus connecting all the sensors. Alternatives provide redundant bus architectures, but cost and complexity escalate significantly as the number of required connections increases. As shown in Figure 1, a wireless network of intelligent sensors eliminates that failure mode and provides peer-to-peer communications so that cooperating-sensor implementations can be cost effective.
New sensors and actuators based on microelectromechanical systems (MEMS) are coming out of laboratories around the world and providing solutions in specific applications. Many automotive air bag deployment systems have been designed to use state-of-the-art MEMS-fabricated accelerometers-potentially the most widespread application of MEMS to date. And a new generation of ink jet printers uses print heads fabricated with MEMS techniques. Attaching wires to these miniature devices can be problematic and introduces failure modes that could be avoided with wireless designs.
The Center for Intelligent Sensors in Germany builds miniature sensors based on multichip modules. Photo 2 shows a submicron position sensor designed to measure the location of material during assembly. The ribbon cable illustrates the need for a wireless interface both to reduce the probability of failure and to maintain the low mass of the sensor for specific deployment scenarios. Many new miniature sensors are designed with low-mass, low-footprint electronics but require cables to be attached. Wireless implementations will eventually make these devices even more versatile. Oak Ridge National Laboratory's (ORNL's) nose-on-a-chip is a MEMS-based sensor that can theoretically detect as many as 400 species of gases and wirelessly signal the level in parts per billion (ppb) (see Photo 1). Tests at ORNL have confirmed ppb sensitivity for mercury in air with a wireless signal transmitted to a PC receiver for readout.
A self-contained wireless temperature sensor is shown in Figure 2. Tests conducted onboard the Navy's USS The Sullivans showed that the device could reliably transmit the temperature of the environment over three decks and under typical shipboard EMI conditions. Figure 3 shows the functions implemented in the single-chip wireless temperature sensor. The area on the lower left was designed to be sensitive to a specific frequency of an IR signal. The communications protocol implemented on the chip allows a TV-style remote to be used to program the sample rate, analog gain, and other parameters. The potential for cost reduction, unprecedented flexibility, and power reduction is evident in these single-chip wireless systems.
The first (and to date only) commercially available, fully integrated wireless sensor was announced by Computational Systems, Inc., in the fall of 1998. CSI, now a wholly owned subsidiary of Emerson Electric, can be reached at www.compsys.com and is located in Knoxville, Tennessee. The self-contained device is part of CSI's program for monitoring the health of rotating machinery and measures temperature, vibration, and other important parameters. The intelligent sensor performs an FFT on the raw acceleration data, alarms on off-normal conditions, and transmits only the small amount of data necessary to update the host on the status of the equipment.
|Photo 2. The fragile wiring on miniature sensors, like the submicron position sensor illustrated here, makes the case for why wireless technology will become more important as sensors shrink.|
CSI's robust, consistent, distributed architecture allows the customer to add new sensors to the network as they become available. Because CSI has elected to maintain proprietary interfaces, new sensors will only be available from CSI or from licensed partners. Open standards are meant to provide end users with options from a wide array of suppliers who opt to comply. The battle between open and proprietary standards will play out in this market as it has in other markets before.
The constantly reduced cost of computational power lends itself well to the distributed architecture described above. Embedded intelligence reduces the bandwidth required in the communications path. Customer acceptance of wireless technology, led by the wireless phone market, is likely to spread to industry much like the ubiquitous personal computers that have penetrated the industrial markets.
Trends are already evident that encourage increased use of sensors, software, and controls to bolster a company's competitive advantage. When the factory floor data systems become a sustainable competitive advantage (rather than an expense to be managed), companies will demand continuously increasing performance and reduced cost-Moore's Law! New sensor companies (or existing companies that reinvent themselves) will likely emerge to supply low-cost, high-performance, easily deployed devices that will change the way end users view sensors and sensor systems.
|Figure 2. Integration of the sensor, signal conditioning, and telemetry on a single chip can overcome the high cost of wiring in distributed measurement applications.|
The development and deployment of wireless sensors depend on the convergence of spread spectrum radio, new code division multiple access (CDMA) techniques, error detection/correction, and new techniques for mixed-signal (A/D) IC designs. Bringing these technologies to bear on sensor needs requires an interdisciplinary approach to design that many organizations have been unable to implement. Some organizations have successfully developed components for wireless sensors but haven't been able to produce an integrated sensor that meets operating parameters necessary for real-world use.
The disadvantages arising from the use of wireless technology in measurement applications have relegated wireless solutions to niche applications, at best. The disadvantages have been serious enough that most users have either avoided the temptation to pursue a wireless approach or, having suffered the consequences of the shortcomings, vowed never to try again. Problems identified in many of the early attempts include:
|Figure 3. The U.S. Navy tested the Telesensor to determine how effectively the intelligent wireless sensor could monitor temperature throughout a ship. The device reliably collected and transmitted data over three decks of the ship and resisted shipboard EMI. The diagram shows the various components of the sensor.|
- A lack of robust connectivity
- Inadequate data rates
- Inadequate data security-wireless systems did not offer features that could prevent unauthorized interception of signals and data
- Limited extensibility-the technology lacked the ability to dynamically reconfigure existing and new sensor nodes in the network
- High cost-fabrication costs of radio transceivers meant that only high-value applications could be addressed
- EMI susceptibility-manufacturers could not affordably produce the complex circuitry needed to ensure accuracy and immunity to nearby RF generators
- Overly complex logistics-deploying wireless devices required detailed logs and licenses
- Time synchronism-local synchronized time was not available for correlating cross-sensor data
- Short battery life-tetherless devices required batteries that had to be changed or recharged frequently
Solving these problems so that wireless sensors can be used to address market needs requires several technologies that have been known and understood for quite some time but whose costs are just now becoming competitive.
Spread Spectrum. Not all spread spectrum techniques are created equal. Spreading the energy of the communications signal over a wider range of frequencies can be accomplished in a number of different ways. The original patent described what is known as frequency hopping spread spectrum (FHSS). Direct sequence spread spectrum (DSSS) uses a related but different approach to spread the signal. Ultra-wideband techniques spread the signal over very large frequency ranges. Each technique has advantages and disadvantages under the various conditions that might be encountered in a typical measurement application. In each case, the key is to make certain that the transmitter and receiver can lock in quickly and synchronize the spreading and despreading actions.
FHSS is conceptually simpler than the other techniques because most of us are accustomed to thinking in terms of frequency domain signals. The basic idea is that the signal is transmitted on a series of frequencies allocated for this purpose. The signal jumps from frequency to frequency at a predetermined rate and sequence. Because the sequence is pseudo random, unauthorized listeners won't know which frequency is next.
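The shared pseudo-random sequence can be sketched in a few lines. This is an illustrative model, not a real radio driver; the channel plan, seed value, and hop count are hypothetical. The key idea is that transmitter and receiver derive the same hop sequence from a shared seed, while an eavesdropper without the seed cannot predict the next frequency.

```python
import random

# Hypothetical channel plan: 16 hop frequencies in the 915 MHz ISM band.
CHANNELS_MHZ = [902.5 + 1.0 * i for i in range(16)]

def hop_sequence(seed, n_hops):
    """Generate a pseudo-random hop sequence from a shared seed.

    Both ends of the link seed an identical generator, so they
    agree on every hop and stay synchronized.
    """
    rng = random.Random(seed)
    return [rng.choice(CHANNELS_MHZ) for _ in range(n_hops)]

tx = hop_sequence(seed=0xC0FFEE, n_hops=8)
rx = hop_sequence(seed=0xC0FFEE, n_hops=8)
assert tx == rx  # shared seed -> transmitter and receiver agree on every hop
```

In a real system, the hop set and dwell times are constrained by the regulatory band plan, and synchronization must survive missed hops, but the shared-sequence principle is the same.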
As the chipping rate (i.e., the number of chip transitions per data bit) increases, the security, noise immunity, and robustness increase. The chipping rate is a critical performance parameter of any digital spread spectrum technique. The math shows that the process gain from chipping is 10 log10 (chips per bit) dB. Because that gain would otherwise have to come from transmitter power, chipping is a critical power-saving strategy that extends battery life considerably. Chipping can't help in a white (Gaussian) noise-limited environment, but most real-world (as opposed to outer space) environments are limited by colored noise coming from nearby terrestrial sources.
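The process gain relationship is simple to compute. As a quick illustration (the 64-chip figure is just an example value):

```python
import math

def process_gain_db(chips_per_bit):
    """Spread spectrum process gain in dB: 10 * log10(chips per bit)."""
    return 10 * math.log10(chips_per_bit)

# A 64-chip-per-bit code yields about 18 dB of process gain --
# gain that would otherwise have to come from transmitter power.
print(round(process_gain_db(64), 1))  # -> 18.1
```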
DSSS distributes the signal over a range of frequencies by exclusive-ORing each bit in the data stream with a high-rate, pseudo-random spreading code. Because the receiver has a copy of the code, it can recover the signal by reversing the operation. A 64-chip spreading code provides a robust and secure interface for the measurement. Some DSSS implementations use shorter spreading codes, but these result in less process gain and a less robust connection. This technique is related to the chopping techniques that have been used for years to stabilize analog signals for use in feedback control systems.
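The spread/despread round trip can be modeled at the bit level. This sketch uses a software pseudo-random code and a majority vote at the receiver; real DSSS hardware operates on analog chips with correlators, but the XOR symmetry shown here is the heart of the technique.

```python
import random

def pn_code(seed, length):
    """Hypothetical pseudo-noise spreading code shared by both ends."""
    rng = random.Random(seed)
    return [rng.randint(0, 1) for _ in range(length)]

def spread(bits, code):
    """XOR each data bit with every chip of the spreading code."""
    return [b ^ c for b in bits for c in code]

def despread(chips, code):
    """Reverse the XOR, then majority-vote the chips back into bits."""
    n = len(code)
    bits = []
    for i in range(0, len(chips), n):
        votes = [chips[i + j] ^ code[j] for j in range(n)]
        bits.append(1 if sum(votes) > n // 2 else 0)
    return bits

code = pn_code(seed=42, length=64)   # 64-chip code, as discussed above
data = [1, 0, 1, 1, 0]
assert despread(spread(data, code), code) == data
```

The majority vote is what buys robustness: a few corrupted chips per bit still decode correctly, which is the process gain at work.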
Multipath interference occurs when a signal is reflected (and therefore time delayed) and appears at the receiver as an interfering signal. Automobile FM listeners hear this as the swish, swish that occurs in areas where reflections are common (e.g., an area containing hills, buildings, or metal towers).
But spreading the signal over a range of frequencies mitigates multipath interference. The mathematics of spread spectrum transmissions show that as long as the time delay is greater than the chipping period, multipath can even be beneficial: alternative (i.e., reflective) paths can ease the line-of-sight requirement common in most radio telemetry applications. Only near-field multipath interference is a problem. But because the frequency is constantly changing and interleaving techniques ensure that chips are not sent on the same frequency in each bit, the actual interference caused by near-field multipath can be reduced to tolerable levels.
Code Division Multiple Access. If you effectively use the spread spectrum techniques outlined so far, you can use a channel-sharing strategy different from the time division multiple access (TDMA) and frequency division multiple access (FDMA) techniques. In TDMA implementations, each transmitter is allocated a time slot during which it is authorized to transmit, so new devices must somehow fit into the existing sequence or a new configuration is necessary. FDMA requires that each transmitter stay on its allocated frequency; extensibility to new devices is problematic without a carefully planned frequency allocation strategy.
Code division multiple access (CDMA) lets your wireless system support new transmitters without hard limits or a priori knowledge of how many will be required or how each of the transmitters will be configured. As long as the spreading codes are uncorrelated (i.e., orthogonal), each unwanted signal appears as random noise at the receiver and can be removed. The only limit is when the noise floor is elevated to the point where you can't reliably extract the signal.
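Orthogonal-code separation can be demonstrated with Walsh-Hadamard codes (one common family of orthogonal spreading codes; the two-user scenario below is illustrative). Two transmitters share the channel simultaneously, and each receiver recovers its own bit by correlating against its own code.

```python
def walsh(n):
    """Build a 2^n x 2^n Walsh-Hadamard matrix of mutually orthogonal +/-1 codes."""
    h = [[1]]
    for _ in range(n):
        h = [row + row for row in h] + [row + [-x for x in row] for row in h]
    return h

codes = walsh(3)  # eight mutually orthogonal 8-chip codes

# Two users transmit one bit each (+1 or -1) at the same time;
# the channel simply sums their spread signals.
bit_a, bit_b = +1, -1
ca, cb = codes[1], codes[2]
channel = [bit_a * x + bit_b * y for x, y in zip(ca, cb)]

def decode(signal, code):
    """Correlate against one user's code; orthogonal codes cancel out."""
    return 1 if sum(s * c for s, c in zip(signal, code)) > 0 else -1

assert decode(channel, ca) == bit_a
assert decode(channel, cb) == bit_b
```

Each additional uncorrelated transmitter only raises the noise floor a little, which is why CDMA degrades gracefully rather than hitting a hard channel limit.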
Research at Virginia Tech's Mobile and Portable Radio Group shows how these limits play into real implementations. The Virginia Tech Web site is located at www.mprg.vt.edu and contains valuable information about wireless technology.
Mixed Signal Design. As a rule, you cut your expenses when you implement a function on a single IC. To a large extent, the cost of a viable implementation determines the component count. Most sensors still consist of multiple components. The integration of the components into subsystems and finally functional blocks has kept the cost of sensing and measurement systems relatively high while the cost of other electronic building blocks has plummeted.
The technical challenge for those creating sensor subsystems has been to integrate the transducer, signal processing, digitization, information abstraction, communications, and power in the same easy-to-manufacture package. Recent work at ORNL has demonstrated the feasibility of building integrated wireless sensors. Figure 2 and Photo 1 illustrate some examples of this approach.
The chip in Photo 1 contains 400 distinct sensor elements configured for readout through onboard electronics. When the sensor array was interfaced with the single-chip, spread spectrum, wireless transmitter, an ORNL team led by Chuck Britton demonstrated the feasibility of a new breed of sensors that are highly integrated, accurate, and can transmit their readings in engineering units compatible with existing data acquisition standards.
Ultimately, a design could incorporate many thousands of these sensor elements, including redundant devices, reference standards, and nulling sensors. The resultant measurement would be orders of magnitude more accurate and orders of magnitude less costly than current competing measurement devices.
The economics of this approach make it desirable. Mixing analog, digital, and radio frequency circuits on the same standard CMOS device presents a number of technical challenges that are just now becoming well understood.
Fully integrated electronics offer the additional advantage of consuming less power. Sound design practice and intelligent power management strategies are extending the battery life to intervals reasonable for plant-floor applications.
The work at ORNL's Instrumentation and Controls Division illustrates one approach to providing ubiquitous, low-cost wireless sensors. Information about our wireless work can be found at www.ornl.gov/orcmt/wireless or can be obtained by contacting one of the authors directly. Other organizations have taken different approaches and can be found on the Web with little difficulty.
When evaluating alternatives, consider inherent sensor accuracy, robustness, power demand (battery life), and ease of implementation. A classic measurement for a communications channel is the bit error rate (BER). By checking the uncorrected BER, you can ascertain the robustness and likely failure scenarios for a given implementation. The tradeoff between BER and data rate is well understood because error-correcting codes must be added to the data stream to accommodate potential errors in the transmission. The better the uncorrected BER, the better the potential for getting the data through.
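Measuring an uncorrected BER is conceptually just counting disagreements between the transmitted and received bit streams. The sketch below simulates a hypothetical channel that flips bits with a fixed probability; the stream length and error probability are made-up illustration values.

```python
import random

def ber(sent, received):
    """Bit error rate: fraction of received bits that differ from sent bits."""
    errors = sum(s != r for s, r in zip(sent, received))
    return errors / len(sent)

# Simulate a channel that flips each bit with probability 1e-3.
random.seed(7)
sent = [random.randint(0, 1) for _ in range(100_000)]
received = [b ^ (random.random() < 1e-3) for b in sent]
print(ber(sent, received))  # roughly 1e-3 for this channel model
```

In practice, the measured uncorrected BER tells you how much error-correcting overhead the link will need and how close it is to falling over.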
Many have speculated that the Next Generation Internet will be much more sensory interactive than the current Web. Adding the numbers of sensors necessary to address this demand will bring the sensor business to new paradigms. Preparing today to make this transition is critical to the long-term success of an organization.
Existing and emerging standards are easing the transition to ubiquitous wireless sensors. The IEEE 802.11 standard enabled wireless Ethernet connectivity, giving people their first view of wireless connectivity to the Web. The number of requests for Internet addresses continues to grow exponentially as more and more devices become Internet accessible. The IEEE 1451 smart sensor standard is making it easier to interface sensors to the network. Extensions to 1451 are now being proposed by IEEE committee members to support wireless sensors that are instantly accessible over the Internet-with controlled access, of course.
The next article in this series will address more of the economic issues associated with wireless technology. We'll lay out a roadmap that you can use to determine appropriate applications for early success. We'll also examine more of the emerging trends to determine how they might affect decision points and arguments for organizational commitments to this technology. We will suggest critical partnership criteria to support decisions related to the build/buy/subcontract dilemma. The important issue here is for each organization to set a path that is comfortable and potentially profitable for it as the wireless revolution sweeps through the sensor, measurement, and control arenas.
- ABC News broadcast announcing Federal Aviation Administration decision on aircraft wiring. October 1998.
- S.T. Picraux and P.J. McWhorter. December 1998. "The Broad Sweep of Integrated Microsystems," IEEE Spectrum: 24–33.
- C.L. Britton et al. March 2, 1998. "MEMS Sensors and Wireless Telemetry for Distributed Systems," Proc. SPIE Fifth International Symposium on Smart Materials and Structures, San Diego, CA.