Design Considerations for a Distributed Test System
August 1, 2006 | By Faya Peng | Sensors
When you're setting up a test system, you'll need to pay attention to measurement functionality, instrumentation bus capability, and software.
Just as computing systems have benefited from distributed processing to separate tasks and achieve higher performance, many users are applying similar concepts to test systems. Some applications, such as systems with equipment residing in multiple locations or systems with limited room such as in-vehicle tests, physically require a distributed system. These systems have the additional benefit of increasing your processing power and I/O bandwidth.
To successfully implement a distributed system, you need to consider three factors: required measurement functionality, capabilities of the instrumentation buses, and software.
When selecting an instrument, first make sure it meets your requirements in terms of accuracy, sampling rate, frequency, dynamic range, etc. Stand-alone instruments provide a fixed set of functions the vendor has defined for the system. Modular instrumentation systems take advantage of software processing to derive additional measurements from the raw data the modular hardware delivers. The software-defined approach affords greater flexibility than vendor-defined box instruments and, because it is built on PC technologies, has advanced in capability at a much faster rate.
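To illustrate the software-defined approach described above, the sketch below derives measurements on the host PC from raw digitizer samples rather than relying on functions fixed inside the instrument. The function names and the simulated sine-wave input are hypothetical, not part of any vendor's API:

```python
import math

def software_measurements(samples):
    """Compute measurements in software from raw digitizer samples.

    In a modular (software-defined) system, the hardware returns raw
    voltage samples; quantities such as RMS and peak amplitude are
    computed on the host PC rather than inside the instrument, so new
    measurements can be added without changing the hardware.
    """
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    peak = max(abs(s) for s in samples)
    return {"rms": rms, "peak": peak}

# Simulated raw data: one full cycle of a 1 V amplitude sine wave
samples = [math.sin(2 * math.pi * n / 100) for n in range(100)]
result = software_measurements(samples)
# RMS of a unit-amplitude sine is 1/sqrt(2), about 0.707 V
```

The same raw samples could feed any number of such routines (THD, spectral analysis, limit testing), which is exactly the flexibility the vendor-defined box instrument lacks.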
The Instrumentation Bus
The instrumentation bus affects your ability to meet distribution, locality, and performance needs. When designing distributed systems, many people first think of distribution needs—physical requirements, remote locations, and connectivity. For example, your test application might require a distributed system because it has to satisfy packaging size and ruggedness constraints. For in-vehicle testing, you might select remote control of a small PXI or NI's CompactDAQ system (www.ni.com/dataacquisition). This component could be located within the vehicle, and the rest of the system connected locally to the computer interface.
Other applications might require components at various locations, and their data would need to be correlated and synchronized. For instance, creating an acoustic map to characterize the noise made by a plane flying over a designated area would require a distributed test system. You could solve this application using multiple remote PXI systems that are controlled and synchronized from a host computer. For controlling remote systems, you need to consider the programming connectivity options available and their reliability and ease of use. When creating a local system to control tests, review results, and log data from a remote machine, you should evaluate the protocols to find the one that best suits your needs. For instance, NI LabVIEW supports a variety of protocols including TCP/IP, HTTP, and remote panel connection, which allows you to control a program through a Web browser. How well the instrumentation bus accommodates physical restrictions, synchronization across locations, and remote connectivity should all factor into the design of a distributed system.
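As a minimal sketch of TCP/IP-based remote control like that described above, the snippet below sends a text command to a remote controller and reads back a reply. The command vocabulary (`START_ACQ`) and the acknowledgement format are hypothetical; a real system would define its own protocol or use the protocol support built into its test software. A trivial local server stands in for the remote PXI controller so the example is self-contained:

```python
import socket
import threading

def remote_command(host, port, command):
    """Send one text command to a remote test controller over TCP/IP
    and return its reply (hypothetical line-based protocol)."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(command.encode() + b"\n")
        return sock.recv(1024).decode().strip()

def demo_server(server_sock):
    """Stand-in for the remote controller: acknowledge one command."""
    conn, _ = server_sock.accept()
    with conn:
        cmd = conn.recv(1024).decode().strip()
        conn.sendall(f"ACK {cmd}\n".encode())

# Run the stand-in server on an ephemeral local port.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
threading.Thread(target=demo_server, args=(server,), daemon=True).start()

reply = remote_command("127.0.0.1", server.getsockname()[1], "START_ACQ")
# reply == "ACK START_ACQ"
```

Reliability concerns such as reconnection, timeouts, and command sequencing would sit on top of this basic request/reply pattern.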
Locality refers to placing measurement components physically close together and is determined by considerations including control-loop rates, synchronization, and throughput. For instance, one of the major factors in a control system is the rate at which it can complete a cycle of acquiring, processing, and, in response, outputting a signal. Although the computer processor and the number of instructions limit maximum loop frequency, you can achieve faster control-loop rates through hardware timing and triggering. In addition, many applications require hardware synchronization to ensure that measurement signals are properly correlated. Due to clock drift and start latency, software synchronization does not provide timing accurate enough for many applications. Through hardware synchronization, however, you can ensure that signals are synchronized down to the sub-nanosecond level. Critical hardware synchronization features include a reference clock that synchronizes all peripherals and trigger lines that allow parallel event handling. Triggers are essential for synchronous events such as starting two devices simultaneously as well as for asynchronous events such as handshaking and sequencing.
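The clock-drift problem mentioned above is easy to quantify. The arithmetic sketch below (the figures are illustrative assumptions, not measured values) shows why two free-running clocks quickly diverge beyond what most measurement applications can tolerate without a shared hardware reference clock:

```python
def accumulated_drift_us(elapsed_s, relative_drift_ppm):
    """Worst-case timing error, in microseconds, between two
    free-running clocks whose rates differ by the given number of
    parts per million. 1 ppm of relative drift accumulates 1 us of
    error per second of elapsed time."""
    return elapsed_s * relative_drift_ppm

# Two nodes with typical +/-50 ppm crystals can disagree by up to
# 100 ppm relative to each other (illustrative assumption).
error_us = accumulated_drift_us(elapsed_s=60, relative_drift_ppm=100)
# After one minute, the two clocks may disagree by 6000 us (6 ms),
# far beyond the sub-microsecond correlation many tests require.
```

Sharing one reference clock across all peripherals removes this accumulation entirely, which is why it is listed as a critical hardware synchronization feature.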
Another locality consideration is bus throughput. Higher throughput rates are important for applications that send large amounts of data or require data streaming among components. The ability to meet locality needs is largely dependent on bus capabilities.
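When evaluating whether a bus can meet the streaming needs just described, the sustained throughput requirement is a simple product of channel count, sample rate, and sample width. The channel counts and rates below are illustrative assumptions, not recommendations:

```python
def required_throughput_mbs(channels, sample_rate_hz, bytes_per_sample):
    """Sustained data rate, in MB/s, needed to stream acquired data
    continuously from the instrument to the host."""
    return channels * sample_rate_hz * bytes_per_sample / 1e6

# Example: 8 channels digitized at 2 MS/s with 16-bit (2-byte) samples
rate = required_throughput_mbs(channels=8,
                               sample_rate_hz=2_000_000,
                               bytes_per_sample=2)
# rate == 32.0 MB/s of sustained streaming
```

Comparing this figure against a bus's sustained (not peak) bandwidth indicates whether data can stream continuously or must be buffered and transferred in bursts.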