In the second part of this article we will discuss how cooling capacity and controls affect thermal performance. Part 1 addressed the variables in a test environment and the criteria for matching chamber size to the device under test (DUT).
The specified temperature range, thermal cycling, and test duration determine the minimum requirements for chamber performance. If the spec calls for testing at -40°C, for example, you clearly need a system that can achieve at least that temperature with a passive DUT. However, that system may take a very long time to get there, or may never get there at all if the device carries an active load. We'll discuss temperature transitions below.
The cooling method, system controls, and enclosure design each factor into optimizing the efficiency and speed of test outcomes. Specifying a thermal system is often driven by how demanding the test specifications are, as well as by facility infrastructure. An overview of these considerations follows.
Cryogenic or Compressor-Based Cooling
Compressor-based cooling uses compressors and conventional refrigerants in a closed-loop system. Cryogenic cooling uses expendable liquid nitrogen (LN2) or liquid carbon dioxide (LCO2) in an open-loop system. Figure 1 shows a cooling curve for each method, and figure 2 shows their respective temperature ranges.
Fig. 1: Cryo vs. compressor based performance. Cooling curves for LN2, LCO2 and various sized compressor units.
Fig. 2: Temperature range of various cooling methods.
Specifications that require extreme temperature ranges, fast transitions, or management of high heat dissipation will need the cooling power of cryogenics. A facility equipped to handle bulk delivery and use of liquefied gases is another reason to employ a system with cryogenic cooling. Compressor-based systems are better suited for long dwell times because a closed-loop system does not require expendable coolants. They are also the choice when a facility has no provisions for handling cryogenic materials. For more on this subject, refer to Three Factors that Determine a Cooling Method for Electronics Testing.
Heat Transfer Capacity Determines Thermal Response Time
Industrial measurement devices, particularly in aerospace, automotive, and oil & gas, continue to extend their operating range to +150°C, +200°C, and above. As specified temperature ranges increase, so does the need for greater test system performance. Otherwise, the time to transition between hot and cold set points that are further apart will increase significantly. This is especially true when going cold. The performance relationship between device size and chamber volume is shown in figure 3.
Fig. 3: Performance relationship between device size and chamber volume. Using the same UUT under the same load, a properly sized environment reached -10°C in 50s, or three to nine times faster than the larger chambers.
Two main variables determine system throughput:
- The temperature difference between the system's cooling capacity and the specified set point (dT)
- The chamber's air flow properties, expressed as a convective heat transfer coefficient (Hc)

Combined with the surface area (A) of the UUT, the heat transfer rate (Q) can be expressed by the equation Q = Hc × A × dT.
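The relationship above can be sketched in a few lines of code. This is an illustrative calculation only; the coefficient and dimensions below are hypothetical placeholders, not values for any particular chamber or UUT.

```python
def heat_transfer_rate(h_c, area_m2, dt_kelvin):
    """Convective heat transfer rate Q = Hc * A * dT, in watts.

    h_c: convective heat transfer coefficient, W/(m^2*K) --
         set by the chamber's air flow; value below is illustrative.
    area_m2: exposed surface area of the UUT, in m^2.
    dt_kelvin: temperature difference between chamber air and UUT, in K.
    """
    return h_c * area_m2 * dt_kelvin

# Illustrative numbers: forced-air h_c of 50 W/(m^2*K),
# a 0.02 m^2 device, and 15 K between chamber air and UUT.
q = heat_transfer_rate(50.0, 0.02, 15.0)
print(q)  # 15.0 (watts)
```

Note that Q scales linearly with each factor: doubling the air-side coefficient (more flow) or the temperature headroom (more capacity) doubles the rate at which heat is pulled from the UUT.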
For example, a system capable of reaching -55°C with no load takes 200 s to cool a DUT to a set point of -40°C. Introduce a system with a thermal capacity of -90°C (same chamber volume and DUT) and the set point is reached in 100 s, halving the ramp time. Figure 4 depicts the difference in response between two systems of different capacity.
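The roughly two-to-one ratio falls out of simple first-order cooling. The sketch below assumes an exponential (Newton's-law) approach toward the system's capacity temperature with an arbitrary time constant; the start temperature and time constant are assumptions for illustration, not measured chamber data.

```python
import math

def ramp_time(t_start, t_capacity, t_set, tau=100.0):
    """Seconds for a first-order cool-down to reach t_set.

    Assumes T(t) = t_capacity + (t_start - t_capacity) * exp(-t / tau),
    i.e. the chamber asymptotically approaches its capacity temperature.
    tau is an illustrative chamber time constant in seconds.
    """
    return tau * math.log((t_start - t_capacity) / (t_set - t_capacity))

t_55 = ramp_time(25.0, -55.0, -40.0)  # system with -55 degC capacity
t_90 = ramp_time(25.0, -90.0, -40.0)  # system with -90 degC capacity
print(round(t_55 / t_90, 2))  # ~2.01: the -90 degC system is twice as fast
```

The -55°C system must crawl through the last few degrees, since its driving dT shrinks toward zero near the set point, while the -90°C system still has 50 degrees of headroom when it arrives.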
Fig. 4: Heat transfer capacity determines environment response time. Low-temperature cooling capability and flow rate will bring DUT to temperature faster.
Controlling the Test
Bringing the test environment to temperature quickly and accurately relies on the design of the controls, as well as the cooling capacity and enclosure specification. When fast transition times are important, software algorithms need to drive the environment past its set point. This is followed by equally rapid control to reach equilibrium at the desired temperature. Figure 5 illustrates this point. This is accomplished by programming the controls in one of two ways: from an earlier characterization of the chamber, or by controlling the temperature based on monitoring the DUT.
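The DUT-monitoring approach can be illustrated with a simple overshoot schedule. This is a minimal sketch of the idea, not any vendor's control algorithm; the boost and band values are hypothetical.

```python
def commanded_air_temp(dut_temp, set_point, boost=15.0, band=3.0):
    """Illustrative overshoot schedule for DUT-referenced control.

    While the DUT is far from the set point, command the chamber air
    past the set point (by `boost` degrees) to speed the transition;
    once the DUT is within `band` degrees, fall back to the set point
    itself so the DUT settles without overshooting.
    """
    error = dut_temp - set_point
    if abs(error) > band:
        # Drive past the set point in the direction of travel.
        return set_point - boost if error > 0 else set_point + boost
    return set_point

print(commanded_air_temp(25.0, -40.0))   # -55.0: overdrive while cooling
print(commanded_air_temp(-38.0, -40.0))  # -40.0: settle at the set point
```

A real controller would ramp the commanded temperature back smoothly (or use characterization data) rather than switch in one step, but the principle is the same: the chamber air leads the DUT so the DUT reaches temperature sooner.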
Fig. 5: Environment control using temperature of UUT. Control algorithms drive chamber temperature past the set point to bring the UUT to temperature faster.
All environments experience thermal loss to ambient due to inefficiencies in the enclosure walls. That loss will increase as the enclosure temperature moves away from ambient.
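That dependence on distance from ambient can be seen in a steady-state wall-loss estimate. The U-value and wall area below are assumptions chosen for illustration, not properties of a specific enclosure.

```python
def wall_heat_loss(u_value, wall_area_m2, t_inside, t_ambient):
    """Steady-state conduction loss through enclosure walls, in watts.

    u_value: overall wall heat-transfer coefficient, W/(m^2*K)
             (illustrative; set by insulation thickness and material).
    """
    return u_value * wall_area_m2 * abs(t_inside - t_ambient)

# Loss grows linearly with distance from a 25 degC ambient:
print(wall_heat_loss(0.5, 2.0, -40.0, 25.0))  # 65.0 W at -40 degC
print(wall_heat_loss(0.5, 2.0, 175.0, 25.0))  # 150.0 W at +175 degC
```

Every watt lost through the walls is a watt the cooling (or heating) system must make up, which is why extreme set points demand both capacity headroom and good insulation.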
What impact can this have? The larger the enclosure, the more circulation the DUT needs to maintain a uniform temperature. For example, if a chamber has insufficient circulation (undersized air flow or oversized space) or has not been characterized, the DUT temperature can vary as much as ±5°C at +175°C. An example of increasing heat loss is shown in figure 6.
Fig. 6: Example of heat loss due to insufficient air flow. In this case, the DUT never reaches its set points.
A properly sized chamber, one that is optimized for air flow, insulation, and control, will minimize thermal loss and improve uniformity. Positioning the DUT away from chamber walls can also improve uniformity.
Ensuring Test Accuracy
Regardless of chamber design, the test environment should provide accurate and consistent performance throughout its temperature range. While a controller can adjust quickly to maintain stable temperatures within a degree or less, there may be other influences that can affect accuracy.
The equipment itself can introduce unintended variables into test or calibration results. Stimulating pressure sensors, for example, requires an environment where the pressure inside the chamber is stable; vaporization of a cryogenic gas can induce pressure changes, skewing sensor measurements. In another application, testing accelerometers requires a low-vibration environment, yet the blowers used to create air flow can add unwanted vibration.
There are techniques to reduce equipment noise to negligible levels. Equipment-generated influences on sensor measurements should be a consideration when specifying a thermal test system.
Many types of industrial sensors and transmitters require temperature compensation. Various types are listed in Figure 7.
Fig. 7: Example of devices using volumetric flow measurement that need temperature compensation.
Manufacturing these devices typically requires a range of precision thermal environments to accommodate the product life cycle: conditioning, verification, calibration, life testing, design qualification, failure analysis, quality, and regulatory audits.
Advances in sensor and electronics development allow devices to operate at wider temperature extremes. This presents challenges to achieving production throughput goals: while many controlled environments can support testing to design requirements, they may be slow to perform.
Failing to consider every aspect of configuring a thermal test system can add time to production. These aspects include user access, floor space, cooling capacity, control functions, enclosure sizing and thermal properties, system-induced noise, and placement of DUTs.
Increasingly demanding application requirements, e.g., in automotive, chemical, and oil and gas, are all the more reason to seek the proper balance among the device, test specification, thermal system, and facility infrastructure. That balance is essential to efficient production of sensor and transmitter devices.