Category Archives: Guides

What are Cold-Cathode Devices?

Some devices, like thermionic valves, contain a cathode that requires heating up before the device can work. However, other devices do not require a hot cathode to function. These devices have two electrodes within a sealed glass envelope that contains a low-pressure gas like neon. With a sufficiently high voltage applied to the electrodes, the gas ionizes, producing a glow around the negative electrode, also known as the cathode. Depending on the gas in the tube, the cathode glow can be orange (for neon), or another color. Since these devices do not require a hot cathode, they are known as cold-cathode devices. Based on this effect, scientists have developed a multitude of devices.

The simplest of cold-cathode devices is the neon lamp. Before the advent of LEDs, neon lamps were the go-to indicator lights. A neon lamp ionizes at around 90 V, the strike or breakdown voltage of the neon gas within it. Once ionized, the gas continues to glow down to around 65 V, its maintain or sustain voltage. The difference between the strike and sustain voltages gives the device a negative-resistance region in its operating curve. Hence, users often build a relaxation oscillator with a neon lamp, a capacitor, and a resistor: the capacitor charges through the resistor until the lamp strikes, discharges through the lamp down to the sustain voltage, and the cycle repeats.
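Using the 90 V strike and 65 V sustain figures above, the period of such a relaxation oscillator follows from the RC charging curve. A minimal sketch (the 120 V supply and RC values are illustrative, not from any particular design):

```python
import math

def neon_osc_period(v_supply, v_strike, v_maintain, r, c):
    """Approximate period of a neon-lamp relaxation oscillator.

    The capacitor charges through R toward v_supply; when it reaches
    v_strike the lamp fires and discharges it back to v_maintain, and
    the cycle repeats (the brief discharge time is neglected).
    """
    if v_supply <= v_strike:
        raise ValueError("supply must exceed the strike voltage")
    return r * c * math.log((v_supply - v_maintain) / (v_supply - v_strike))

# 120 V supply, 1 MOhm resistor, 0.1 uF capacitor, 90 V strike, 65 V sustain
period = neon_osc_period(120, 90, 65, 1e6, 100e-9)
print(f"period = {period * 1000:.1f} ms, frequency = {1 / period:.1f} Hz")
```

Note that the supply must sit above the strike voltage, or the lamp never fires and the oscillation stalls.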

Another everyday use for the neon lamp is as a power indicator for the AC mains. As an AC power indicator, the neon lamp requires a series resistance of around 220 kilohms to 1 megohm to limit the current through it, which also extends its life significantly. Since the electrodes in a neon lamp are symmetrical, using it in an AC circuit causes both electrodes to glow equally.
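The series resistor values quoted above can be sanity-checked with a rough calculation. The sketch below treats the struck lamp as a constant 65 V drop against the RMS mains voltage, which ignores waveform detail but gives the right order of magnitude (the 230 V mains figure is an assumption):

```python
def neon_series_current(v_mains_rms, v_maintain, r_series):
    """Rough RMS current through a neon indicator once struck:
    the lamp is modeled as a fixed v_maintain drop."""
    return (v_mains_rms - v_maintain) / r_series

# Sweep the commonly quoted resistor range on 230 V mains
for r in (220e3, 470e3, 1e6):
    i = neon_series_current(230, 65, r)
    print(f"R = {r / 1e3:.0f} kOhm -> I = {i * 1e3:.2f} mA")
```

The resulting fraction of a milliampere is enough for a visible glow while keeping electrode sputtering, and hence lamp wear, low.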

Neon signs, such as those in Times Square and Piccadilly Circus, also use the same effect. Instead of a short tube like in the neon lamp, neon signs use a long tube shaped in the specific design of the application. Depending on the display color, the tube may contain neon or another gas, together with a small amount of mercury. By applying a fluorescent phosphor coating to the inside of the glass tube, it is possible to produce still more colors. Due to the significant separation between the two electrodes in neon signs, they require a high strike voltage of around 30kV.

Another application of cold-cathode devices is the popular Nixie tube. Although seven-segment LED displays have now largely replaced them, Nixie tubes remain popular for their warm neon glow. Typically, they have ten electrodes, each in the shape of a numeral. In use, the circuit switches on the electrode required for displaying a particular number. The Nixie tube produces very natural-looking digits; hence, people find them beautiful and preferable to the stick-like seven-segment LED displays.

Photographers still use flash tubes to illuminate the scenes they are capturing, typically as camera flashes and strobes. Flash tubes use xenon gas as their filling. Apart from the two regular main electrodes, flash tubes have a smaller trigger electrode near one or both of the main electrodes. In use, the main electrodes have a few hundred volts between them. For triggering, the circuit applies a high-voltage pulse to the trigger electrode. This causes the gas between the two electrodes to ionize rapidly, giving off a bright white flash.

What is Industrial Ethernet?

Earlier, we had a paradigm shift in the manufacturing industry. This was Industry 3.0, and, based on information technology, it boosted automation, enhanced productivity, improved precision, and allowed higher flexibility. Today, we are at the foothills of Industry 4.0, with ML or machine learning, M2M or machine-to-machine communication, and smart technology like AI or artificial intelligence. There is a major difference between the two. While Industry 3.0 offered information to humans, allowing them to make better decisions, Industry 4.0 uses digital information to optimize processes, mostly without human intervention.

With Industry 4.0, it is possible to link the design office directly to the manufacturing floor. For instance, using M2M communications, a CAD or computer-aided design system can communicate directly with machine tools, programming them to make the necessary parts. Similarly, machine tools can provide feedback to the CAD system, sending information about challenges in the production process, so that designers can modify parts suitably for easier fabrication.

Manufacturers use the Industrial Internet, or IIoT, the Industrial Internet of Things, to build their Industry 4.0 solutions. The network plays an important role in forming feedback loops. This allows sensors to monitor processes in real time, and the data thus collected can effectively control and enhance the operation of the machine.

However, it is not simple to implement IIoT. One of the biggest challenges is the cost of investment. But this investment can be justified through better design and manufacturing processes leading to cost savings through increased productivity and fewer product failures. In fact, reducing capital outflows is one way to accelerate adoption of Industry 4.0. Another way could be to use a relatively inexpensive but proven and accessible communication technology, like the Ethernet.

Ethernet is a wired networking option in wide use all over the world. It has good IP interoperability and huge vendor support. Moreover, PoE or Power over Ethernet uses the same set of cables for carrying data as well as power to connected cameras, actuators, and sensors.

Industrial Ethernet, using rugged cables and connectors, builds on the consumer version of the Ethernet, thereby bringing a mature and proven technology to industrial automation. With the implementation of Industrial Ethernet, it is possible to not only transport vital information or data, but also remotely supervise machines, controllers, and PLCs on the shop floor.

Standard Ethernet has high and unpredictable latency, mainly due to its tendency to lose packets and retransmit them. This makes it unsuitable for rapidly moving assembly lines that must run in synchronization. On the other hand, Industrial Ethernet hardware uses deterministic, low-latency industrial protocols, like PROFINET, Modbus TCP, and EtherNet/IP.

For Industrial Ethernet deployment, the industry uses hardened versions of standard cables, such as CAT 5e, while Gigabit Ethernet uses CAT 6 cable. A CAT 5e cable has eight wires formed into four twisted pairs. The twisting limits crosstalk and signal interference, and each pair supports a duplex connection. Gigabit Ethernet, being a high-speed system, uses all four pairs for carrying data. Lower-throughput systems can use two twisted pairs for data, leaving the other two for carrying power or conventional phone service.

E-Fuse Future Power Protection

High-voltage eMobility applications are on the rise. Traditionally, circuit protection has relied on non-resettable fuses, sometimes paired with mechanical relays or contactors. However, that is now changing. Semiconductor-based resettable fuses, or eFuses, are now replacing traditional fuses.

These innovative eFuses represent a significant trend in safeguarding hardware and users in high-voltage and high-power scenarios. Vishay has announced a reference design for an eFuse that can handle high power loads. They have equipped the new eFuse with SiC MOSFETs and a VOA300 optocoupler. The combination can handle up to 40 kW of continuous power load. The design can operate at full power with losses of less than 30 W, without active cooling. The eFuse incorporates essential features like continuous current monitoring, a preload function, and rapid overcurrent protection.

Vishay has designed the eFuse to manage the safe connection and disconnection of a high-voltage power source. For instance, the eFuse can safely connect various vehicle loads to, and disconnect them from, a high-energy battery pack. The eFuse uses SiC MOSFETs as its primary switches, and these are capable of continuous operation up to 100 A. The user can predefine a current limit. When the current exceeds this limit, the eFuse rapidly disconnects the load from the power source, safeguarding the user and the power source or battery pack. In addition, the presence of a short circuit or an excessive load capacitance during power-up causes the eFuse to initiate an immediate shutdown.
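The core trip behavior described above, comparing a shunt measurement against a user-defined limit and latching the switch open, can be sketched in a few lines. This is an illustrative model only, not Vishay's actual control loop, and the sample readings are hypothetical:

```python
class EFuse:
    """Minimal sketch of eFuse trip logic: each shunt-current sample is
    compared against a predefined limit, and the switch latches open on
    overcurrent until explicitly reset."""

    def __init__(self, current_limit_a):
        self.current_limit_a = current_limit_a
        self.closed = True  # MOSFET switch state: True = load connected

    def sample(self, shunt_current_a):
        # abs() because the back-to-back MOSFETs conduct in both directions
        if self.closed and abs(shunt_current_a) > self.current_limit_a:
            self.closed = False  # disconnect load from source
        return self.closed

    def reset(self):
        self.closed = True  # resettable, unlike a conventional fuse

fuse = EFuse(current_limit_a=100.0)
readings = [40.0, 85.0, 120.0, 60.0]  # amperes, hypothetical samples
states = [fuse.sample(i) for i in readings]
print(states)  # the fuse opens at the 120 A sample and stays open
```

The latch-open behavior matters: once tripped, the fuse ignores later in-range readings until a deliberate reset, mirroring how a real protection device avoids chattering back on into a fault.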

The basic design of the eFuse is a four-layer, double-sided PCB or printed circuit board of 150 mm x 90 mm. Each layer has thick copper of 70 µm, as against 35 µm for regular PCBs. The board has some connectors extending beyond its edges. The top side of the PCB carries all the high-voltage circuitry, control buttons, status LEDs, multiple test points, and connectors. The PCB’s bottom side carries the low-voltage control circuitry. It is also possible to control the eFuse remotely via a web browser.

To ensure safety, the user must enable the low-voltage power supply first, and only then enable the high-voltage power supply on the input. For input voltages exceeding 50 V, an LED indicator lights up on the board. Vishay has added two sets of six SiC MOSFETs, with three connected in parallel in a back-to-back configuration. This ensures the eFuse can handle current flow in both directions. A current-sensing shunt resistor, Vishay WSLP3921, monitors the current flowing to the load. Vishay has positioned this shunt resistor strategically between the two parallel sets of MOSFETs.

Vishay has incorporated convenient control options in the eFuse. Users can operate the controls via the push buttons on the PCB, or by using the external Vishay MessWeb controller. Either way unlocks access to an expanded array of features. Alternatively, the user can integrate the eFuse seamlessly into a CAN bus-based system by using an additional chipset in conjunction with the MessWeb controller. Vishay claims to have successfully tested its reference eFuse design.

What is DFMEA?

If you are just entering the world of design, you will have to face a session of DFMEA at some time or other. DFMEA is an acronym for Design Failure Mode and Effects Analysis. In recent years, companies have adopted DFMEA, a subset of FMEA or Failure Mode and Effects Analysis, as a valuable tool. It helps engineers spot potential risks in a product design before they make any significant investments.

Engineers use DFMEA as a systematic tool for mapping the early-warning system of a product. They use it to make sure the product not only functions as they intend it to, but also keeps users happy. It is like taking a peek into the future, catching design flaws before they cause any major damage. Simply put, DFMEA helps to check the overall design of products and components, figuring out anything that might go wrong and the way to fix it. This tool is especially useful in manufacturing industries, where preventing failure is important.

To use DFMEA effectively, the designer must look for potential design failures, observing them from all angles. Here is how they do it.

They first look for a failure mode, which essentially means how the design could possibly fail. For instance, your computer might freeze up when you open too many programs, which is one mode or type of failure.

Then they look for why the failure mode should happen. This could be due to a design defect, or a defect in the quality, system, or application of the part.

Next, the designers look for an effect of the failure. That is, what happens when there is a failure. In our example, a frozen computer can lead to a frustrated user.

In the last stage, designers look for the severity of the failure. They estimate how bad the failure could be for safety, quality, and productivity. Designers typically look for the worst-case scenarios.

To put it in a nutshell, DFMEA helps engineers figure out not only potential issues, but also the consequences of the failures. This way, they can prevent failures from happening in the first place.
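In classic FMEA worksheets, the steps above are often condensed into a Risk Priority Number (RPN), the product of severity, occurrence, and detection ratings, each on a 1 to 10 scale. The article only discusses severity explicitly, so treat this as the standard FMEA extension rather than the article's own method; the ratings below are invented for the frozen-computer example:

```python
def rpn(severity, occurrence, detection):
    """Risk Priority Number from classic FMEA practice:
    each factor is rated from 1 (best) to 10 (worst)."""
    for rating in (severity, occurrence, detection):
        if not 1 <= rating <= 10:
            raise ValueError("ratings must be between 1 and 10")
    return severity * occurrence * detection

# Hypothetical failure modes with invented ratings
modes = {
    "UI freezes under heavy load": rpn(severity=6, occurrence=4, detection=3),
    "unsaved data lost on crash":  rpn(severity=9, occurrence=2, detection=5),
}
# Highest RPN first: the team tackles these failure modes before the rest
for name, score in sorted(modes.items(), key=lambda kv: -kv[1]):
    print(f"{name}: RPN = {score}")
```

Ranking by RPN gives the team an ordered worklist, which matches the article's point that the worst-case scenarios deserve attention first.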

However, DFMEA is never a one-man show. Rather, it is a team effort. Typically, the team has about 4 to 6 members who are fully knowledgeable about the product, and is led by a product design engineer. The team members could include engineers with a materials background, and those from product quality, testing, and analysis. There may also be people from other departments, like logistics, service, and production.

DFMEA is an essential tool in any design process. However, it is a crucial tool in industries handling new products and technology. This includes industries such as software, healthcare, manufacturing, industrial, defense, aerospace, and automotive. DFMEA helps them locate potential failure modes, reducing risks involved with introducing new technologies and products.

The entire DFMEA exercise is a step-by-step process, and the team must think through each step thoroughly before they move on to the next. It is essential they look for and identify the failure, and find out its consequences, before finding ways to prevent it from happening.

What is Voice UI?

Although we usually talk to other humans, our interactions with inanimate objects are almost always silent. That is, until the advent of the Voice User Interface, also called Voice UI or VUI. Voice UI has broken this silent interaction between humans and machines. Today, we have virtual assistants and voice-controlled devices like Siri, Google Assistant, Hound, Alexa, and many more. Most people who own a voice-controlled device say it is like talking to another person.

So, what is Voice UI? Voice UI technology makes it possible for humans to interact with a device or an application through voice commands. As we use digital devices ever more, we have all experienced screen fatigue. This has led to the development of the voice user interface. The advantages are numerous—primarily, hands-free operation and control of a device or application without having to stare at a screen. Five of the world's leading companies, Amazon, Google, Microsoft, Apple, and Facebook, have developed their respective voice-activated AI assistants and voice-controlled devices.

Whether it is a voice-enabled mobile app, an AI assistant, or a voice-controlled device like a smart speaker, voice interactions and interfaces have become incredibly common. For instance, according to a report, 25% of adults in the US own a smart speaker, and 33% of the US population use their voice for searching online.

How does this technology work? Under the hood, several Artificial Intelligence technologies are at work, such as Automatic Speech Recognition, Named Entity Recognition, and Speech Synthesis. The VUI speech components and the backend infrastructure are backed by AI technologies and typically reside in a public or private cloud. It is there that the VUI processes the user's speech. After deciphering and translating the user's intent, the AI technology returns a response to the device.
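The speech-in, response-out flow described above is essentially a three-stage pipeline. The sketch below stubs out each stage with plain functions so the data flow is visible; real systems would call cloud ASR, NLU, and TTS services, and every function name here is hypothetical:

```python
# Illustrative VUI pipeline with stubbed stages.

def recognize_speech(audio_bytes):
    # ASR stub: a real service would transcribe the audio buffer
    return "set a timer for ten minutes"

def extract_intent(transcript):
    # NLU stub: named-entity recognition plus intent classification
    if "timer" in transcript:
        return {"intent": "set_timer", "duration_min": 10}
    return {"intent": "unknown"}

def synthesize_reply(intent):
    # TTS stub: a real service returns audio; we return the reply text
    if intent["intent"] == "set_timer":
        return f"Timer set for {intent['duration_min']} minutes."
    return "Sorry, I didn't catch that."

# The three stages chained together, audio in to response out
reply = synthesize_reply(extract_intent(recognize_speech(b"...")))
print(reply)  # Timer set for 10 minutes.
```

Keeping the stages separate is also how production systems are built: the ASR, NLU, and TTS components can each be swapped or upgraded independently in the cloud backend.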

The above is the basics of Voice UI technology, albeit in a nutshell. For a better user experience, most companies also include additional sound effects and a graphical user interface. The sound effects and visuals help the user know whether the device is listening, processing, or responding.

Today, Voice UI technology is widespread, and it is available in many day-to-day devices like smartphones, desktop computers, laptops, wearables, smartwatches, smart TVs, sound systems, smart speakers, and Internet of Things devices. However, everything has advantages and disadvantages.

First, the advantages. VUI is faster and more convenient than typing commands as text. Not everyone is comfortable typing commands, but almost all can use their voice to request a task from a VUI device. Voice commands, being hands-free, are useful while cooking or driving. Moreover, you do not need to face or look at the device to send voice commands.

Next, the disadvantages. There are privacy concerns, as a person nearby can overhear your commands. AI technology is still in its infancy and is prone to misinterpretation or inaccuracy, especially when differentiating homophones like ‘their’ and ‘there’. Moreover, voice assistants may find it difficult to decipher commands in noisy public places.

What is UWB Technology?

UWB is the acronym for Ultra-Wideband, a 132-year-old communications technology. Engineers are revitalizing this old technology for connecting wireless devices over short distances. Although more modern technologies like Bluetooth are available for the purpose, industry observers are of the opinion that UWB can prove more versatile and successful than Bluetooth. According to them, UWB has superior speed, uses less power, is more secure, provides superior device ranging and location discovery, and is cheaper than Bluetooth.

Therefore, companies are researching and investing in UWB technology. This includes names like Xtreme Spectrum, Bosch, Sony, NXP, Xiaomi, Samsung, Huawei, Apple, Time Domain, and Intel. Apple is already using UWB chips in its iPhone 11, allowing it to obtain superior positioning accuracy and ranging, as the technology uses time-of-flight measurements.
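Time-of-flight ranging reduces to simple arithmetic once timestamps are available. In a basic two-way scheme, the initiator measures the round-trip time and subtracts the responder's known reply delay; the timing values below are hypothetical:

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance_m(t_round_s, t_reply_s):
    """Two-way time-of-flight ranging: subtract the responder's known
    reply delay from the measured round trip, halve, and scale by c."""
    return C * (t_round_s - t_reply_s) / 2

# Round trip of 1.000020 ms against a fixed 1.0 ms reply delay:
# the 20 ns of pure flight time corresponds to about a 3 m range.
d = tof_distance_m(t_round_s=1.000020e-3, t_reply_s=1.0e-3)
print(f"estimated range = {d:.2f} m")
```

A 1 ns timing error corresponds to roughly 15 cm of range error, which is why UWB's very fine time resolution translates directly into superior positioning accuracy.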

Marconi’s first man-made radios, based on spark-gap transmitters, used UWB for wireless communication. The government banned UWB signals for commercial use in 1920. However, since 1992, the scientific community has paid greater attention to UWB technology.

UWB or Ultra-Wideband technology offers a protocol for short-range wireless communications, similar to what Wi-Fi or Bluetooth offer. It uses short-pulse radio waves over a spectrum of frequencies ranging from 3.1 to 10.6 GHz and does not require a license for its applications.

In UWB, the bandwidth of the signal is 500 MHz or more, or the fractional bandwidth around the center frequency is greater than 20%. Compared to conventional narrowband systems, the very wide bandwidth of UWB signals leads to superior performance indoors. This is because the wide bandwidth offers significantly greater immunity from channel effects in dense environments. It also allows very fine time-space resolution, resulting in highly accurate indoor positioning of UWB devices.
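The two-part definition above, at least 500 MHz of absolute bandwidth or a fractional bandwidth of at least 20%, is easy to express as a check. The fractional bandwidth is 2(fH - fL)/(fH + fL); the example bands below are chosen for illustration:

```python
def is_uwb(f_low_hz, f_high_hz):
    """UWB test per the usual definition: absolute bandwidth of at least
    500 MHz, or fractional bandwidth 2*(fH - fL)/(fH + fL) of at least 20%."""
    bw = f_high_hz - f_low_hz
    fractional = 2 * bw / (f_high_hz + f_low_hz)
    return bw >= 500e6 or fractional >= 0.20

print(is_uwb(3.1e9, 3.7e9))    # 600 MHz wide: qualifies as UWB
print(is_uwb(2.40e9, 2.48e9))  # Bluetooth band, 80 MHz and ~3%: does not
```

The contrast with the 80 MHz Bluetooth band makes the "ultra" in Ultra-Wideband concrete.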

As its spectral density is low, often below environmental noise, UWB ensures the security of communications with a low probability of signal detection. UWB allows transmission at high data rates over short distances. Moreover, UWB systems can comfortably co-exist with other narrowband systems already under deployment. UWB systems allow two different approaches for data transmission.

The first approach uses ultra-short pulses, often called impulse radio transmission, in the picosecond range, covering all frequencies simultaneously. The second approach uses OFDM or orthogonal frequency-division multiplexing to subdivide the entire UWB bandwidth into a set of broadband channels.

While the first approach is cost-effective, it suffers a degradation of the signal-to-noise ratio. Impulse radio transmission does not involve a carrier; therefore, it uses a simpler transceiver architecture compared to traditional narrowband transceivers. For instance, the UWB antenna radiates the signal directly. An easy-to-generate UWB pulse is the Gaussian monocycle or one of its derivatives.
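The Gaussian monocycle mentioned above is simply the first derivative of a Gaussian pulse. A small sketch, normalized so the peaks sit at plus or minus one (the 50 ps width parameter is an illustrative assumption, not tied to any standard):

```python
import math

def gaussian_monocycle(t, sigma):
    """First derivative of a Gaussian pulse, scaled so its peaks
    (at t = +sigma and t = -sigma) have unit amplitude."""
    return -(t / sigma) * math.exp(0.5 - t * t / (2 * sigma * sigma))

sigma = 50e-12  # 50 ps width parameter (illustrative)
ts = [i * 1e-11 - 2e-10 for i in range(41)]  # sample -200 ps .. +200 ps
samples = [gaussian_monocycle(t, sigma) for t in ts]
peak = max(samples)
print(f"peak amplitude = {peak:.3f}")
```

Because the pulse lasts only a few hundred picoseconds, its energy spreads across gigahertz of spectrum, which is exactly the impulse-radio property the first approach exploits.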

The second approach offers better performance as it uses the spectrum significantly more effectively. Although the complexity is higher, since the system requires more signal processing, it substantially improves the data throughput. However, the higher performance comes at the expense of higher power consumption. The application defines the choice between the two approaches.

Are Lithium Iron Phosphate Batteries Better?

Going by recent developments in battery technology, the LFP or Lithium Iron Phosphate battery is set to pose a serious challenge to the omnipresent Lithium-ion type.

As far as e-mobility is concerned, Lithium-ion batteries have some serious disadvantages. These include higher cost and lower safety as compared to other chemistries. On the other hand, recent advancements in battery pack technology have led to an enhancement in the energy density of LFP batteries so that they are now viable for all kinds of applications related to e-mobility—not only in vehicles but also in shipping, such as in battery tankers.

In their early years of development, LFP cells had a lower energy density as compared to those of Lithium-ion cells. Improved packaging technology had bumped up the energy density to about 160 Wh/kg, but this was still not enough for use in e-mobility applications.

With further improvements in technology, LFP batteries now operate better at low temperatures, charge faster, and have a longer cycle life. These features are making them more appealing for many applications, including their use in electric cars and in battery tankers.

However, LFP batteries still continue to face several challenges, especially in applications involving high power. This is mainly due to the unique crystal structure of LFP, which reduces its electronic conductivity. Scientists have been experimenting with different approaches, such as reducing the directional crystal growth or particle size, using different conductive layer coatings, and element doping. These have not only helped to improve the electronic conductivity but have increased the thermal stability of the batteries as well.

Comparing LFP batteries with the Lithium-ion types shows them to have individual advantages in different key characteristics. For instance, Lithium-ion batteries offer higher cell voltages, higher power density, and better specific capacity. These characteristics lead to Lithium-ion batteries offering higher volumetric energy density suitable for achieving longer driving ranges.

In contrast, LFP batteries offer a longer cycle life, better safety, and better rate capability. As the risk of thermal runaway, in case of mechanical damage to a cell, is also much lower, these batteries are now popularly used for commercial vehicles with frequent access to charging, such as scooters, forklifts, and buses.

It is also possible to fully charge LFP batteries in each cycle, in contrast to having to stop at 80% to avoid overcharging some types of Lithium-ion batteries. Although this allows simplification of the battery management algorithm, it adds other complexities for Battery Management Systems managing LFP cells.

Another key advantage of LFP batteries is that they do not require cobalt and nickel in their cathodes. The industry fears that in the coming years, sourcing these metals will become more difficult. Even with mining output of both elements projected to double by 2030, supply may not meet the increase in demand.

All the above is making LFP batteries look increasingly interesting for e-mobility applications, with more car manufacturers planning to adopt them in their future cars.

What are Flash SSDs?

Earlier, we used traditional hard disk drives in our computers. These were mechanically spinning magnetic disks with read-write heads. Nowadays, we use SSDs or Solid-State Drives that have no moving parts. SSDs can retain saved data without power, as they use NAND flash memory. To increase data density, NAND cells can store more than one bit of information each. SSDs are accordingly named single-, multi-, triple-, quad-, and penta-level-cell drives, according to the number of bits each cell can hold.

Each cell type has its own advantages and shortcomings, spanning speed, price, and reliability. For instance, SLCs or Single-Level Cells have a lifespan of about 50,000 to 100,000 program/erase cycles and can withstand high-intensity write operations.

MLCs or multi-level cells with two bits per cell can expect a lifespan of about 10,000 cycles and are mostly suitable for enterprise data centers.

TLCs or triple-level cells with three bits per cell can expect a lifespan of about 3,000 cycles and are useful for digital consumer products.

QLCs or quad-level cells with four bits per cell can expect a lifespan of about 2,000 cycles and are suitable for read-heavy operations, streaming media, and content delivery applications.

No data is available for the lifespan of PLCs or penta-level cells with five bits per cell. These SSDs are suitable for long-term storage of data such as in data archives.
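The cycle counts listed above translate into drive-lifetime estimates. A common back-of-the-envelope metric is terabytes written (TBW): cell endurance times capacity, divided by the write amplification of the controller. The 1 TB capacity and write-amplification factor of 2 below are illustrative assumptions:

```python
def drive_endurance_tbw(capacity_gb, pe_cycles, write_amplification=1.0):
    """Rough rated-endurance estimate in terabytes written: each cell
    survives pe_cycles programs, while write amplification inflates the
    physical writes the controller performs per host write."""
    return capacity_gb * pe_cycles / write_amplification / 1000

# Compare cell types for a hypothetical 1 TB drive with WAF = 2
for name, cycles in [("SLC", 100_000), ("MLC", 10_000),
                     ("TLC", 3_000), ("QLC", 2_000)]:
    tbw = drive_endurance_tbw(1000, cycles, write_amplification=2.0)
    print(f"1 TB {name} drive: ~{tbw:,.0f} TBW")
```

The spread of more than an order of magnitude between SLC and QLC is why the denser cell types end up in read-heavy roles while SLC serves write-intensive caches.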

Flash SSDs have revolutionized the storage of enterprise data in all its forms. They have enabled faster boot times and faster application starts on PCs and mobile devices. They have facilitated the blistering performance of storage arrays in workloads like business analytics. In most performance metrics, flash SSDs far outshine the older hard disk drives.

Speed aside, flash SSDs offer additional benefits. They are far more durable while being less susceptible to damage from abrupt physical shocks and movements, as compared to the traditional HDDs. Additionally, they use much less power to operate. Even though they cost more than the HDDs per gigabyte, the improved performance of SSDs overcomes their higher expense for most applications.

Flash SSDs store data in memory cells built from FGTs or floating-gate transistors, each of which can store a binary 0 or 1. With its two gates, each FGT behaves like an electrical switch with current flowing between two points. NAND flash is so named because it uses NOT-AND logic gates. Power is not necessary to retain data in the flash cells: in the absence of power, the charge trapped on the floating gate keeps the data intact in the memory cells.

Flash SSDs are solid state, meaning they have no mechanical parts to wear out. However, SSDs can nonetheless fail. One measure of an SSD is its lifespan, the number of program/erase cycles the drive can complete before degradation and failure. Manufacturers mitigate this with wear-leveling technology, which prolongs the life of the SSD by distributing program/erase cycles evenly across all the NAND cells in the drive.
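The idea behind wear leveling can be shown with a toy allocator that always writes to the least-worn block, so no block's erase count runs far ahead of the rest. This is a deliberately simplified sketch; real controllers also track data temperature, relocate static data, and manage spare blocks:

```python
# Minimal wear-leveling sketch: each program/erase operation targets
# the NAND block with the fewest erases so wear spreads evenly.

def pick_block(erase_counts):
    """Choose the block index with the lowest erase count."""
    return min(range(len(erase_counts)), key=lambda b: erase_counts[b])

erase_counts = [0, 0, 0, 0]   # four NAND blocks in a toy drive
for _ in range(10):           # ten program/erase operations
    blk = pick_block(erase_counts)
    erase_counts[blk] += 1

print(erase_counts)  # counts differ by at most one: wear is even
```

Without this policy, a hot file rewritten in place would exhaust one block's endurance while the others sat unused, failing the drive long before its rated lifetime.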

New Circuit Protection Technologies

A wide variety of vehicle models is entering the EV market these days. The demand is for decreased charging times and increased range. This heightens the challenges not only for electrical system performance but also for circuit protection.

For instance, decreasing charging times requires systems using higher voltages and higher currents. This has necessitated the shift from 400 V systems to 800 V, bringing with it major challenges for the design of circuit protection, especially on the battery side. That is because manufacturers must now consider the increased fault currents that the protection components must handle.

With motor currents and power ramping up, circuit protection and switching devices also face higher stresses. They now need to withstand not only the higher operating currents but also the higher cycling requirements. Increased range means higher fault currents.

Therefore, circuit protection requirements are moving in several directions simultaneously. SiC MOSFET switches, acting as solid-state resettable transistor switches, address the high-voltage, low-current subsystems.

The power distribution box in the vehicle is still using the conventional system architecture of a coordinated fuse and contactor. Coordination between the two is necessary to ensure they cover the full range of possible faults from a range of underlying causes including different states of charge of the battery.

Another circuit protection technique is the pyrotechnic approach. This comes into play in events of a catastrophic nature, such as in crashes, when it is necessary to physically cut the busbar. These systems are mostly triggered by circuits that deploy the airbag and work to quickly isolate the battery from the rest of the vehicle. This helps to protect the driver, the passengers, and the first responders from fire and explosion from short circuits through the body of the vehicle.

The above is leading to the development of newer types of protection, such as the breaktor, which fully coordinates circuit protection and switching. Its design allows the breaktor to trigger passively or to actively interrupt in case of power loss, thereby improving the functional safety of critical protection systems. Moreover, it has the ability to reset itself.

Another is the automotive precision bidirectional eFuse, which is increasingly becoming a common device in vehicles. Traditional automotive fuses can be low in accuracy and slow to react. This can be a safety issue, as the safety of the system is inversely proportional to the response time of a fuse. An eFuse has not only high accuracy but also a low response time, which increases the safety of the system.

However, there is a durability issue related to the fuses and contactors that vehicle manufacturers use. The solution for this is the pyrotechnic switch, a protection device based on a triggerable circuit similar to that of an airbag. It produces a controlled explosion to sever a conducting busbar. Pyrotechnic switches, while solving the challenge of coordination, must rely on accurate triggering rather than on the passive reaction of fuses. Additional components are necessary to ensure reliable triggering.

All the above protection systems require a trade-off between speed and durability. While a big fuse can be slow to operate, a smaller one may be faster but may suffer from a fatigue risk.

CP Coolers for Storing Reagents

In analytical chemistry, various reagents are necessary to detect the presence or absence of a substance, or for checking the occurrence of a specific reaction. For identifying or measuring a target substance, medical and laboratory technicians need to use reagents that cause a biological or chemical reaction to occur. For instance, biotechnologists consider oligomers, model organisms, antibodies, and specific cell lines as reagents for identifying and manipulating cell matter. Such reagents, especially those that biotechnologists use, have narrow operating temperature windows and therefore, require freezing or refrigeration.

If kept at room temperature, these temperature-sensitive reagents may degrade or become contaminated by microbial growth, affecting their testing integrity. Most of these reagents will degrade and deteriorate within hours if stored without proper and precise refrigeration. Moreover, some reagents will be negatively affected if the storage temperature is too low, or if they are subjected to multiple freeze-thaw cycles. Precise monitoring and stabilization of temperature below ambient is critical for extending the life of reagents, ensuring the accuracy and reliability of medical and laboratory tests, and keeping replacement costs down.

Manufacturers are using thermoelectric-based cooling solutions for precise temperature control. These are solid-state heat-pump devices, moving heat via the thermoelectric effect. In operation, direct current flowing through the cooler creates a temperature differential across the module. This allows one side of the thermoelectric cooler to get cold, suitable for heat absorption, while the other side heats up, making it possible to dissipate heat.

In actual operation, manufacturers typically attach forced-convection heat sinks to the hot side of thermoelectric coolers to help dissipate the heat to the ambient. The action is reversible: by reversing the current flow, the thermoelectric cooler can be made to heat the cold side. This dual capability, together with adequate control circuitry, enables precise temperature control in the unit.
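The heat pumped by a thermoelectric module is commonly estimated with the textbook Peltier balance: Peltier pumping minus half the Joule heating minus heat conducted back from the hot side. The coefficients below are illustrative placeholders, not data for any Laird part:

```python
def peltier_cooling_w(seebeck_v_per_k, i_a, t_cold_k,
                      resistance_ohm, conductance_w_per_k, dt_k):
    """Net heat absorbed on the cold side of a thermoelectric module:
    Qc = S*I*Tc - I^2*R/2 - K*dT (standard single-stage model)."""
    return (seebeck_v_per_k * i_a * t_cold_k
            - 0.5 * i_a ** 2 * resistance_ohm
            - conductance_w_per_k * dt_k)

# Illustrative module: S = 0.05 V/K, R = 2 ohm, K = 0.5 W/K,
# running 5 A with a 285 K cold side and a 15 K temperature differential
q = peltier_cooling_w(seebeck_v_per_k=0.05, i_a=5.0, t_cold_k=285.0,
                      resistance_ohm=2.0, conductance_w_per_k=0.5, dt_k=15.0)
print(f"net cooling = {q:.2f} W")
```

The model also shows why cooling power falls as the temperature differential grows: the K times dT back-conduction term eats directly into the pumped heat, which is consistent with the maximum-differential ratings quoted for such coolers.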

Compared to regular technologies like compressor-based systems, thermoelectric coolers such as the CP10-31-05 from Laird Thermal Systems Solutions deliver accurate temperature control in a more compact, stable, efficient, and reliable package. No refrigerants are necessary for the operation, making them friendly to the environment.

Featuring solid-state construction with no moving parts, the CP series of thermoelectric coolers operates extremely reliably, silently, and at low power. Their small footprint allows designers to integrate them into various instruments with great flexibility, and because of their solid-state operation, they can be mounted in any orientation.

Laird Thermal Systems Solutions has designed its CP series as compact and rugged thermoelectric cooling products. They operate at higher currents, making them suitable for large heat-pumping applications like reagent storage systems. Designers mount the CP series coolers near the storage chamber to regulate the temperature within the reagent chamber accurately and closely. The CP series offers a direct-to-air configuration, with a maximum cooling power of about 125 W and a temperature differential of 67 °C at an ambient temperature of 25 °C.

The CP series of thermoelectric coolers is available in a wide range of capacities, shapes, and power levels, meeting the varied requirements of reagent cooling.