Monthly Archives: December 2021

Double-Sided Cooling for MOSFETs

Emission regulations for the automotive industry continue to tighten. To meet these demands, the industry is moving rapidly toward the electrification of vehicles, relying primarily on batteries and electric motors. However, hybrid and electric vehicles also depend on power electronics to control their performance.

In this context, European companies are leading the way with their innovative technologies. This is especially so in the development of power components and modules, and specifically in the compound semiconductor materials field.

ICs that handle electrical power increasingly use gallium nitride (GaN) and silicon carbide (SiC). Most of these are wide-bandgap devices that operate at high temperatures and voltages while delivering the high efficiency typically demanded in automotive applications.

Silicon carbide is particularly appealing to the automotive industry because of its physical properties. While silicon breaks down at an electrical field of about 0.3 MV/cm, SiC can withstand 2.8 MV/cm. Additionally, SiC offers an internal resistance about 100 times lower than that of silicon. These parameters mean a smaller SiC chip can handle the same current while operating at a higher voltage, allowing smaller systems overall.
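
As a rough, first-order illustration of what the higher critical field buys, the Python sketch below estimates the drift-layer thickness a silicon and a SiC device would each need to block the same voltage, assuming a simplified uniform-field model; the 1200 V rating is an assumed example, not a figure from the text.

    # First-order estimate: drift-layer thickness needed to block a given voltage,
    # assuming a simplified uniform electric field (real devices use graded doping).
    E_CRIT_SI = 0.3e6    # V/cm, approximate critical field of silicon
    E_CRIT_SIC = 2.8e6   # V/cm, approximate critical field of SiC

    def min_drift_thickness_cm(blocking_voltage_v, critical_field_v_per_cm):
        """Thinnest drift layer that can support the voltage at the critical field."""
        return blocking_voltage_v / critical_field_v_per_cm

    v_block = 1200.0  # V, an assumed device rating for illustration
    t_si = min_drift_thickness_cm(v_block, E_CRIT_SI)
    t_sic = min_drift_thickness_cm(v_block, E_CRIT_SIC)
    print(f"Si  drift layer: {t_si * 1e4:.1f} um")   # ~40 um
    print(f"SiC drift layer: {t_sic * 1e4:.1f} um")  # ~4.3 um, roughly 9x thinner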

Apart from functioning more efficiently at elevated temperatures, a full SiC MOSFET module can reduce switching losses by 64% when operating at a chip temperature of 125 °C. Power control units that drive traction motors in hybrid electric vehicles must operate inside engine compartments, which places additional thermal loads on them.

Manufacturers are now exploring various solutions for improving the efficiency, durability, and reliability of SiC MOSFETs under these operating conditions. One of them is to reduce the amount of wire bonding by using double-sided cooling structures, which cool the power semiconductor chips more effectively. As a result, overmolded modules with double-sided cooling are rapidly gaining popularity, especially for mid-power and low-cost applications.

Researchers at North Carolina State University have developed a prototype inverter using SiC MOSFETs that can transfer 99% of the input energy to the motor, about 2% more than silicon-based inverters achieve under regular conditions.

While electric-vehicle inverters achieved only about 4.1 kW/L in 2010, new SiC-based inverters can deliver about 12.1 kW/L. This is very close to the 13.4 kW/L goal that the US Department of Energy set for inverters to achieve by 2020.

The new power component uses double-sided cooling, enabling it to dissipate heat more effectively than earlier versions. These double-sided, air-cooled inverters can operate at up to 35 kW, eliminating the need for heavy and bulky liquid-cooling systems.

The power modules use FREEDM Power Chip on Bus MOSFET devices to reduce parasitic inductance; the integrated power interconnect structure helps achieve this. With the power chips attached directly to the busbar, their thermal performance improves further. Air acts as the dielectric fluid, providing the necessary electrical isolation, while the busbar doubles as an integrated heatsink. The thermal resistance of the power module can reach about 0.5 °C/W.
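
Given a thermal-resistance figure like the 0.5 °C/W quoted above, the chip temperature rise follows directly from delta-T = P x R-theta. The sketch below is a generic back-of-the-envelope check, not the FREEDM module's actual thermal model; the ambient temperature and loss figure are assumed values.

    # Back-of-the-envelope temperature estimate: delta_T = P_loss * R_theta.
    R_THETA = 0.5      # °C/W, module thermal resistance quoted in the text
    T_AMBIENT = 85.0   # °C, assumed under-hood ambient temperature
    p_loss = 60.0      # W, assumed conduction plus switching loss per module

    delta_t = p_loss * R_THETA
    t_chip = T_AMBIENT + delta_t
    print(f"Temperature rise: {delta_t:.1f} °C")          # 30.0 °C
    print(f"Estimated chip temperature: {t_chip:.1f} °C")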

Smart Batteries with Sensors

Quick-charging batteries are in vogue now. Consumers are demanding more compact, quick-charging, lightweight, high-energy-density batteries for all types of electronic devices, including high-efficiency vehicles. Whatever the working conditions, even during a catastrophe, batteries must remain safe. Lithium-ion battery technology has lately gained traction among designers and engineers because it satisfies several of these consumer demands while remaining cost-efficient. However, as designers push the limits of Li-ion battery technology, several of these requirements now conflict with one another.

While a Li-ion battery charges and discharges, many changes take place within it: in the mechanics of its internal components, in its electrochemistry, and in its internal temperature. The dynamics of these changes also affect the pressure at the interface with the battery housing. Over time, these changes degrade the performance of the battery and, in extreme cases, can lead to potentially dangerous reactions.

Battery designers are now moving towards smart batteries with built-in sensors. They are using piezoresistive force and pressure sensors for analyzing the effects charging and discharging have on the batteries in the long run. They are also embedding these sensors within the battery housing to help alert users to potential battery failures. Designers are using thin, flexible, piezoresistive sensors for capturing relative changes in pressure and force.

Piezoresistive sensors are made of a semiconductive material sandwiched between two thin, flexible polyester films. They are passive elements that act as force-sensitive resistors within an electrical circuit. With no force or pressure applied, the sensors show a high resistance, which drops when a load is applied. In terms of conductance, the response to force is linear as long as the force stays within the sensor's rated range. Designers arrange a network of such sensors in the form of a matrix.
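
Because the sensor's conductance, rather than its resistance, tracks force linearly, a straightforward conversion measures resistance, takes its reciprocal, and applies a two-point calibration. The Python sketch below uses made-up calibration constants purely to show the arithmetic.

    # Convert a piezoresistive sensor reading to force via its (linear) conductance.
    # The calibration constants are hypothetical; real values come from loading the
    # sensor with known reference weights.
    G_OFFSET = 1.0e-6   # S, conductance with no load applied
    G_SLOPE = 2.0e-6    # S per newton, from a two-point calibration

    def force_from_resistance(resistance_ohm):
        """Estimate applied force (N) from measured sensor resistance (ohms)."""
        conductance = 1.0 / resistance_ohm
        return max(0.0, (conductance - G_OFFSET) / G_SLOPE)

    print(force_from_resistance(100_000))  # light load   -> 4.5 N
    print(force_from_resistance(10_000))   # heavier load -> 49.5 N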

When two surfaces press on the matrix sensor, it sends analog signals to the electronics, which convert them into digital signals. The software displays these signals in real time, showing the activity occurring across the sensing area. The user can thereby track the force, locate the region of peak pressure, and identify the exact moment pressure changes.
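
Conceptually, the electronics scan the matrix row by row, read every column, and hand the software a digitized frame from which total load, peak location, and timing can be derived. The sketch below mimics that software step on an already-digitized frame; the frame values and dimensions are invented for illustration.

    # Process one digitized frame from a matrix sensor: total load and peak node.
    # The raw values below are invented sample data.
    frame = [
        [0,  2,  5,  1],
        [3, 12, 48,  7],   # row 1, column 2 carries the peak in this example
        [1,  6,  9,  2],
    ]

    total = sum(sum(row) for row in frame)
    peak_value, peak_row, peak_col = max(
        (value, r, c)
        for r, row in enumerate(frame)
        for c, value in enumerate(row)
    )
    print(f"Total load (raw units): {total}")
    print(f"Peak {peak_value} at row {peak_row}, column {peak_col}")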

The matrix sensors offer several advantages: roughly 2,000 to 16,000 sensing nodes, element spacing as fine as 0.64 mm, pressure measurement up to 25,000 psi, operating temperatures up to 200 °C, and scanning speeds up to 20 kHz.

Designers also use single-point piezoresistive force sensors for measuring force within a single sensing area. They integrate such sensors with the battery because the sensors are thin and flexible, and a sensor can also function as the feedback element of an operational-amplifier circuit or within a simple voltage divider. Depending on the circuit design, the user can adjust the force range of the sensor by changing its drive voltage and the feedback resistance. This gives the user full control over measurement parameters such as the maximum force range and the resolution within that range. Because piezoresistive force sensors are passive devices with a linear response, they do not require complicated electronics and work with minimal filtering.
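
One common arrangement, assumed here for illustration, drives the sensor from a small negative reference voltage into an inverting op-amp with a fixed feedback resistor, so Vout = -Vdrive x Rf / Rsensor rises linearly with force. The sketch shows how changing the drive voltage or feedback resistance rescales the measurable range; all component values are assumptions.

    # Output of an inverting op-amp stage with the force sensor as the input
    # resistor and a fixed feedback resistor: Vout = -Vdrive * Rf / Rsensor.
    # Component values are assumptions chosen for illustration.
    def amp_output_v(r_sensor_ohm, r_feedback_ohm=100_000.0, v_drive=-1.0):
        return -v_drive * r_feedback_ohm / r_sensor_ohm

    # Lower sensor resistance (more force) -> larger output voltage.
    for r_sensor in (1_000_000.0, 200_000.0, 50_000.0):
        print(f"Rsensor = {r_sensor:>9.0f} ohm -> Vout = {amp_output_v(r_sensor):.2f} V")

    # Halving the feedback resistor (or the drive voltage) halves the gain,
    # which doubles the force range that fits within the ADC's full scale.
    print(amp_output_v(50_000.0, r_feedback_ohm=50_000.0))  # 1.00 V instead of 2.00 V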

Using RTDs for Measuring Temperature

Many industrial automation, medical equipment, instrumentation, and other applications require temperature measurement for monitoring environmental conditions, correcting system drift, or achieving high precision and accuracy. Many temperature sensors are available, such as electronic bandgap sensors, thermistors, thermocouples, and resistance temperature detectors (RTDs).

The selection of the temperature sensor depends on the temperature range to be measured and the accuracy desired, and the design of the thermometer also depends on these factors. For instance, RTDs provide an excellent means of measuring temperature when the range is within -200 °C to +850 °C. RTDs also offer very good stability and high measurement accuracy.

The electronics associated with using RTDs as high-accuracy, stable temperature sensors must meet certain criteria. Because an RTD is a passive device, it does not produce an electrical signal on its own. The electronics must therefore provide the RTD with an excitation current to measure its resistance: a small but steady current passes through the sensor, generating a voltage across it.
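
In practice, the front end drives a known excitation current through the RTD, digitizes the voltage across it, and recovers the resistance as R = V / I, keeping the current small enough that self-heating does not skew the reading. The sketch below assumes a hypothetical 1 mA source and an ideal measurement.

    # Recover RTD resistance from a measured voltage and a known excitation current.
    I_EXCITATION = 1.0e-3   # A, assumed 1 mA current source

    def rtd_resistance_ohm(v_measured):
        """R = V / I for the voltage measured directly across the RTD element."""
        return v_measured / I_EXCITATION

    v_rtd = 0.1385  # V, roughly what a PT100 produces near 100 °C at 1 mA
    r_rtd = rtd_resistance_ohm(v_rtd)
    print(f"RTD resistance: {r_rtd:.2f} ohm")

    # Self-heating check: power dissipated in the element should stay tiny.
    print(f"Self-heating power: {I_EXCITATION**2 * r_rtd * 1e6:.1f} uW")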

The design of the electronics also depends on whether a 2-, 3-, or 4-wire sensor is used, a choice that affects the sensitivity and accuracy of the measurement. Furthermore, because the RTD's resistance does not vary linearly with temperature, the electronics must condition the RTD signal and linearize it.
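
The wiring choice matters because the leads add their own resistance: in a 2-wire measurement both lead resistances sit in series with the element, while a 4-wire (Kelvin) connection senses the voltage through separate leads carrying negligible current. The short sketch below quantifies the 2-wire error for an assumed lead resistance.

    # Error introduced by lead resistance in a 2-wire RTD measurement.
    R_LEAD = 0.5        # ohm per lead, an assumed value for a few metres of cable
    r_element = 100.0   # ohm, PT100 at 0 °C

    r_seen_2wire = r_element + 2 * R_LEAD   # both leads add in series
    r_seen_4wire = r_element                # Kelvin sensing removes the lead drops

    error_ohm = r_seen_2wire - r_element
    error_deg_c = error_ohm / 0.385         # PT100 sensitivity of ~0.385 ohm/°C
    print(f"2-wire apparent resistance: {r_seen_2wire:.2f} ohm")
    print(f"4-wire apparent resistance: {r_seen_4wire:.2f} ohm")
    print(f"Equivalent 2-wire temperature error: +{error_deg_c:.1f} °C")  # ~+2.6 °C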

RTDs in common use are mostly made of platinum, sold commercially as PT100 and PT1000. These are available in 2-wire, 3-wire, and 4-wire configurations. Platinum RTDs come in two constructions: wire-wound and thin-film. Other RTD types are made from copper and nickel.

When an RTD is used as a temperature sensor, its resistance varies as a function of temperature, and not in a linear manner; the variation is, however, very precise. To linearize the output of the RTD, the electronics must apply a standardizing curve, and the most common standardizing curve for RTDs is the DIN curve. This curve defines the resistance-versus-temperature characteristics of the RTD sensor and its tolerance within the operating temperature range.
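
For platinum RTDs, the DIN/IEC 60751 curve is expressed through the Callendar-Van Dusen equation, and firmware commonly inverts it to turn a measured resistance into a temperature. The sketch below handles the branch at and above 0 °C, where R(t) = R0(1 + At + Bt^2) and the inversion is a quadratic; below 0 °C an extra term applies and a table or iterative solution is typical.

    import math

    # Convert PT100/PT1000 resistance to temperature for t >= 0 °C using the
    # IEC 60751 (DIN) Callendar-Van Dusen coefficients: R(t) = R0*(1 + A*t + B*t^2).
    A = 3.9083e-3
    B = -5.775e-7

    def rtd_temperature_c(r_measured_ohm, r0_ohm=100.0):
        """Invert the quadratic for the 0 °C to +850 °C branch of the curve."""
        ratio = r_measured_ohm / r0_ohm
        disc = A * A - 4.0 * B * (1.0 - ratio)
        return (-A + math.sqrt(disc)) / (2.0 * B)

    print(f"{rtd_temperature_c(100.00):.2f} °C")                  # 0.00 °C
    print(f"{rtd_temperature_c(138.51):.2f} °C")                  # ~100 °C for a PT100
    print(f"{rtd_temperature_c(1385.1, r0_ohm=1000.0):.2f} °C")   # same for a PT1000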

Using the standardizing curve helps define the accuracy of the sensor, starting with a base resistance at a specific temperature; usually, this is 100 ohms at 0 °C. The DIN RTD standards define several tolerance classes, which apply to all types of platinum RTDs in low-power applications.

The user must select the RTD and its accuracy for the specific application. The temperature range the RTD can cover depends on the element type, and the manufacturer specifies its accuracy at a calibration temperature, usually 0 °C. Therefore, any temperature measured outside the specified temperature range of the RTD will have lower accuracy and a wider tolerance.

The categorization of RTDs depends on their nominal resistance at 0 °C. Therefore, a PT100 sensor has a resistance of 100 ohms at 0 °C, while a PT1000 sensor has a resistance of 1000 ohms at the same temperature. Likewise, the temperature coefficient at 0 °C for a PT100 sensor is 0.385 ohms/°C, while that of the PT1000 is ten times higher, at 3.85 ohms/°C.
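
Near 0 °C, these coefficients give a quick linear estimate of temperature from resistance. The sketch below applies it to both sensor types; it is only a rough check, not a substitute for the full DIN linearization discussed earlier.

    # Rough linear estimate near 0 °C: t ≈ (R - R0) / (dR/dt at 0 °C).
    def approx_temp_c(r_ohm, r0_ohm=100.0, slope_ohm_per_c=0.385):
        return (r_ohm - r0_ohm) / slope_ohm_per_c

    print(f"PT100  at 103.85 ohm -> {approx_temp_c(103.85):.1f} °C")   # 10.0 °C
    print(f"PT1000 at 1038.5 ohm -> "
          f"{approx_temp_c(1038.5, r0_ohm=1000.0, slope_ohm_per_c=3.85):.1f} °C")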

Advanced Materials for Magnetic Silence

High-performing advanced magnetic materials are now available to help handle challenges in hybrid and electric vehicles, specifically those related to conducted and radiated electromagnetic interference. Automotive engineers are encountering new challenges as fully electric vehicles (EVs) and hybrid electric vehicles (HEVs) become more popular.

These challenges are significant enough that engineers now have a dedicated discipline for them: noise, vibration, and harshness (NVH) engineering. Its aim is to minimize NVH to ensure not only the stability of the vehicle but also the comfort of the passengers.

With electric vehicles becoming quieter, several NVH sources that the noise of the internal combustion engine used to mask are now easily discernible. Engineers divide the root causes of NVH problems in electric vehicles into vibration, aerodynamic noise, mechanical noise, and electromagnetic noise.

For instance, cabin comfort is adversely affected by electromagnetic noise from auxiliary systems such as the power-steering motor and the air-conditioning system. This can also interfere with the functioning of other subsystems.

Likewise, there is electromagnetic interference from the high-power traction drive system. This interference produces harmonics of the inverter switching and power supply frequencies, and it also induces electromagnetic noise within the motor itself.

As the battery charges and discharges frequently while the EV is in operation, various kinds of electromagnetic noise, such as radiated noise, common-mode noise, and differential-mode noise, travel along the transmission lines.

All of the above reduce cabin comfort while interfering with the systems that help manage the combustion engine in an HEV.

As with many engineering projects, NVH issues are specific to particular platforms and depend on the design of several structural components, the location of subsystems relative to one another, and the design of isolating bushes and mountings. Engineers must deal with most EMI-related NVH issues by applying electrical-engineering best practices to attenuate high-frequency conducted and radiated interference as it couples onto cables and reaches various subsystems. They use cable ferrites to prevent long wires from acting as pickups or radiating antennas, and inline common-mode chokes to attenuate EMI that enters signal and power lines by conduction.
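
As a rough sizing aid, a common-mode choke's inductance follows L = AL x N^2 from the core's inductance factor, and its impedance magnitude at a given noise frequency is roughly |Z| = 2*pi*f*L while the core still behaves inductively (real ferrite and nanocrystalline cores turn lossy and resistive at higher frequencies). The values in the sketch below are assumptions for illustration, not data for any specific part.

    import math

    # First-order common-mode choke estimate: L = A_L * N^2, |Z| ~ 2*pi*f*L.
    # A_L and the turn count are assumed values, not a specific core's datasheet figures.
    A_L = 30e-6   # H per turn squared, assumed inductance factor of the core
    TURNS = 3     # number of common-mode turns through the core

    inductance_h = A_L * TURNS**2   # 270 uH in this example
    for freq_hz in (150e3, 1e6, 10e6):
        z_ohm = 2 * math.pi * freq_hz * inductance_h
        print(f"{freq_hz / 1e6:>5.2f} MHz -> |Z| ~ {z_ohm:,.0f} ohm")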

For automotive applications, such cable chokes and ferrites must meet exacting criteria.  Major constraints for these components are their weight and size. Common-mode chokes must provide noise suppression through excellent attenuation properties while using a small physical volume. Additionally, they need to suppress broadband noise up to high operating temperatures, while maintaining high electrical and mechanical stress resistance.

To support manufacturing, for example by maintaining high levels of productivity, the components must also be robust and easy to handle on assembly lines. This ensures each unit reaches customers in perfect condition. New materials meet the above requirements while offering enhanced characteristics.

This new class of materials, nanocrystalline cores, is classified by engineers as metallic, and it helps eliminate low-frequency electromagnetic noise. Cable ferrites and choke cores made of these materials are much smaller than those made from conventional materials such as ceramic ferrites. They also deliver superior magnetic performance, presenting a viable solution for challenging automotive and e-NVH issues.