Author Archives: P J

Standard Connectors for EV Charging

With EVs, or electric vehicles, becoming a trend for both individuals and commercial operations, more people are opting for them for commuting to work and school and for moving around town. EVs offer tax benefits, and they also reduce our dependence on fossil fuels. Moreover, with battery technology maturing, EV performance is now comparable to that of vehicles with traditional internal combustion engines.

With the increasing number of EVs in use, their fundamental and foremost requirement is charging the battery. This has spurred the growth of electric vehicle charging stations. Manufacturers of electric vehicles produce a range of vehicles based on their own design specifications. However, charging devices need a uniform design so that any make or model of electric vehicle can hook up for charging. At present, there are two categories of electric vehicle chargers—Level 1 and Level 2.

Level 1 chargers come with the vehicle. They have adapters that the user can plug into a standard 120-Volt mains outlet, which makes them suitable for charging at home.

Level 2 chargers are standalone types, separate from the electric vehicle. They have adapters to plug into a 240-Volt outlet. These chargers are typically available in offices, parking garages, grocery stores, and other such locations. Homeowners may also purchase Level 2 chargers separately.

To allow any make or model of EV to connect to any Level 2 charger, both the EV and the charger must use a standard connector. At present, the standard connector for Level 2 chargers is the SAE J1772. All the latest electric vehicles using plug-in charging use the standard SAE J1772 plug, while the charger connectors use the matching SAE J1772 adapters. These are also known as J plugs. J1772_201710 is the most current revision of the J plug specification.

While SAE was originally an acronym for the Society of Automotive Engineers, the organization is now known as SAE International. It often comes up with recommended practices that the entire automobile industry accepts as standards. With the use of the standard SAE J1772 plug, a customer purchasing an electric vehicle from any manufacturer can charge it using the same charging connector. Public electric charging stations also use SAE J1772 chargers, and these are compatible with the plugs in most vehicles from different manufacturers.

Each SAE J1772 charger has a standard coupler control system consisting of AC and DC residual current detectors, an off-board AC to DC high power stage, an auxiliary power stage, an isolation monitor unit, a two-way communication system over a single wire, contactors, relays, service and user interface, and an energy metering unit. Charging stations with J1772 connectors use a cable for charging the electric vehicle, and the rating of this cable is EVJE for 300 Volts or EVE for 600 Volts.
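The two-way communication mentioned above runs over the J1772 control pilot line, where the charging station advertises its available current as the duty cycle of a 1 kHz PWM signal. A minimal sketch of the commonly documented duty-cycle-to-current mapping (a simplified illustration, not a complete pilot-state implementation):

```python
def pilot_duty_to_amps(duty_percent):
    """Convert a J1772 control-pilot PWM duty cycle (in percent)
    to the advertised available charging current in amperes."""
    if 10 <= duty_percent <= 85:
        return duty_percent * 0.6          # standard range
    if 85 < duty_percent <= 96:
        return (duty_percent - 64) * 2.5   # high-current range
    raise ValueError("duty cycle outside the valid J1772 range")

# A 50% duty cycle advertises 30 A; 90% advertises 65 A.
print(pilot_duty_to_amps(50))
print(pilot_duty_to_amps(90))
```

The vehicle, in turn, signals its state (connected, ready, charging) by loading the pilot line to defined voltage levels, which this sketch does not model.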

The EVJE/EVE cable consists of a thermoplastic elastomer jacket and insulation around a center conductor made of copper. The cable usually has two conductors of 18 AWG wire, one conductor of 10 AWG, and another conductor of 16 AWG.

Double-Sided Cooling for MOSFETs

Emission regulations for the automotive industry are increasingly tightening. To meet these demands, the industry is moving rapidly towards the electrification of vehicles. Primarily, they are making use of batteries and electric motors for the purpose. However, they also must use power electronics for controlling the performance of hybrid and electric vehicles.

In this context, European companies are leading the way with their innovative technologies. This is especially so in the development of power components and modules, and specifically in the compound semiconductor materials field.

ICs used for handling electrical power are now increasingly using gallium nitride (GaN) and silicon carbide (SiC). Most of these devices are wide-bandgap devices, and they work at high temperatures and voltages with the high efficiency that is typically demanded of them in automotive applications.

Silicon carbide is particularly appealing to the automotive industry because of its physical properties. While silicon can withstand an electrical field of 0.3 MV/cm before it breaks down, SiC can withstand 2.8 MV/cm. Additionally, SiC offers an internal resistance 100 times lower than that of silicon. These parameters imply that a smaller chip of SiC can handle the same level of current while operating at a higher voltage, allowing systems made of SiC to be smaller.

Apart from functioning more efficiently at elevated temperatures, a full SiC MOSFET module can reduce switching losses by 64%, when operating at a chip temperature of 125 °C. Power control units for controlling traction motors in hybrid electric vehicles must operate from engine compartments, and this places additional thermal loads on them.

Manufacturers are now exploring various solutions for improving the efficiency, durability, and reliability of SiC MOSFETs under the above operating conditions. One of these is to reduce the amount of wire bonding by using double-sided cooling structures. This cools the power semiconductor chips more effectively. Therefore, overmolded modules with double side cooling are rapidly becoming more popular, especially for mid-power and low-cost applications.

Researchers at North Carolina State University have developed a prototype inverter using SiC MOSFETs that can transfer 99% of the input energy to the motor. This is about 2% higher than silicon-based inverters under regular conditions.

While an electric vehicle inverter could achieve only 4.1 kW/L in 2010, new SiC-based inverters can deliver about 12.1 kW/L of power. This is very close to the goal of 13.4 kW/L that the US Department of Energy set for inverters to achieve by 2020.

The new power component, with its double-sided cooling, dissipates heat more effectively than earlier versions. These double-sided, air-cooled inverters can operate up to 35 kW, eliminating the need for heavy and bulky liquid cooling systems.

The power modules use FREEDM Power Chip on Bus MOSFET devices to reduce parasitic inductance. The integrated power interconnect structure helps achieve this. With the power chips attached directly to the busbar, their thermal performance improves further. Air, as a dielectric fluid, provides the necessary electrical isolation, while the busbar also doubles as an integrated heatsink. Thermal resistance for the power module can reach about 0.5 °C/W.
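A thermal resistance of about 0.5 °C/W translates directly into chip temperature via ΔT = P × Rθ. A quick sketch with assumed numbers (the 100 W loss figure and 40 °C ambient are illustrative, not from the research):

```python
def junction_temp(p_loss_w, r_theta_c_per_w, t_ambient_c):
    """Estimate chip temperature from dissipated power and the
    junction-to-ambient thermal resistance: T = Ta + P * Rtheta."""
    return t_ambient_c + p_loss_w * r_theta_c_per_w

# 100 W of losses through a 0.5 C/W path at 40 C ambient -> 90 C
print(junction_temp(100, 0.5, 40))
```

Lower thermal resistance from double-sided cooling means more loss can be dissipated before the chip approaches its temperature limit.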

Smart Batteries with Sensors

Quick-charging batteries are in vogue now. Consumers are demanding more compact, quick-charging, lightweight, high-energy-density batteries for all types of electronic devices, including high-efficiency vehicles. Whatever the working conditions, even during a catastrophe, batteries must remain safe. Of late, Lithium-ion battery technology has gained traction among designers and engineers, as it satisfies several consumer demands while remaining cost-efficient. However, with designers pushing the limits of Li-ion battery technology, several of these requirements now conflict with one another.

While a Li-ion battery charges and discharges, many changes take place within it: in the mechanics of its internal components, in its electrochemistry, and in its internal temperature. The dynamics of these changes also affect the pressure at its interface with the battery housing. Over time, these changes degrade the performance of the battery, and in extreme cases, can lead to reactions that are potentially dangerous.

Battery designers are now moving towards smart batteries with built-in sensors. They are using piezoresistive force and pressure sensors for analyzing the effects charging and discharging have on the batteries in the long run. They are also embedding these sensors within the battery housing to help alert users to potential battery failures. Designers are using thin, flexible, piezoresistive sensors for capturing relative changes in pressure and force.

Piezoresistive sensors are made of semi-conductive material sandwiched between two thin, flexible polyester films. These are passive elements acting as force-sensitive resistors within an electrical circuit. With no force or pressure applied, the sensors show a high resistance, which drops when the sensor has a load. With respect to conductance, the response to a force is a linear one as long as the force is within the range of the sensor’s capabilities. Designers arrange a network of sensors in the form of a matrix.

When two surfaces press on the matrix sensor, it sends analog signals to the electronics, which converts them into a digital signal. The software displays this signal in real-time to show the activity occurring across the sensing area. The user can thereby track the force, locate the region undergoing peak pressure, and identify the exact moment of pressure changes.

The matrix sensors offer several advantages: about 2,000 to 16,000 sensing nodes, element spacing as low as 0.64 mm, pressure measurement up to 25,000 psi, operating temperatures up to 200 °C, and scanning speeds up to 20 kHz.

Designers also use single-point piezoresistive force sensors for measuring force within a single sensing area. They integrate such sensors with the battery as they are thin and flexible, and the sensors can also function as the feedback element of an operational amplifier circuit in the form of a voltage divider. Depending on the circuit design, the user can adjust the force range of the sensor by changing its drive voltage and the feedback resistance. This gives the user complete control over measurement parameters such as the maximum force range and the measurement resolution within that range. As piezoresistive force sensors are passive devices with a linear response, they do not require complicated electronics and work with minimal filtering.
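As a rough sketch of the op-amp arrangement described above, assume an inverting stage with the piezoresistive sensor as the input resistor, so the output voltage tracks sensor conductance (all component values here are hypothetical):

```python
def fsr_output_voltage(v_drive, r_feedback, r_sensor):
    """Output of an inverting op-amp stage with the piezoresistive
    sensor as the input resistor: Vout = -Vdrive * Rf / Rsensor.
    Because sensor conductance (1/Rsensor) is roughly linear in
    applied force, Vout is roughly linear in force as well."""
    return -v_drive * r_feedback / r_sensor

# Hypothetical values: -1 V drive, 10 kohm feedback resistor.
# Sensor resistance drops as force rises, so Vout rises.
for r_sensor in (100e3, 50e3, 25e3):
    print(fsr_output_voltage(-1.0, 10e3, r_sensor))  # ~0.1, 0.2, 0.4 V
```

Raising the feedback resistance or the drive voltage increases sensitivity at the cost of a smaller measurable force range, which is the adjustment the text describes.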

Using RTDs for Measuring Temperature

Many industrial automation, medical equipment, instrumentation, and other applications require temperature measurement for monitoring environmental conditions, correcting system drift, or achieving high precision and accuracy. Many temperature sensors are available, such as electronic bandgap sensors, thermistors, thermocouples, and resistance temperature detectors or RTDs.

The selection of the temperature sensor depends on the temperature range to be measured and the accuracy desired. The design of the thermometer also depends on these factors. For instance, RTDs provide an excellent means of measuring the temperature when the range is within -200 °C to +850 °C. RTDs also have very good stability and high accuracy of measurement.

The electronics associated with using RTDs as temperature sensors with high accuracy and good stability must meet certain criteria. As an RTD is a passive device, it does not produce any electrical signal output on its own. The electronics must provide the RTD with an excitation current for measuring its resistance. This requires a small but steady electrical current passing through the sensor for generating a voltage across it.
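As a small illustration of the excitation arrangement, assume a 1 mA current source driving a PT100 at 0 °C (both values chosen for illustration). The excitation current sets the measurable voltage, but it also dissipates power in the sensor, so it must stay small to avoid self-heating errors:

```python
def rtd_voltage_mv(i_excite_ma, r_ohms):
    """Voltage developed across the RTD by the excitation
    current (mA * ohm = mV)."""
    return i_excite_ma * r_ohms

def self_heating_uw(i_excite_ma, r_ohms):
    """Power dissipated in the sensor, I^2 * R (mA^2 * ohm = uW);
    keep this small so self-heating does not bias the reading."""
    return (i_excite_ma ** 2) * r_ohms

# 1 mA through a PT100 (100 ohms at 0 degC):
print(rtd_voltage_mv(1.0, 100.0))   # 100 mV measurable signal
print(self_heating_uw(1.0, 100.0))  # 100 uW dissipated in the sensor
```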

The design of the electronics also depends on whether the design is using a 2-, 3-, or 4-wire sensor. This decision affects the sensitivity and accuracy of the measurement. Furthermore, as the variation of resistance of the RTD with temperature is not linear, the electronics must condition the RTD signal and linearize it.

RTDs in common use are mostly made of platinum, and their commercial names are PT100 and PT1000. These are available in 2-wire, 3-wire, and 4-wire configurations. Platinum RTDs are available in two shapes—wire wound and thin-film. Other RTD types available are made from copper and nickel.

When using an RTD as a temperature sensor, its resistance varies as a function of temperature, though not in a linear manner. However, the variation is very precise. To linearize the output of the RTD, the electronics must apply a standardizing curve; the most common standardizing curve for RTDs is the DIN curve. This curve defines the resistance-versus-temperature characteristics of the RTD sensor and its tolerance within the operating temperature range.

Using the standardizing curve helps define the accuracy of the sensor, starting with a base resistance at a specific temperature. Usually, this resistance is 100 ohms at 0 °C. DIN RTD standards have many tolerance classes, which are applicable to all types of platinum RTDs in low power applications.

The user must select the RTD and its accuracy for the specific application. The temperature range the RTD can cover depends on the element type. The manufacturer denotes its accuracy at calibration temperature, usually at 0 °C. Therefore, any temperature measured below or above the specified temperature range of the RTD will have lower accuracy and a wider tolerance.

The categorization of RTDs depends on their nominal resistance at 0 °C. Therefore, a PT100 sensor at 0 °C has a resistance of 100 ohms, while at the same temperature a PT1000 sensor has a resistance of 1000 ohms. Likewise, the temperature coefficient at 0 °C for a PT100 sensor is 0.385 ohms/°C, while that of the PT1000 is ten times higher at the same temperature.
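The DIN standardizing curve mentioned above can be sketched with the published Callendar-Van Dusen coefficients from IEC 60751. This minimal version covers only temperatures at or above 0 °C; real designs also handle the below-zero branch, which adds a cubic term:

```python
# IEC 60751 (DIN) Callendar-Van Dusen coefficients for platinum RTDs
A = 3.9083e-3   # 1/degC
B = -5.775e-7   # 1/degC^2

def pt_resistance(t_c, r0=100.0):
    """Resistance of a platinum RTD for t_c >= 0 degC.
    r0 is the nominal resistance at 0 degC: 100 ohms for a
    PT100, 1000 ohms for a PT1000."""
    return r0 * (1 + A * t_c + B * t_c ** 2)

print(pt_resistance(0))              # 100.0 ohms (PT100 at 0 degC)
print(pt_resistance(0, r0=1000.0))   # 1000.0 ohms (PT1000 at 0 degC)
print(round(pt_resistance(100), 2))  # ~138.51 ohms at 100 degC
```

Inverting this curve (or using a lookup table derived from it) is how the electronics converts a measured resistance back into a linear temperature reading.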

Advanced Materials for Magnetic Silence

High-performing advanced magnetic materials are now available that help handle challenges in hybrid/electric vehicles. These are challenges related to conducted and radiated electromagnetic interference. Automotive engineers are encountering newer challenges as fully electric vehicles or EVs and hybrid electric vehicles or HEVs become more popular.

The above challenges are so involved that engineers now have a dedicated discipline for them: noise, vibration, and harshness or NVH engineering. Their aim is to minimize NVH for ensuring not only the stability of the vehicle but also the comfort of its passengers.

With electric vehicles becoming quieter, several NVH sources that the noise of the internal combustion engine used to mask are now easily discernible. Engineers divide the root causes of NVH problems in electric vehicles into vibration, aerodynamic noise, mechanical noise, and electromagnetic noise.

For instance, cabin comfort is adversely affected by electromagnetic noise from auxiliary systems such as the power-steering motor and the air-conditioning system. This can also interfere with the functioning of other subsystems.

Likewise, there is electromagnetic interference from the high-power traction drive system. This interference produces harmonics of the inverter switching and power supply frequencies. Moreover, the interference also induces electromagnetic noise within the motor as well.

With the battery frequently charging and discharging while the EV is in operation, various kinds of electromagnetic noise, such as radiated noise, common-mode noise, and differential noise, move through the transmission lines.

All of the above reduce cabin comfort while also interfering with the systems that manage the combustion engine in an HEV.

As with many engineering projects, NVH issues are specific to particular platforms and depend on the design of several structural components, the location of subsystems relative to one another, and the design of isolating bushes and mountings. Engineers must deal with most NVH issues related to EMI by applying best practices in electrical engineering for attenuating high-frequency conducted and radiated interference as it couples onto cables and reaches various subsystems. Engineers use cable ferrites for preventing long wires from acting as pickups or radiating aerials. They also use inline common-mode chokes for attenuating EMI entering signal and power lines by conduction.
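As a rough feel for how a common-mode choke attenuates conducted noise, one can model an ideal choke as a pure series inductance measured in a matched 50-ohm setup. The 1 mH value and the matched-system assumption are illustrative only; real chokes have lossy, frequency-dependent impedance:

```python
import math

def choke_insertion_loss_db(freq_hz, l_henry, z0_ohms=50.0):
    """Insertion loss of a series impedance Z in a matched system:
    IL = 20 * log10(|1 + Z / (2 * Z0)|), with Z = j*2*pi*f*L for
    an ideal inductor."""
    z_mag = 2 * math.pi * freq_hz * l_henry   # magnitude of jwL
    return 20 * math.log10(abs(complex(1, z_mag / (2 * z0_ohms))))

# Hypothetical 1 mH common-mode inductance, 50-ohm measurement setup:
for f in (10e3, 100e3, 1e6):
    print(f, round(choke_insertion_loss_db(f, 1e-3), 1))
# attenuation grows with frequency: about 1.4, 16.1, and 36.0 dB
```

The point of nanocrystalline cores, discussed below, is to pack more of this inductive impedance, down to lower frequencies, into a smaller and lighter core.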

For automotive applications, such cable chokes and ferrites must meet exacting criteria.  Major constraints for these components are their weight and size. Common-mode chokes must provide noise suppression through excellent attenuation properties while using a small physical volume. Additionally, they need to suppress broadband noise up to high operating temperatures, while maintaining high electrical and mechanical stress resistance.

To maintain high levels of productivity in manufacturing, there are further requirements for robustness and easy handling on assembly lines. This ensures each unit reaches customers in perfect condition. New materials meet the above requirements while offering enhanced characteristics.

The new class of materials is nanocrystalline cores, which engineers classify as metals; they help eliminate low-frequency electromagnetic noise. Cable ferrites and choke cores made of these materials are much smaller than those made from conventional materials like ceramic ferrites. They also deliver superior magnetic performance, presenting a viable solution for challenging automotive e-NVH issues.

New Battery Technology for UPS

Most people know of Lithium-ion battery technology mainly due to its overwhelming presence in mobile phones. Those who use uninterruptible power supplies for backing up their systems are familiar with lead-acid cells and the newer lithium-ion cells. Another alternative technology is also coming up, mainly for mission-critical facilities such as data centers. This is Nickel-Zinc technology, and it has better trade-offs to offer.

But Nickel-Zinc battery technology is not new. In fact, Thomas Edison patented it about 120 years ago. In its current avatar, the Nickel-Zinc battery offers superior performance when used in UPS backup systems. It offers better power density, is more reliable and safe, and is highly sustainable.

For instance, higher power density translates into smaller weight and size. This is the major difference between a battery providing energy and a battery providing power. In a data center, the UPS must discharge fast for a short period for maintaining operational continuity. This is what happens during brief outages, or until the backup generators spin up to take over the load. This is the most basic power battery operation, where the battery must deliver a high rate of discharge, and it does so with a small footprint.

On the other hand, Lead-acid and Lithium-ion technologies offer energy batteries. Their design allows them to discharge energy at a lower rate for longer periods. Electric vehicles utilize this feature, and the automotive industry is spending top dollars for increasing the energy density of such EV batteries so that the user can get more mileage or range from their vehicles. This is not very useful for data center backup, as the battery must have a higher energy storage footprint for supporting short duration high power output requirements.

This is where the Nickel-Zinc battery technology comes in. With an energy density nearly twice that of a Lead-acid battery, Nickel-Zinc batteries take up only half the space. Not only is the footprint reduced by half, but the weight also reduces by half for the same power output. As compared to Lithium-ion batteries, Nickel-Zinc batteries not only excel in footprint reduction, but they charge at a faster rate while retaining thermal stability. This feature makes them so useful for mission-critical facility uptime.

Nickel-Zinc batteries have proven their reliability as well. They have clocked over tens of millions of operating hours for providing uninterrupted backup power in mission-critical applications. Another feature very useful for data center operations is the battery string operations of the Nickel-Zinc technology.

When a Lithium-ion or a Lead-acid battery fails, the battery acts as an open circuit, preventing other batteries in the string from transferring power. On the other hand, a weak or failed Nickel-Zinc cell remains conductive, allowing the rest of the string to continue operating, albeit at a lower voltage. In emergency situations, this feature of the Nickel-Zinc battery is extremely helpful, as the faulty battery replacement can proceed with no operational impact and at a low cost.

In parallel operation also, Nickel-Zinc batteries are more tolerant of string imbalances, thereby maintaining constant power output at significantly lower states of health and charge as compared to batteries of other technologies.

Power Transmission Through Lasers

Wireless power transfer has considerable advantages. The absence of transmission towers, overhead cables, and underground cables is foremost among them, not to mention the expenses saved in their installation, upkeep, and maintenance. However, one of the major hurdles for wireless power transmission is the range it can cover. But now, Ericsson and PowerLight Technologies have demonstrated a new proof-of-concept project that uses a laser beam to transmit power optically to a portable 5G base station.

Wireless power transmission is not a new subject to many. People use wireless power for charging many devices like earbuds, watches, and phones. But the range in these chargers is short, as the user must place the device on the pad of the charger. This limits the usefulness of the wireless charging station for transmitting power. Although labs have been experimenting with larger setups that can charge devices placed anywhere within a room, reports of beaming electricity outdoors and for long distances have been rather scarce.

PowerLight has been experimenting with wireless power transfer for quite some time now, and they have partnered with Ericsson, a telecommunications company, for a proof-of-concept demonstration. Their system consists of two components: a laser transmitter and a receiver. The distance between the transmitter and receiver can vary from a few hundred meters to a few thousand meters.

However, unlike a Tesla coil, the PowerLight device does not transmit electricity directly. Instead, at the transmitter end, electricity powers a powerful laser beam, sending it directly to the receiver. In turn, the receiver uses specialized photocell arrays to convert the incoming laser back into electricity for powering connected devices.

Such a powerful laser blasting through the open air can be dangerous. Therefore, PowerLight has added many safeguards. They surround the beam with wide cylinders of sensors that can detect anything approaching. The sensors can shut off the beam within a millisecond if necessary. In fact, the safety system is so fast that a flock of birds flying through the laser beam is not harmed, although there is a brief interruption at the receiver. To bridge such fleeting interruptions, and to cover longer-term disruptions as well, the PowerLight system has a battery backup at the receiver end.

PowerLight is using the system to power a 5G radio base station from Ericsson that has no other power source connected to it. The base station received 480 watts from a transmitter placed 300 m away. However, according to the PowerLight team, the technology can send 1,000 watts over a distance of more than 1 km. They also claim there is room for future expansion.

Wirelessly powering these 5G units could make them more versatile, as they will then become portable, and capable of operating in temporary locations. This will also allow them to operate in disaster areas, where there has been a disruption of infrastructure.

According to PowerLight, their optical power beaming technology may be useful in several other applications also, such as for charging electric vehicles, in future space missions, and in adjusting the power grid operations on the fly.

Where Do You Use Encoders?

All kinds of mechanical systems use a critical component commonly known as an encoder. Large industrial machines performing delicate work, high-precision prototyping, or repeatable tasks use encoders predominantly. Production of advanced electronics also requires the use of encoders. Encoders can be linear, angle, or rotary and the electronics sector uses them in some form or the other. Semiconductor fabrication, with its small components and work areas, requires encoders of the highest resolution and accuracy.

Production of electronics often uses vacuum environments with unique ventilation. These environments require special types of encoders, including linear and angle types made specifically to operate with the temperature and gaseous conditions prevalent in vacuum environments.

CNC machines must maintain their accuracy and position even when operating with heavy spindles and workpieces, high speeds, and multi-axis movements. All the components need to work together for accurate milling, drilling, and boring. Encoders play an important role in the synchronous working of CNC machines. For instance, custom linear encoders guide the travel of the axes of a milling machine.

At present, the automation industry is striding ahead rapidly and requires capable encoders. Strausak, a grinding machine company, makes robotic arms used universally in manufacturing environments. Unmanned mechanical systems must rely on the accurate and consistent measurement and motion that encoders provide.

Automated transportation, such as high-speed trains in Sweden, depends on custom-made absolute encoders. These encoders operate a redundant system for automatically controlling the speed and braking of the train when necessary.

The medical industry requires precision and accuracy along with safety for testing and treating the human body while developing new procedures in the lab. CT and MRI scanning machinery use exposed linear and rotary encoders for precision imaging and maintaining patient safety. Precision angular and linear encoder technology helps guide radiation therapy, leaving no room for error.

For instance, GammaPod, the most advanced breast cancer treatment in the world, depends on absolute rotary encoders for operating its stereotactic radiotherapy system. The medical industry depends on encoders predominantly because of the precision necessary for safely and accurately testing and treating the human body.

Robotics often uses articulating arms for picking and placing objects and equipment in manufacturing plants. They also use mobile, guided, and automated robots, which, in turn, require encoders for their proper functioning. For instance, encoders provide automated systems with the necessary and effective position and speed feedback for allowing them to function with minimum human intervention. Robotics often uses low-profile encoders that can fit inside small robotic arms.

All types of encoders are available for serving the general purpose of measuring motion and providing signaling feedback. However, their capabilities, configurations, and applications vary significantly and widely. In every facet of life, encoders play a significant role. This is especially applicable in the industrial and technological world, where safety, accuracy, and precision are important parameters to uphold.

Knowledge of the encoder transfer function is important for selecting the proper resolution of incremental optical encoders and for tuning the regulator to the speed and torque of the application. The implementation of a proper control loop impacts the stability and performance of the application.
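As an illustration of how encoder resolution feeds the control loop, a minimal sketch converts incremental quadrature counts into shaft speed (the 1024-line encoder and sample values are hypothetical):

```python
def encoder_speed_rpm(delta_counts, dt_s, lines_per_rev):
    """Shaft speed from an incremental quadrature encoder.
    With x4 (quadrature) decoding, counts per revolution is
    4 * lines_per_rev; speed is revolutions per sample window
    scaled to rpm."""
    counts_per_rev = 4 * lines_per_rev
    revs = delta_counts / counts_per_rev
    return revs / dt_s * 60.0

# Hypothetical 1024-line encoder: 4096 counts in 0.1 s is one
# full revolution per 0.1 s, i.e. 600 rpm.
print(encoder_speed_rpm(4096, 0.1, 1024))
```

Higher line counts give finer speed resolution per sample window, which is why slow, high-precision axes demand higher-resolution encoders than fast spindles do.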