
E-Fuse Future Power Protection

High-voltage eMobility applications are on the rise. Traditionally, fuses are non-resettable, and sometimes mechanical relays or contactors are used instead. However, that is now changing: semiconductor-based resettable fuses, or eFuses, are replacing traditional fuses.

These innovative eFuses represent a significant trend in safeguarding hardware and users in high-voltage, high-power scenarios. Vishay has announced a reference design for an eFuse that can handle high power loads. The new eFuse is equipped with SiC MOSFETs and a VOA300 optocoupler, a combination that can handle up to 40 kW of continuous power. The design operates at full power with losses of less than 30 W, without active cooling. The eFuse incorporates essential features like continuous current monitoring, a preload function, and rapid overcurrent protection.

Vishay has designed the eFuse to manage the safe connection and disconnection of a high-voltage power source. For instance, the eFuse can safely connect various vehicle loads to, or disconnect them from, a high-energy battery pack. The eFuse uses SiC MOSFETs as its primary switches, and these are capable of continuous operation up to 100 A. The user can predefine a current limit; when the current exceeds this limit, the eFuse rapidly disconnects the load from the power source, safeguarding the user and the power source or battery pack. In addition, a short circuit or an excessive load capacitance during power-up causes the eFuse to initiate an immediate shutdown.
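To make that sequence concrete, here is a minimal sketch of the kind of supervision loop an eFuse performs. It is purely illustrative and not Vishay's firmware; read_shunt_current() and set_gate() are hypothetical helpers standing in for real hardware access.

```python
# Minimal, illustrative sketch of an eFuse supervision loop (not Vishay's actual firmware).
# read_shunt_current() and set_gate() are hypothetical stand-ins for real hardware access.
import time

CURRENT_LIMIT_A = 100.0     # user-defined trip threshold, amperes
SAMPLE_PERIOD_S = 0.0001    # assumed 100 µs monitoring interval

def efuse_loop(read_shunt_current, set_gate):
    """Connect the load, monitor the current, and disconnect on overcurrent."""
    set_gate(True)                           # close the SiC MOSFET switch: load connected
    while True:
        current = read_shunt_current()       # amperes, measured via the shunt resistor
        if abs(current) > CURRENT_LIMIT_A:   # abs() covers current flow in either direction
            set_gate(False)                  # open the switch: load disconnected
            return "tripped"
        time.sleep(SAMPLE_PERIOD_S)
```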

The basic design of the eFuse is a four-layer printed circuit board (PCB) of 150 mm x 90 mm with components on both sides. Each layer uses 70 µm copper, twice the 35 µm thickness of regular PCBs. Some connectors extend beyond the board's edges. The top side of the PCB carries all the high-voltage circuitry, control buttons, status LEDs, multiple test points, and connectors, while the bottom side carries the low-voltage control circuitry. It is also possible to control the eFuse remotely via a web browser.

To ensure safety, the user must enable the low-voltage power supply first, and only then enable the high-voltage supply on the input. For input voltages exceeding 50 V, an LED indicator lights up on the board. Vishay has added two sets of six SiC MOSFETs, with three connected in parallel, in a back-to-back configuration. This ensures the eFuse can handle current flow in both directions. A current-sensing shunt resistor, the Vishay WSLP3921, monitors the current flowing to the load. Vishay has positioned this shunt resistor strategically between the two parallel sets of MOSFETs.
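The current measurement itself is just Ohm's law across the shunt. The resistance value below is an assumption chosen for the arithmetic; the actual WSLP3921 value used in the reference design may differ.

```python
# Shunt-based current sensing: I = V_shunt / R_shunt.
# The 0.5 mΩ resistance is an assumed example value, not the design's actual part value.
R_SHUNT_OHMS = 0.0005                          # assumed 0.5 mΩ shunt
v_shunt = 0.035                                # example measured voltage drop, volts

current_a = v_shunt / R_SHUNT_OHMS             # 70.0 A flowing to the load
dissipation_w = current_a ** 2 * R_SHUNT_OHMS  # 2.45 W dissipated in the shunt

print(f"Load current ≈ {current_a:.1f} A, shunt dissipation ≈ {dissipation_w:.2f} W")
```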

Vishay has incorporated convenient control options in the eFuse. Users can operate them via the push buttons on the PCB or through the external Vishay MessWEB controller; either way unlocks access to an expanded array of features. Alternatively, the user can integrate the eFuse seamlessly into a CAN bus-based system by using an additional chipset in conjunction with the MessWEB controller. Vishay claims to have successfully tested its reference eFuse design.

What is DFMEA?

If you are just entering the world of design, you will have to face a DFMEA session sooner or later. DFMEA is an acronym for Design Failure Mode and Effects Analysis. In recent years, companies have adopted DFMEA, a subset of FMEA (failure mode and effects analysis), as a valuable tool. It helps engineers spot potential risks in a product design before they make any significant investments.

Engineers use DFMEA as a systematic tool for mapping the early-warning system of a product. They use it to make sure the product not only functions as intended but also keeps users happy. It is like taking a peek into the future, catching design flaws before they cause any major damage. Simply put, DFMEA helps to check the overall design of products and components, figuring out what might go wrong and how to fix it. The tool is particularly useful in manufacturing industries, where preventing failure is critical.

To use DFMEA effectively, the designer must look for potential design failures, observing them from all angles. Here is how they do it.

They first look for a failure mode, which essentially means how the design could possibly fail. For instance, your computer might freeze up when you open too many programs, which is one mode or type of failure.

Then they look for why the failure mode might happen. This could be due to a design defect, or a defect in the quality, system, or application of the part.

Next, the designers look for an effect of the failure. That is, what happens when there is a failure. In our example, a frozen computer can lead to a frustrated user.

In the last stage, designers look for the severity of the failure. They estimate how bad the failure could be for safety, quality, and productivity. Designers typically look for the worst-case scenarios.

To put it in a nutshell, DFMEA helps engineers figure out not only potential issues, but also the consequences of the failures. This way, they can prevent failures from happening in the first place.
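As a loose illustration of how a team might record these four steps, here is a small sketch of a failure-mode register ranked by severity. The data structure and the 1-to-10 severity scale are assumptions made for the example, not a prescribed DFMEA format.

```python
# Illustrative DFMEA-style failure-mode register (assumed format and 1-10 severity scale).
from dataclasses import dataclass

@dataclass
class FailureMode:
    mode: str       # how the design could fail
    cause: str      # why the failure might happen
    effect: str     # what happens when it fails
    severity: int   # worst-case impact, 10 = most severe

register = [
    FailureMode("Computer freezes with many programs open",
                "Inadequate memory management in the design",
                "Frustrated user, lost work", severity=6),
    FailureMode("Power stage overheats at full load",
                "Undersized heat sink", "Shutdown, possible safety hazard", severity=9),
]

# Review the worst-case scenarios first, as the text suggests.
for fm in sorted(register, key=lambda f: f.severity, reverse=True):
    print(f"[severity {fm.severity}] {fm.mode} -> {fm.effect}")
```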

However, DFMEA is never a one-man show; rather, it is a team effort. Typically, the team has about four to six members who are fully knowledgeable about the product, led by a product design engineer. The team may include engineers with a materials background and others from product quality, testing, and analysis, along with people from other departments such as logistics, service, and production.

DFMEA is an essential tool in any design process. However, it is a crucial tool in industries handling new products and technology. This includes industries such as software, healthcare, manufacturing, industrial, defense, aerospace, and automotive. DFMEA helps them locate potential failure modes, reducing risks involved with introducing new technologies and products.

The entire DFMEA exercise is a step-by-step process, and the team must think through each step thoroughly before they move on to the next. It is essential they look for and identify the failure, and find out its consequences, before working out ways to prevent it from happening.

What is Voice UI?

Although we usually talk to other humans, our interactions with inanimate objects have almost always been silent. That changed with the advent of the Voice User Interface, also called Voice UI or VUI. Voice UI has broken this silence between humans and machines. Today, we have virtual assistants and voice-controlled devices like Siri, Google Assistant, Hound, Alexa, and many more. Most people who own a voice-controlled device say it is like talking to another person.

So, what is Voice UI? Voice UI technology makes it possible for humans to interact with a device or an application through voice commands. As we use digital devices more and more, screen fatigue is something we have all experienced, and this has driven the development of the voice user interface. The advantages are numerous, primarily hands-free operation and control over the device or application without having to stare at a screen. Five of the world's leading technology companies, Amazon, Google, Microsoft, Apple, and Facebook, have each developed their own voice-activated AI assistants and voice-controlled devices.

Whether it is a voice-enabled mobile app, an AI assistant, or a voice-controlled device like a smart speaker, voice interactions and interfaces have become incredibly common. For instance, according to one report, 25% of adults in the US own a smart speaker, and 33% of the US population use their voice to search online.

How does this technology work? Under the hood, several artificial intelligence technologies are at work, such as automatic speech recognition, named entity recognition, and speech synthesis. The VUI speech components and the backend infrastructure, backed by these AI technologies, typically reside in a public or private cloud. It is there that the VUI processes the user's speech. After deciphering and translating the user's intent, the AI returns a response to the device.
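In rough outline, the round trip can be pictured as speech recognition, intent and entity extraction, and speech synthesis chained together. The functions below are mocked placeholders, not any particular vendor's API.

```python
# Rough sketch of a voice UI round trip with mocked components (no specific vendor API).

def automatic_speech_recognition(audio: bytes) -> str:
    # A real system would call a cloud ASR service; this mock returns fixed text.
    return "what is the weather in paris"

def extract_intent_and_entities(text: str) -> dict:
    # Crude keyword spotting standing in for intent parsing / named entity recognition.
    intent = "weather" if "weather" in text else "unknown"
    city = "paris" if "paris" in text else None
    return {"intent": intent, "city": city}

def speech_synthesis(text: str) -> bytes:
    # A real system would return synthesized audio; this mock returns encoded text.
    return text.encode("utf-8")

def handle_voice_request(audio: bytes) -> bytes:
    text = automatic_speech_recognition(audio)
    parsed = extract_intent_and_entities(text)
    response = f"Here is the {parsed['intent']} for {parsed['city']}."
    return speech_synthesis(response)

print(handle_voice_request(b"\x00\x01"))   # b'Here is the weather for paris.'
```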

The above is the Voice UI technology in a nutshell. For a better user experience, most companies also include additional sound effects and a graphical user interface. The sound effects and visuals help the user know whether the device is listening, processing, or responding.

Today, Voice UI technology is widespread and available in many everyday devices, including smartphones, desktop computers, laptops, wearables, smartwatches, smart TVs, sound systems, smart speakers, and Internet of Things devices. However, like everything, it has advantages and disadvantages.

First, the advantages. VUI is faster and more convenient than typing commands as text. Not everyone is comfortable typing commands, but almost anyone can use their voice to request a task from a VUI device. Voice commands, being hands-free, are useful while cooking or driving. Moreover, you do not need to face or look at the device to issue voice commands.

Next, the disadvantages. There are privacy concerns, as a person nearby can overhear your commands. AI technology is still in its infancy and is prone to misinterpretation and inaccuracy, especially when differentiating homophones like 'their' and 'there'. Moreover, voice assistants may find it difficult to decipher commands in noisy public places.

What is UWB Technology?

UWB is the acronym for Ultra-Wideband, a 132-year-old communications technology. Engineers are revitalizing this old technology for connecting wireless devices over short distances. Although more modern technologies like Bluetooth are available for the purpose, industry observers are of the opinion that UWB can prove more versatile and successful than Bluetooth. According to them, UWB has superior speed, uses less power, is more secure, provides better device ranging and location discovery, and is cheaper than Bluetooth.

Therefore, companies are researching and investing in UWB technology. These include Xtreme Spectrum, Bosch, Sony, NXP, Xiaomi, Samsung, Huawei, Apple, Time Domain, and Intel. Apple is already using UWB chips in its iPhone 11, which allows it to obtain superior positioning accuracy and ranging from time-of-flight measurements.

Marconi's first man-made radios, based on spark-gap transmitters, used UWB for wireless communication. UWB signals were banned for commercial use in 1920. However, from 1992 onward, the scientific community started paying greater attention to UWB technology.

UWB, or Ultra-Wideband, technology offers a protocol for short-range wireless communications, similar to Wi-Fi or Bluetooth. It uses short-pulse radio waves over a spectrum of frequencies ranging from 3.1 to 10.6 GHz and does not require a license for its applications.

In UWB, the signal bandwidth is 500 MHz or more, or the fractional bandwidth around the center frequency is greater than 20%. Compared to conventional narrowband systems, the very wide bandwidth of UWB signals leads to superior performance indoors. This is because the wide bandwidth offers significantly greater immunity from channel effects in dense environments. It also allows very fine time-space resolution, resulting in highly accurate indoor positioning of UWB devices.
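As a quick numerical check of that definition (the band edges below are arbitrary example values), the fractional bandwidth is twice the bandwidth divided by the sum of the band edges:

```python
# UWB criterion check: bandwidth >= 500 MHz, or fractional bandwidth
# B_frac = 2 * (f_high - f_low) / (f_high + f_low) > 20 %. Band edges are example values.
f_low_hz, f_high_hz = 3.1e9, 4.1e9

bandwidth_hz = f_high_hz - f_low_hz
fractional_bw = 2 * (f_high_hz - f_low_hz) / (f_high_hz + f_low_hz)
is_uwb = bandwidth_hz >= 500e6 or fractional_bw > 0.20

print(f"Bandwidth {bandwidth_hz/1e6:.0f} MHz, fractional {fractional_bw:.1%}, UWB: {is_uwb}")
# -> Bandwidth 1000 MHz, fractional 27.8%, UWB: True
```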

As its spectral density is low, often below the environmental noise floor, UWB ensures communications security with a low probability of signal detection. UWB allows transmission at high data rates over short distances. Moreover, UWB systems can comfortably co-exist with narrowband systems already deployed. UWB systems allow two different approaches to data transmission.

The first approach uses ultra-short pulses, often called impulse radio transmission, in the picosecond range, covering all frequencies simultaneously. The second approach uses OFDM, or orthogonal frequency division multiplexing, to subdivide the entire UWB bandwidth into a set of broadband channels.

While the first approach is cost-effective, it suffers some degradation of the signal-to-noise ratio. Impulse radio transmission does not involve a carrier; therefore, it uses a simpler transceiver architecture than traditional narrowband transceivers, with the UWB antenna radiating the signal directly. An easy-to-generate UWB pulse is the Gaussian monocycle or one of its derivatives.
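For illustration only, a Gaussian monocycle is simply the first derivative of a Gaussian pulse; the pulse-width parameter below is an arbitrary choice, not a value from any particular UWB system.

```python
# Gaussian monocycle: the first derivative of a Gaussian pulse, a common UWB waveform.
# The 100 ps pulse-width parameter is an arbitrary illustrative choice.
import numpy as np

tau = 100e-12                               # pulse shape parameter (assumed), seconds
t = np.linspace(-1e-9, 1e-9, 2001)          # 2 ns time window

gaussian = np.exp(-(t / tau) ** 2)
monocycle = (-2 * t / tau**2) * gaussian    # derivative of the Gaussian
monocycle /= np.max(np.abs(monocycle))      # normalize amplitude to ±1

print(f"Positive peak at t ≈ {t[np.argmax(monocycle)] * 1e12:.0f} ps")
```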

The second approach offers better performance, as it uses the spectrum significantly more effectively. Although the complexity is higher because the system requires more signal processing, it substantially improves data throughput. However, the higher performance comes at the expense of higher power consumption. The application defines the choice between the two approaches.
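Purely as an illustration of the channelized idea, the sketch below splits the UWB band into equal broadband sub-channels; the 500 MHz sub-band width is an arbitrary example, not a value taken from any specific standard.

```python
# Illustrative subdivision of the UWB band into equal broadband channels for OFDM use.
# The 500 MHz sub-band width is an arbitrary example, not a specific standard's value.
band_start_hz, band_stop_hz = 3.1e9, 10.6e9
subband_hz = 500e6

n_subbands = int((band_stop_hz - band_start_hz) // subband_hz)
centers_ghz = [(band_start_hz + subband_hz * (i + 0.5)) / 1e9 for i in range(n_subbands)]

print(f"{n_subbands} sub-bands; first three centers: {centers_ghz[:3]} GHz")
# -> 15 sub-bands; first three centers: [3.35, 3.85, 4.35] GHz
```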

Are Lithium Iron Phosphate Batteries Better?

According to the latest developments in batteries, LFP, or Lithium Iron Phosphate, battery technology is going to pose a serious challenge to the omnipresent Lithium-ion type.

As far as e-mobility is concerned, Lithium-ion batteries have some serious disadvantages. These include higher cost and lower safety as compared to other chemistries. On the other hand, recent advancements in battery pack technology have led to an enhancement in the energy density of LFP batteries so that they are now viable for all kinds of applications related to e-mobility—not only in vehicles but also in shipping, such as in battery tankers.

In their early years of development, LFP cells had a lower energy density than Lithium-ion cells. Improved packaging technology bumped the energy density up to about 160 Wh/kg, but this was still not enough for e-mobility applications.

With further improvements in technology, LFP batteries now operate better at low temperatures, charge faster, and have a longer cycle life. These features are making them more appealing for many applications, including their use in electric cars and in battery tankers.

However, LFP batteries still continue to face several challenges, especially in applications involving high power. This is mainly due to the unique crystal structure of LFP, which reduces its electronic conductivity. Scientists have been experimenting with different approaches, such as reducing the directional crystal growth or particle size, using different conductive layer coatings, and element doping. These have not only helped to improve the electronic conductivity but have increased the thermal stability of the batteries as well.

Comparing LFP batteries with the Lithium-ion types shows them to have individual advantages in different key characteristics. For instance, Lithium-ion batteries offer higher cell voltages, higher power density, and better specific capacity. These characteristics lead to Lithium-ion batteries offering higher volumetric energy density suitable for achieving longer driving ranges.

In contrast, LFP batteries offer a longer cycle life, better safety, and better rate capability. As the risk of thermal runaway, in case of mechanical damage to a cell, is also much lower, these batteries are now popularly used for commercial vehicles with frequent access to charging, such as scooters, forklifts, and buses.

It is also possible to fully charge LFP batteries in each cycle, in contrast to having to stop at 80% to avoid overcharging some types of Lithium-ion batteries. Although this does allow simplification of the battery management algorithm, it adds other complexities for Battery Management Systems managing LFP cells.

Another key advantage of LFP batteries is that they do not require cobalt and nickel in their cathodes. The industry fears that sourcing these metals will become more difficult in the coming years: even with mining output of both elements projected to double by 2030, supply may not meet the increase in demand.

All of the above is making LFP batteries look increasingly attractive for e-mobility applications, with more car manufacturers planning to adopt them in their future cars.

What are Flash SSDs?

Earlier, we used traditional hard disk drives in our computers; these were mechanically spinning magnetic disks with read-write heads. Nowadays, we use SSDs, or Solid-State Drives, that have no moving parts. SSDs retain saved data without power because they use NAND flash memory. To increase data density, NAND cells can store more than one bit of information each. SSDs are accordingly named single-, multi-, triple-, quad-, and penta-level cell SSDs, according to the number of bits each cell holds.
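The density gain comes from the number of charge states a cell must hold, which grows as a power of two with the bits per cell; the quick sketch below just tabulates that relationship.

```python
# Bits per cell versus the number of charge states a NAND cell must distinguish.
cell_types = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4, "PLC": 5}

for name, bits in cell_types.items():
    states = 2 ** bits    # more states per cell means higher density but lower endurance
    print(f"{name}: {bits} bit(s) per cell, {states} charge states")
```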

Each cell type has its own advantages and shortcomings, ranging from speed to price to reliability. For instance, SLC, or single-level cell, flash has a lifespan of about 50,000 to 100,000 program/erase cycles and can withstand high-intensity write operations.

MLCs or multi-level cells with two bits per cell can expect a lifespan of about 10,000 cycles and are mostly suitable for enterprise data centers.

TLCs or triple-level cells with three bits per cell can expect a lifespan of about 3,000 cycles and are useful for digital consumer products.

QLCs or quad-level cells with four bits per cell can expect a lifespan of about 2,000 cycles and are suitable for read-heavy operations, streaming media, and content delivery applications.

No data is available for the lifespan of PLCs or penta-level cells with five bits per cell. These SSDs are suitable for long-term storage of data such as in data archives.

Flash SSDs have revolutionized the storage of enterprise data in all its forms. They have enabled faster boot times and application starts on PCs and mobile devices, and they have facilitated the blistering performance of storage arrays in workloads like business analytics. In most performance metrics, flash SSDs far outshine older hard disk drives.

Speed aside, flash SSDs offer additional benefits. They are far more durable, being less susceptible to damage from abrupt physical shocks and movement than traditional HDDs. Additionally, they use much less power to operate. Even though they cost more than HDDs per gigabyte, the improved performance of SSDs outweighs their higher expense for most applications.

Flash SSDs store data in memory cells built from floating-gate transistors (FGTs), each of which can store a binary 0 or 1. With two gates, each FGT behaves like an electrical switch with current flowing between two points. NAND flash gets its name from its use of NOT-AND (NAND) logic-gate structures. Power is not necessary to retain data in the flash cells because, in the absence of power, the floating gate holds its electrical charge, keeping the data in the memory cells intact.

Flash SSDs are solid state, meaning they have no mechanical parts to wear out. However, SSDs can nonetheless fail. One measure of an SSD is its lifespan, the number of program/erase cycles the drive can complete before it degrades and fails. Manufacturers address this with wear-leveling technology, which prolongs the life of the SSD by distributing program/erase cycles evenly across all the NAND cells in the drive.
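A toy sketch of the wear-leveling idea, always directing the next write to the least-worn block, is shown below; real SSD controllers use far more elaborate firmware.

```python
# Toy wear-leveling sketch: direct each write to the block with the fewest erase cycles.
# Real SSD controllers use far more sophisticated firmware; this only illustrates the idea.
erase_counts = [0] * 8              # erase-cycle counters for 8 hypothetical NAND blocks

def write_block(data: bytes) -> int:
    """Pick the least-worn block, consume one program/erase cycle on it, return its index."""
    block = erase_counts.index(min(erase_counts))
    erase_counts[block] += 1
    return block

for _ in range(25):
    write_block(b"payload")

print(erase_counts)                 # wear spreads evenly, e.g. [4, 3, 3, 3, 3, 3, 3, 3]
```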

New Circuit Protection Technologies

A wide variety of vehicle models is entering the EV market these days, and the demand is for shorter charging times and longer range. This heightens the challenges not only for electrical system performance but also for circuit protection.

For instance, decreasing charging times requires systems using higher voltages and higher currents. This has necessitated the shift from 400 V systems to 800 V, bringing with it major challenges for circuit protection design, especially on the battery side, because manufacturers must now account for the increased fault currents that protection components must handle.

With motor currents and power ramping up, circuit protection and switching devices also face higher stresses. They now need to withstand not only the higher operating currents but also the higher cycling requirements. Increased range means higher fault currents.

Therefore, circuit protection requirements are moving in several directions simultaneously. SiC MOSFETs, acting as solid-state resettable switches, address the high-voltage, low-current subsystems.

The power distribution box in the vehicle is still using the conventional system architecture of a coordinated fuse and contactor. Coordination between the two is necessary to ensure they cover the full range of possible faults from a range of underlying causes including different states of charge of the battery.

Another circuit protection technique is the pyrotechnic approach. This comes into play in catastrophic events, such as crashes, when it is necessary to physically cut the busbar. These systems are mostly triggered by the same circuits that deploy the airbags and work to quickly isolate the battery from the rest of the vehicle. This helps protect the driver, the passengers, and first responders from fire and explosion caused by short circuits through the body of the vehicle.

These demands are leading to the development of newer types of protection, such as the breaktor, which fully coordinates circuit protection and switching. Its design allows the breaktor to trigger passively, or to interrupt actively in case of power loss, thereby improving the functional safety of critical protection systems. Moreover, it has the ability to reset itself.

Another is the automotive precision bidirectional eFuse, which is becoming an increasingly common device in vehicles. Traditional automotive fuses can be low in accuracy and slow to react. This can be a safety issue, as the safety of the system is inversely related to the response time of a fuse. An eFuse offers not only high accuracy but also a short response time, which increases the safety of the system.
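One way to see why response time matters is the let-through energy during a fault, which scales with I²t. The fault current and response times below are arbitrary illustrative numbers, not measured values for any particular fuse.

```python
# Let-through energy during a fault scales as I^2 * t, so a faster trip means far less
# stress on the protected circuit. Values below are arbitrary illustrative numbers.
fault_current_a = 2000.0

for device, t_response_s in [("slow conventional fuse", 10e-3), ("fast eFuse", 10e-6)]:
    i2t = fault_current_a ** 2 * t_response_s     # A²·s let through before interruption
    print(f"{device}: I²t ≈ {i2t:,.0f} A²·s")
# slow conventional fuse: I²t ≈ 40,000 A²·s ; fast eFuse: I²t ≈ 40 A²·s
```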

However, there is a durability issue related to the fuses and contactors that vehicle manufacturers use. The solution is the pyrotechnic switch, a protection device based on a triggerable circuit similar in function to an airbag. It produces a controlled explosion to sever a conducting busbar. Pyrotechnic switches, while solving the coordination challenge, must rely on accurate triggering rather than on the passive reaction of fuses, and additional components are necessary to ensure reliable triggering.

All the above protection systems require a trade-off between speed and durability. While a big fuse can be slow to operate, a smaller one may be faster but may suffer from a fatigue risk.

CP Coolers for Storing Reagents

In analytical chemistry, various reagents are necessary to detect the presence or absence of a substance, or to check whether a specific reaction has occurred. To identify or measure a target substance, medical and laboratory technicians use reagents that cause a biological or chemical reaction to occur. For instance, biotechnologists use oligomers, model organisms, antibodies, and specific cell lines as reagents for identifying and manipulating cell matter. Such reagents, especially those that biotechnologists use, have narrow operating temperature windows and therefore require freezing or refrigeration.

If kept at room temperature, these temperature-sensitive reagents may degrade or become contaminated by microbial growth, compromising their testing integrity. Most of these reagents will degrade and deteriorate within hours if stored without proper and precise refrigeration. Moreover, some reagents are negatively affected if the storage temperature is too low, or if they are subjected to multiple freeze-thaw cycles. Precise monitoring and stabilization of temperature below ambient is critical for extending the life of reagents, ensuring the accuracy and reliability of medical and laboratory tests, and keeping replacement costs down.

Manufacturers are using thermoelectric-based cooling solutions for precise temperature control. These are solid-state heat-pump devices that move heat via the thermoelectric effect. In operation, direct current flowing through the cooler creates a temperature differential across the module, allowing one side of the thermoelectric cooler to get cold and absorb heat while the other side heats up and dissipates it.

In actual operation, manufacturers typically attach the hot side of thermoelectric coolers to forced-convection heat sinks to help dissipate the heat to the ambient. The action is reversible: by reversing the current flow, the thermoelectric cooler can be made to heat the cold side. Combined with adequate control circuitry, this dual capability enables precise temperature control in the unit.
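A very rough sketch of that bidirectional control idea, where the sign of the drive selects cooling or heating, might look like the following; the setpoint, deadband, and proportional gain are assumptions, not values from any Laird product.

```python
# Rough sketch of bidirectional thermoelectric control: a positive drive cools the chamber,
# a negative drive (reversed current) heats it. Setpoint, deadband, and gain are assumptions.
SETPOINT_C = 4.0      # assumed target storage temperature
DEADBAND_C = 0.5      # hysteresis band to avoid rapid switching

def tec_drive_fraction(measured_c: float) -> float:
    """Return a drive level in [-1, 1]; its sign sets the direction of current flow."""
    error = measured_c - SETPOINT_C
    if abs(error) <= DEADBAND_C:
        return 0.0                                  # within the band: no drive needed
    return max(-1.0, min(1.0, error / 5.0))         # simple proportional drive, clamped

for temp_c in (10.0, 4.2, 1.0):
    print(f"{temp_c:5.1f} °C -> drive {tec_drive_fraction(temp_c):+.2f}")
# 10.0 °C -> +1.00 (cool), 4.2 °C -> +0.00 (hold), 1.0 °C -> -0.60 (heat)
```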

Compared to conventional technologies like compressor-based systems, thermoelectric coolers such as the CP10-31-05 from Laird Thermal Systems deliver accurate temperature control in a more compact, stable, efficient, and reliable package. No refrigerants are necessary for their operation, making them environmentally friendly.

Featuring solid-state construction with no moving parts, the CP series of thermoelectric coolers operates extremely reliably, silently, and at low power. Their small footprint allows designers to integrate them flexibly into various instruments, and because of their solid-state operation, they can be mounted in any orientation.

Laird Thermal Systems has designed its CP series as compact and rugged thermoelectric cooling products. They operate at higher currents, making them suitable for large heat-pumping applications like reagent storage systems. Designers mount the CP series coolers near the storage chamber to regulate the temperature within the reagent chamber accurately and closely. The CP series coolers offer a direct-to-air configuration, with a maximum cooling power of about 125 W and a temperature differential of 67 °C at an ambient temperature of 25 °C.

The CP series of thermoelectric coolers is available in a wide range of capacities, shapes, and power ratings to meet the varied requirements of reagent cooling.

Differences Between Brushed and Brushless Motors

Motors govern our lives in multiple ways. They are the basic machines assisting us in everything from simple transportation to the sophisticated movement of a large variety of tools. There are many types of motors, operating on both alternating current and direct current supplies. Of the motors operating on direct current, there are two major categories, brushed and brushless, with differences in construction, structure, and operation that affect their performance.

Both brushed and brushless motors operate on electromagnetic (EM) principles, converting electrical energy into mechanical rotary movement. Both types pass electricity through copper windings, creating interacting electromagnetic fields that cause the rotor to rotate and produce mechanical energy. However, their design concepts differ, making them differ in performance, cost, and maintenance.

Of the two, the brushed motor is the older design, having been available for over a century. It has a simple structure with two sets of coils, one on the stator and the other on the rotor. A pair of carbon brushes delivers power to the coils on the rotor. Typically, brushed motors have four major parts: stator, rotor, commutator, and brushes.

The stator is the stationary part of the motor. It contains the stator windings or permanent magnets. The rotor, as the name suggests, is the rotating part, attached to the shaft. It has several rotor coils that, when powered, create an electromagnetic field to interact with the EM field of the stator. The commutator is a sectioned metal ring to ensure each rotor winding receives power as it rotates. It helps in reversing the polarity of the current through the rotor windings every half turn of the rotor. Brushes are stationary carbon electrodes that feed power to the rotor windings through the commutator.

As current passes through the stator and rotor windings, their EM fields either attract or repel each other, depending on their relative positioning. This makes the rotor turn and thereby changes the commutator connection to the brushes. The current now passes through the next rotor coil and propels the rotor further in the same direction as before. This goes on until the rotational friction balances the EM interaction, at which point the motor's rotational speed stabilizes.

Once transistors became common in electronics, brushless motors started gaining popularity. Brushless motors also have four major parts: stator, rotor, sensors, and control circuits. Here too, the stator is the stationary part of the motor, carrying several copper coils that generate EM fields when powered. The rotor is the moving part attached to the motor shaft, but rather than coils, it carries permanent magnets that provide their own magnetic fields. Hall-effect sensors sense the position of the rotor magnets with respect to the stator coils. The control circuit replaces the commutator and brushes, deciding which stator coils to power next.
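The control circuit's job can be pictured as a lookup from the hall-sensor state to the pair of stator phases to energize. The six-step table below is a generic illustration; the actual mapping depends on the motor and the sensor placement.

```python
# Generic six-step BLDC commutation sketch: hall-sensor state -> stator phases to energize.
# The actual mapping depends on the motor and sensor placement; this table is illustrative.

# (H1, H2, H3) -> (phase driven high, phase driven low)
COMMUTATION_TABLE = {
    (1, 0, 1): ("A", "B"),
    (1, 0, 0): ("A", "C"),
    (1, 1, 0): ("B", "C"),
    (0, 1, 0): ("B", "A"),
    (0, 1, 1): ("C", "A"),
    (0, 0, 1): ("C", "B"),
}

def next_drive(hall_state: tuple) -> tuple:
    """Return which stator phases to power for the sensed rotor position."""
    return COMMUTATION_TABLE[hall_state]

print(next_drive((1, 0, 0)))    # ('A', 'C'): drive current from phase A into phase C
```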

Brushless motors are more efficient as compared to brushed motors, and they provide higher torque, faster acceleration, lower noise, and lower maintenance. However, brushless motors are more expensive and heavier.

The Electronic Vampire Power Loss

As the use of smart electronic gadgets increases in our lives, we have grown used to having them available at any instant. Nowadays, no one is willing to switch on a piece of equipment and wait for it to become operational; we expect it to be instantly on and active. This expectation means the equipment must remain always on, consuming power.

However, this posed an additional problem for battery-powered equipment, as the always-on status drained batteries very quickly. Therefore, designers gave electronic equipment a standby state, which reduces its power consumption to a substantially lower level. This standby power loss is also known as vampire power loss.

Considering the total number of electronic devices each of us uses at home, at the office, and on the move, the total vampire power loss is substantial, enough to strain the power infrastructure while costing people and businesses money in wasted energy. This is largely because electronic devices are constantly connected and sit in standby when not in direct use.

For instance, even a legacy consumer product like an older TV set can waste a meaningful amount of money every year while sitting in standby, and almost all modern products waste money this way. That means an apartment building may be wasting thousands of dollars a year on products that are only waiting for their owners to use them. This not only affects operating costs but also impacts performance aspects like power factor correction in every home.
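As a back-of-the-envelope illustration of how those losses add up (the standby wattage, electricity price, and device count are assumed example values):

```python
# Back-of-the-envelope vampire-power cost: standby watts x hours per year x tariff.
# Standby draw, tariff, and device count are assumed example values.
standby_w = 5.0                       # assumed standby draw of one device
tariff_per_kwh = 0.15                 # assumed electricity price, $/kWh
hours_per_year = 24 * 365

cost_per_device = standby_w / 1000 * hours_per_year * tariff_per_kwh
print(f"One device: ${cost_per_device:.2f} per year")               # ≈ $6.57

devices_in_building = 500             # e.g., an apartment building full of idle gadgets
print(f"Building: ${cost_per_device * devices_in_building:,.0f} per year")   # ≈ $3,285
```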

Energy Star, a joint program of the US Department of Energy (DOE) and the US Environmental Protection Agency, is addressing this issue. One can find the Energy Star label on more than 75 certified product categories in homes, commercial buildings, and industrial plants. Another effort is EU Directive 92/75/EC, which established an energy consumption labeling scheme under which most white goods, cars, and televisions must display an EU Energy Label.

Design engineers are addressing vampire power loss in two ways: a top-down approach and a bottom-up approach. The top-down approach uses an advanced, microcontroller-based circuit topology. The microcontroller closely manages each on-chip peripheral and shuts down any unnecessary components, like the display driver, when entering standby mode. This strategy is most useful in larger circuits, where there are many non-essential subsystems to power down when not in use.

Although the top-down approach is necessary and important, it is mostly a reactive approach: it can only manage energy consumption as well as the underlying power circuit allows. If the power circuit is not efficient, the overall performance of the system remains limited.

On the other hand, the bottom-up approach begins with the power electronic components on the board. In this approach, operating at a higher efficiency level and using advanced subsystem power management has a much greater effect because it starts from a low standby power baseline. With a system that is more efficient at its base level, the designer can effectively leverage several circuit-optimization methodologies. For instance, modern switching transistors offer performance that brings a cascading benefit to the rest of the subsystem.