Category Archives: Guides

Efficiency and Performance of Edge Artificial Intelligence

Artificial Intelligence, or AI, is a very common phrase nowadays. We encounter AI in smart home systems, in intelligent machines we operate, in the cars we drive, and even on the factory floor, where machines learn from their environments and can eventually operate with minimal human intervention. However, for the above cases to succeed, computing technology had to develop to the point where users could decentralize it to the place in the network where the system generates data, typically known as the edge.

Edge artificial intelligence or edge AI makes it possible to process data with low latency and at low power. This is essential, as a huge array of sensors and smart components forming the building blocks of modern intelligent systems can typically generate copious amounts of data.

The above makes it imperative to measure the performance of an edge AI deployment to optimize its advantages. Gauging the performance of an edge AI model requires specific benchmarks that can indicate performance on standardized tests. However, there are nuances in edge AI applications, as the application itself often influences the configuration and design of the processor. Such distinctions often prevent the use of generalized performance parameters.

In contrast with data centers, a multitude of factors constrain the deployment of edge AI. Primary among them are physical size and power consumption. For instance, the automotive sector is witnessing a huge increase in electric vehicles carrying a host of sensors and processors for autonomous driving. Manufacturers must implement them within the limited capacity of the vehicle's battery supply. In such cases, power efficiency parameters take precedence.

In another application, such as home automation, the dominant constraint is the physical size of the components. The design of AI chips, therefore, must use these restrictions as guidelines, with the corresponding benchmarks reflecting the adherence to these guidelines.

Apart from power consumption and size constraints, the application in which the machine learning model is deployed also shapes the requirements of the processor, which imposes specific demands when analyzing its performance. For instance, benchmarks for a chip in a factory utilizing IoT for detecting objects will differ from those for a speech recognition chip. Therefore, estimating edge AI performance requires developing specific benchmarking parameters that reflect real-world use cases.

For instance, in a typical modern automotive application, sensors like computer vision, LiDAR, etc., generate the data that the AI model must process. In a single consumer vehicle fitted with an autonomous driving system, this can easily amount to generating two to three terabytes of data per week. The AI model must process this huge amount of data in real-time, and provide outputs like street sign detection, pedestrian detection, vehicle detection, and so on. The volume of data the sensors produce depends on the complexity of the autonomous driving system, and in turn, determines the size and processing power of the AI core. The power consumption of the onboard AI system depends on the quality of the model, and the manner in which it pre-processes the data.
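
A back-of-the-envelope estimate shows how quickly such volumes accumulate. The Python sketch below uses purely illustrative per-sensor data rates and drive times, not figures from any specific vehicle, to arrive at a weekly total in the terabyte range.

```python
# Illustrative estimate of weekly raw data volume from an autonomous-driving
# sensor suite. Per-sensor rates and daily drive time are assumptions.

SENSOR_RATES_MBPS = {          # average raw output, megabytes per second
    "cameras (x8)": 40.0,
    "lidar": 10.0,
    "radar (x4)": 1.0,
    "gnss/imu": 0.1,
}

def weekly_volume_tb(rates_mbps, hours_per_day=2.0):
    """Total data generated in one week, in terabytes."""
    seconds_per_week = hours_per_day * 3600 * 7
    total_mb = sum(rates_mbps.values()) * seconds_per_week
    return total_mb / 1e6   # MB -> TB

print(f"{weekly_volume_tb(SENSOR_RATES_MBPS):.2f} TB/week")   # about 2.6 TB
```

Halving the drive time or compressing the camera streams scales the total proportionally, which is why pre-processing close to the sensors matters so much.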

Differences between USB-PD and USB-C

With all the electronic devices we handle every day of our lives, it is a pain to manage an equally large number of cables for charging them and transferring data. So far, a single standard connector to rule all the gadgets has proven elusive. A format war opens up, with one faction emerging victorious for a few years, until overtaken by newer technology. For instance, VHS overtook Betamax, then DVD ousted VHS, until Blu-ray overtook the DVD, and Blu-ray is now hardly visible with the onslaught of online streaming services.

As the full name behind its acronym, Universal Serial Bus, suggests, USB-C has proven to be different and possibly even truly universal. USB-C ports are now a part of almost all manner of devices, from simple Bluetooth speakers to external hard drives to high-end laptops and ubiquitous smartphones. Although all USB-C ports look alike, they do not all offer the same capabilities.

The USB-C, being an industry-standard connector, is capable of transmitting both power and data on a single cable. It is broadly accepted by the big players in the industry, and PC manufacturers have readily taken to it.

USB-PD or USB Power Delivery is a specification for allowing the load to program the output voltage of a power supply. Combined with the USB-C connector, USB-PD is a revolutionary concept as devices can transmit both data and power as the adapter adjusts to the power requirements of the device to which it connects.

With USB-PD, it is possible to charge and power multiple devices, such as smartphones and tablets, with each device drawing only the power it requires.

However, USB-C and USB-PD are two different standards. The USB-C standard is basically a description of the physical connector. Using a USB-C connector does not imply that the adapter has USB-PD capability; anyone can choose to use a USB-C connector in their design without conforming to USB-PD. Even without USB-PD, a USB-C connector lets the user transfer data and moderate power (up to 15 W at 5 VDC) over the same cable; the higher levels, up to 240 W, require USB-PD. In addition, the USB-C connector is symmetrical and reversible, which makes it easy to insert and use.

Earlier USB power standards were limited, as they could not provide multiple levels of power for different devices. Using the USB-PD specifications, the device and the power supply can negotiate for optimum power delivery. How does that work?

First, each device starts with an initial power level of up to 10 W at 5 VDC. From this point, power negotiations start. Depending on the needs of the load, the device can transfer power up to 240 W.

In the USB-PD negotiation, there are fixed voltage steps at 5 VDC, 9 VDC, 15 VDC, and 20 VDC, with the Extended Power Range revision of the specification adding 28, 36, and 48 VDC. Across these steps, devices can negotiate power output from 0.5 W up to 240 W by varying the current.
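
As a rough illustration of this negotiation, the sketch below models a source's fixed-voltage capabilities as simple (voltage, current) pairs and picks the lowest voltage that satisfies the sink's request. The PDO list and the selection rule are simplified assumptions; a real sink negotiates over the CC wire using the full PD message protocol.

```python
# Simplified model of USB-PD fixed-supply capability selection.
# The PDO list and selection rule are illustrative assumptions.

# (volts, max amps) pairs a 100 W adapter might advertise
SOURCE_PDOS = [(5.0, 3.0), (9.0, 3.0), (15.0, 3.0), (20.0, 5.0)]

def select_pdo(pdos, required_watts):
    """Pick the lowest-voltage PDO that can deliver the requested power."""
    for volts, amps in sorted(pdos):
        if volts * amps >= required_watts:
            return volts, amps
    return None   # request exceeds every advertised capability

print(select_pdo(SOURCE_PDOS, 27.0))   # a tablet requesting 27 W -> (9.0, 3.0)
print(select_pdo(SOURCE_PDOS, 65.0))   # a laptop requesting 65 W -> (20.0, 5.0)
```

Choosing the lowest workable voltage keeps the sink's internal conversion losses down, which is one plausible selection policy; actual devices may request differently.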

With USB-PD, it is possible to handle higher power levels at the output, as it allows a device to negotiate the power levels it requires. Therefore, USB power adapters can power more than one device at optimum levels, allowing them to achieve faster charge times.

Importance of Vibration Analysis in Maintenance

For those engaged in maintenance practices, it is necessary to ensure that the decision to replace or repair comes well before a complete failure of key components. Vibration analysis is one of the easiest ways to mitigate this risk.

With vibration analysis, it is possible to detect early signs of machine deterioration or failure. This allows timely replacement or repair of machinery before a catastrophic or system-wide functional failure can occur.

The laws of physics dictate that all rotating machinery vibrates. As components begin to deteriorate or reach the end of their serviceable life, they begin to vibrate differently, and some may even begin to vibrate more strongly.

This makes analyzing vibration so important while monitoring equipment. Using vibration analysis, it is possible to identify many known modes of failure that are indicators of wear and tear. It is also possible to assess the extent of future damage before it becomes irretrievable and impacts the business or its finances.

Therefore, vibration monitoring and analysis can detect machine problems like process flow issues, electrical issues, loose fasteners, loose mounts, loose bolts, component or machine imbalance, bent shafts, gear defects, impeller operational issues, bearing deterioration, misalignment, and many more.

In industry, vibration analysis helps in avoiding serious equipment failure. Modern vibration analysis offers a comprehensive snapshot of the health of a specific machine. Modern vibration analyzers can display the complete frequency spectrum of the vibration with respect to time for all three axes simultaneously.

However, for interpreting this information properly, the person analyzing the information must understand the basics of the analysis, the failure modes of the machine, and their application.

For this, it is necessary to ensure the gathering of complete information. It is essential to gather a full vibration signature from all three axes, the axial, vertical, and horizontal axes, not only for the driven equipment but also for both ends of the driver motor. It is also necessary to ensure the capability to resolve all indications of failure from the dataset.

Furthermore, busy personnel may take a reading on only one axis. This can be problematic, as the fault may appear on any of the three axes; unless all three are tested, there is a good chance of missing the issue. Comprehensive and careful analysis of the time waveform can reveal several concerns.
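
The point is easy to demonstrate with synthetic data. In the Python sketch below, a hypothetical 120 Hz fault tone appears only on the horizontal axis; a spectrum taken on the vertical axis alone would miss it entirely. The signal frequencies and detection threshold are illustrative assumptions, not real accelerometer data.

```python
# Synthetic demonstration: a fault tone present only on the horizontal
# axis is invisible in a vertical-axis spectrum.
import numpy as np

FS = 1000                       # sample rate, Hz
t = np.arange(0, 1.0, 1 / FS)   # one second of data

signals = {
    "vertical":   np.sin(2 * np.pi * 30.0 * t),            # 1x running speed
    "horizontal": np.sin(2 * np.pi * 30.0 * t)
                  + 0.8 * np.sin(2 * np.pi * 120.0 * t),   # fault tone
    "axial":      np.sin(2 * np.pi * 30.0 * t),
}

def dominant_peaks(x, fs, threshold=0.3):
    """Frequencies (Hz) whose spectral amplitude exceeds the threshold."""
    amps = np.abs(np.fft.rfft(x)) * 2 / len(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return [round(f, 1) for f, a in zip(freqs, amps) if a > threshold]

for axis, x in signals.items():
    print(axis, dominant_peaks(x, FS))   # only "horizontal" shows 120.0 Hz
```

A real analyzer does the same thing at higher resolution, which is why full three-axis signatures, not single-axis spot checks, are the recommended practice.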

This also makes it easier to predict issues and carry out beneficial predictive maintenance successfully. Nevertheless, reactive maintenance remains widespread in industry. This is the run-to-failure approach: in most cases, the concern is fixed only after it happens.

To make reactive maintenance as effective as possible in the long run, condition monitoring and vibration analysis are essential. The approach helps to ensure the detection of problems at the onset of failure. That makes fixing the issue cheaper, easier, and faster.

On the other hand, there is a completely opposite approach, that of predictive maintenance. This involves monitoring the machinery while it is operating. The purpose is to predict the parts likely to fail. Vibration analysis is a clear winner here as well.

What is a Reed Relay?

A reed relay is basically a combination of a reed switch and a coil for creating a magnetic field. Users often add a diode for handling any back EMF from the coil, but this is optional. The entire arrangement is a very low-cost device that is simple to manufacture.

The most complex construction in the reed relay is the reed switch. As the name suggests, the switch has two reed-shaped metal blades made of a ferromagnetic material. A glass envelope encloses the two blades, holding them in place facing each other, and providing a hermetic seal preventing entry of contaminants. Typically, reed switches have open contacts in a normal state, meaning the two metal blades do not touch when not energized.

The presence of a magnetic field along the axis of the reed switch induces the reeds to magnetize, which attracts them to each other. The reeds, therefore, bend to close the gap. If the applied field is strong enough, the blades bend to touch each other, thereby forming an electrical contact.

The only movement within the reed switch is the bending of the blades. The reed switch has no part that slides past another or pivot points. Therefore, it is safe to say the reed switch has no moving parts that may wear out mechanically. Moreover, an inert gas surrounds the contact area within the hermetically sealed glass tube. For high-voltage switches, a vacuum replaces the inert gas. With the switch area being enclosed against external contaminants, the reed switch has an exceptionally long working life.

The size of a reed switch is a design variable. Compared with shorter switches, the reeds in longer switches do not need to deflect as sharply to close a given gap between the blades. To make the reeds in more miniature switches bend easily enough, they must be made of thinner material, and this has an impact on the switch's current rating. However, small switches allow for more miniature reed relays, which are useful in tighter spaces. On the other hand, larger switches are mechanically more robust, can carry higher currents, and have a greater contact area, and hence a lower contact resistance.

A magnetic field, of adequate strength, is necessary to operate a reed relay. It is possible to operate a reed relay by bringing a permanent magnet close to it. However, in the field, a coil surrounding the reed relay typically generates the magnetic field. A control signal forces a current through the coil, which creates the axial magnetic field necessary for closing the reed contacts.

Different models of reed switches need different levels of magnetic field to operate and close the contacts. Manufacturers specify this in ampere-turns, or AT, which is the product of the current flow and the number of turns in the coil. As a result, there is a huge variation in the characteristics of the reed relays available. Stiffer reed switches, and those with larger contact gaps, need higher AT levels to operate, so their coils demand more drive power.
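
A quick back-of-the-envelope sketch shows how the AT rating translates into coil drive requirements. The 15 AT operate rating, 5000-turn coil, 500-ohm winding, and 50% overdrive margin below are all illustrative assumptions, not figures from any specific part.

```python
# Back-of-the-envelope coil sizing for a reed relay. The operate rating,
# turn count, winding resistance, and margin are illustrative assumptions.

def coil_current_ma(operate_at, turns, margin=1.5):
    """Coil current (mA) needed to reach the operate AT, with overdrive margin."""
    return operate_at * margin / turns * 1000

def coil_power_mw(current_ma, resistance_ohm):
    """I^2 * R power dissipated in the coil winding, in milliwatts."""
    return (current_ma / 1000) ** 2 * resistance_ohm * 1000

i_ma = coil_current_ma(operate_at=15, turns=5000)
print(f"drive current: {i_ma:.1f} mA")                     # 4.5 mA
print(f"coil power:    {coil_power_mw(i_ma, 500):.1f} mW")  # about 10 mW
```

Doubling the required AT at the same turn count doubles the current but quadruples the coil dissipation, which is why stiff, wide-gap switches cost noticeably more drive power.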

LDOs for Portables and Wearables

As electronic devices get increasingly smaller in form factor, they are also becoming more portable and relying more on battery power. These devices include security systems, fitness trackers, and Internet of Things or IoT devices. The design of such tiny devices demands high-efficiency power regulators that can make use of every milliwatt of power from each charge for extending the working life of the device. The efficiency of traditional linear regulators and switch-mode power regulators falls woefully short of the requirements. Moreover, transient voltages and noise in switch-mode power regulators are detrimental to their performance.

A refinement of the linear regulator is the LDO, or low-dropout voltage regulator. It lowers thermal dissipation while improving efficiency by operating with a very low voltage drop across the regulator. Low-to-medium power applications are well served by various types of LDOs, which are available in minuscule packages of 3 x 3 x 0.6 mm. In addition, there are LDOs with fixed or adjustable output voltages, including some versions with on-off control of the output.

A voltage regulator must maintain a constant output voltage even when the source or load voltages change. Traditional voltage regulator devices operate in one of two ways—linear or switched mode. While LDO regulators are linear regulators, they operate with a very low voltage difference between their output and input terminals. As with other linear voltage regulators, LDOs also function with feedback control.

This feedback control of the LDO functions via a resistive voltage divider that scales the output voltage. The scaled voltage enters an error amplifier that compares it to a reference voltage. The resulting output of the error amplifier drives the series pass element to maintain the output terminal with the desired voltage. The dropout voltage of the LDO is the difference between the input and output voltages, and this appears across the series pass element.

The series pass element of an LDO functions like a resistor whose value varies with the drive voltage from the error amplifier. LDO manufacturers use various devices for the series pass element: a PMOS device, an NMOS device, or a PNP bipolar transistor. While it is possible to drive the PMOS and PNP devices into saturation, the dropout voltage for PMOS-type FETs depends on their drain-to-source on-resistance. Although each of these devices has its own advantages and disadvantages, PMOS devices offer the lowest implementation cost for the series pass element. For instance, positive LDO regulators from Diodes Incorporated with PMOS pass devices feature dropout voltages of about 300 mV at an output voltage of 3.3 V and a load current of 1 A.
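
The two basic relationships described above, the feedback divider setting the output voltage and the dropout appearing across the pass element, can be sketched in a few lines. The 0.8 V reference and the resistor values below are illustrative assumptions, not values from any particular datasheet.

```python
# Sketch of the two basic LDO design equations. The reference voltage
# and resistor values are illustrative assumptions.

def ldo_vout(v_ref, r_top, r_bottom):
    """Output voltage set by the feedback divider: Vref * (1 + Rt/Rb)."""
    return v_ref * (1 + r_top / r_bottom)

def pass_dissipation_w(v_in, v_out, i_load):
    """Power burned in the series pass element: (Vin - Vout) * Iload."""
    return (v_in - v_out) * i_load

v_out = ldo_vout(v_ref=0.8, r_top=31.25e3, r_bottom=10e3)
print(f"Vout  = {v_out:.2f} V")                            # 3.30 V
# 3.3 V out from a 3.6 V input at 1 A: only 0.3 W lost in the pass device
print(f"Pdiss = {pass_dissipation_w(3.6, v_out, 1.0):.2f} W")
```

The second function also shows why a low dropout matters: the smaller the headroom the regulator needs, the less power it must burn for a given load current.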

The LDO requires an output capacitor, and the inherent ESR, or equivalent series resistance, of this capacitor affects the stability of the circuit. The capacitor used must therefore have an ESR of 10 ohms or lower to guarantee stability over the entire operating temperature range. Typically, these capacitors are multilayer ceramic, solid-state E-CAP, or tantalum types, with values upwards of 2.2 µF.

Thermal Interface Materials for Electronics

As the name suggests, TIMs are Thermal Interface Materials that the electronic industry typically uses between two mating surfaces. They help to conduct heat from one metal surface to another. TIMs are a great help in thermal management, especially when removing heat from a semiconductor device to a heat sink. By acting as a filler material between the two mating surfaces, TIMs improve the efficiency of the thermal management system.

There are various types of material that can act as TIMs, and there are important factors that designers must consider when selecting a specific material to act as a TIM for a unique application.

Every conductor has its own resistance, which impedes the flow of electrical current through it. Impressing a voltage across a conductor sets its free electrons moving. The moving electrons collide with other atomic particles within the conductor, and these collisions generate thermal energy, or heat.

In electronic circuits, active devices or processing units like CPUs, TPUs, GPUs, and light-emitting diodes or LEDs generate copious amounts of heat when operating. Other passive devices like resistors and transformers also release high amounts of thermal energy. Increasing amounts of heat in components can lead to thermal runaway, ultimately leading to their failure or destruction.

Therefore, it is desirable to keep electronic components cool when operating, thereby ensuring better performance and reliability. This calls for thermal management to maintain the temperature of the device within its specified limits.

It is possible to use both passive and active cooling techniques for electronic components. It is typical for passive cooling methods to use natural conduction, convection, or radiation techniques for cooling down electronic devices. Active cooling methods, on the other hand, typically require the use of external energy for cooling down components or electronic devices.

Although active cooling can be more effective in comparison to passive cooling, it is more expensive to deploy. Using TIMs is an intermediate method to enhance the efficiency of passive cooling techniques, but without excessive expense.

Although the mating surfaces of the component and its heat sink may appear flat, in reality, they are not. They typically have tool marks and other imperfections such as pits and scratches. The presence of these imperfections prevents the two surfaces from forming close physical contact, leading to air filling the space between the two non-mating surfaces. Air, being a poor conductor of heat, introduces higher thermal resistance between the interfacing surfaces.

TIMs, being soft materials, fill the majority of the gaps between the mating surfaces, expelling the air from between them. In addition, TIMs conduct heat far better than air does, typically around 100 times better, and their use considerably improves the thermal management system. As such, many industrial and consumer electronic systems use TIMs widely to ensure efficient heat dissipation and prevent electronic components from getting too hot.
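
The roughly hundred-fold advantage over air can be sanity-checked with the basic conduction formula, thermal resistance = thickness / (conductivity x area). The gap thickness, contact area, and conductivity values in the sketch below are illustrative assumptions.

```python
# Sanity check of the air-versus-TIM comparison using theta = t / (k * A).
# Gap, area, and conductivity values are illustrative assumptions.

def gap_resistance_c_per_w(thickness_m, k_w_per_mk, area_m2):
    """Thermal resistance of a uniform interface gap, in degrees C per watt."""
    return thickness_m / (k_w_per_mk * area_m2)

GAP = 50e-6             # 50 micron effective gap left by surface imperfections
AREA = 0.02 * 0.02      # 20 mm x 20 mm package face

theta_air = gap_resistance_c_per_w(GAP, 0.026, AREA)   # still air
theta_tim = gap_resistance_c_per_w(GAP, 3.0, AREA)     # filled thermal grease

print(f"air: {theta_air:.2f} C/W, TIM: {theta_tim:.3f} C/W")
# At 30 W, this gap alone drops about 144 C with air but only ~1.3 C with the TIM.
```

The ratio of the two resistances is simply the ratio of the conductivities, which is where the "about 100 times" figure comes from.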

The electronic industry uses different forms of TIMs. These can be thermal tapes, greases, gels, thermal adhesives, dielectric pads, or phase-change materials (PCMs). The industry also uses more advanced materials such as pyrolytic graphite, which is thermally anisotropic.

New MEMS Switches Accelerate Testing

Testing advanced digital processor ICs can be costly and logistically challenging. This is because testing these ICs requires isolated DC parametric test equipment as well as high-speed digital ATE, or automatic test equipment, to assure quality. New MEMS switch technology from ADI, working at up to 34 GHz, offers both DC and high-speed digital testing, despite having a small form factor in a 5 x 4 x 0.9 mm LGA package. These switches reduce test costs and simplify the logistics necessary for testing RF/digital SoCs, or systems on chips.

There are many high-speed chips on the market, including those with high-density inter-chip communications for advanced processors. Such advanced processors are the norm in 5G modems, computer graphics systems, and other central processing units. ATE designers therefore face constantly increasing demands on throughput and complexity while assuring quality. The greatest challenge comes from the increasing number of transmitter/receiver channels, which require both DC parametric and high-speed digital testing. Not only does this increase the testing time, but it also increases the complexity of the load board while reducing test throughput. In turn, this drives up operational expenses and reduces the productivity of modern ATE environments.

One way of solving such ATE challenges requires a switch that operates not only under DC conditions but also at high frequencies. The new ADGM1001 MEMS switch from ADI passes true 0 Hz DC signals and also handles high-speed signals up to 64 Gbps. Therefore, testing with these new switches requires only one insertion on an efficient single test platform. It is possible to configure the test platform both for DC parametric testing and for high-speed digital communication standards.

High-volume manufacturing requiring HSIO or high-speed input-output testing is often a challenge. Testing strategies typically employ a high-speed test architecture as a common approach for validating HSIO interfaces. Such test equipment typically incorporates two test paths in one configuration—one for DC tests, and the other for high-speed tests.

Testers employ a few methods for performing both DC and high-speed tests on HSIOs or digital SoCs. They may use relays or MEMS switches, or they may use two different load boards, one for DC testing and the other for high-speed testing, but the two-board approach requires two insertions.

Using relays for both DC and high-speed testing can be challenging, primarily because relays cannot operate beyond about 8 GHz, forcing users to compromise on test coverage and signal speed. Moreover, relays take up large areas on PCBs on account of their size, and this makes the load boards rather large. Another concern with relays is their limited life and reliability. Relays typically last only about 10 million cycles, thereby limiting the lifetime and system uptime of the load board.
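
The cycle-life limit translates directly into load-board uptime. The production figures in this sketch are illustrative assumptions, not data from any actual test floor.

```python
# Rough estimate of how relay cycle life caps load-board uptime.
# Production figures are illustrative assumptions.

RELAY_LIFE_CYCLES = 10_000_000   # typical electromechanical relay rating

def board_life_days(cycles_per_device, devices_per_day,
                    relay_life=RELAY_LIFE_CYCLES):
    """Days until the most-exercised relay on the load board wears out."""
    return relay_life / (cycles_per_device * devices_per_day)

# 20 actuations per device under test, 50,000 devices tested per day:
print(f"{board_life_days(20, 50_000):.0f} days")   # 10 days
```

At high-volume throughput, a 10-million-cycle relay can exhaust its rated life in a matter of days, which is the maintenance burden a longer-lived MEMS switch avoids.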

With its superior density and small form factor, the 34 GHz MEMS switch from ADI offers both DC testing and high-speed digital testing capabilities, overcoming the above challenges.

Batteries and Supercapacitors

In the past, only mission-critical devices had them. Now, a wide range of electronic applications demands backup power solutions, including consumer, commercial, and industrial end-products. Of the several options available, supercapacitors offer a compact, power-dense solution, acting as energy reservoirs during interruptions of the main supply. Typically, this occurs during an outage of the mains power or while swapping out batteries.

Although they are versatile, supercapacitors present design challenges, because a single cell provides only about 2.7 VDC. Potentially, this means adding multiple supercapacitors, along with the necessary cell-balancing circuitry, plus step-up and step-down voltage converters, to supply regulated power to a rail operating at 5 VDC. The result is a nuanced and complex circuit, which not only takes up excessive board space but is also relatively expensive.

Comparing them with batteries explains why supercapacitors offer many technical advantages for compact, low-voltage electronic applications. Supercapacitors enable simple, elegant solutions for powering a 5 VDC rail using only a single capacitor in combination with a bidirectional buck/boost converter.
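
Sizing such a single-capacitor hold-up circuit comes down to the usable energy between the cell's full and cut-off voltages, E = 0.5 x C x (Vmax^2 - Vmin^2). The cell value, cut-off voltage, converter efficiency, and load in the sketch below are illustrative assumptions.

```python
# Hold-up sizing for a single-supercapacitor backup feeding a buck/boost
# converter. Cell value, cut-off, efficiency, and load are assumptions.

def holdup_seconds(cap_f, v_max, v_min, load_w, efficiency=0.9):
    """Backup time delivered to the load during an outage."""
    usable_j = 0.5 * cap_f * (v_max**2 - v_min**2)
    return usable_j * efficiency / load_w

# One 10 F, 2.7 V cell discharged down to 1.0 V into a 0.5 W load:
print(f"{holdup_seconds(10, 2.7, 1.0, 0.5):.0f} s")   # roughly a minute
```

Note that the boost stage is what makes the low cut-off voltage usable: without it, the 5 VDC rail would lose regulation as soon as the cell sagged below the rail voltage.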

Modern electronic devices often need uninterruptible power as a critical element to provide a satisfactory user experience. The absence of a constant power source can not only stop the electronic product from operating, but it can also lead to vital information loss as well. For instance, a personal computer operating from mains power will lose the information contained in its volatile RAM during a power outage. Similarly, important blood glucose readings in the volatile memory of an insulin pump may be lost while replacing its batteries.

It is possible to prevent this from happening by including a backup battery. Not only will the battery store energy, but it can also release it during the failure of the main source of power. Currently, devices typically use lithium-ion batteries, as these are mature technology, offering very good energy density. This allows relatively compact devices to offer considerable backup power for relatively extended periods.

Irrespective of their base chemistries, batteries exhibit problematic characteristics under specific circumstances. Not only are they relatively heavy, but they also take relatively long to recharge, which may be problematic in areas with frequent power outages. Moreover, the cells can be recharged only a limited number of times, increasing maintenance costs. In addition, batteries often contain chemicals that can introduce environmental and safety hazards.

The supercapacitor, or ultracapacitor, offers an alternative solution. Technically, the supercapacitor is an electric double-layer capacitor. Manufacturers construct supercapacitors using electrochemically stable, symmetric positive and negative carbon electrodes. An insulating, ion-permeable separator divides the electrodes, and the container is filled with an organic salt/solvent electrolyte.

Supercapacitor manufacturers design the electrolyte to maximize electrode wetting and ionic conductivity. The combination of the minuscule charge separation and the high surface area of the activated carbon electrodes results in the very high capacitance of the supercapacitor compared to that of regular capacitors.

The reliance on electrostatic mechanisms to store energy makes the electrical performance of supercapacitors more predictable than those of batteries.

Electronically Commutated Motors — Higher Efficiency

Restaurant owners have long been facing operational challenges, including high energy costs, limited kitchen space, and equipment downtime. To address these challenges and improve productivity, owners have turned to commercial kitchen equipment. Most such kitchen equipment has an electric motor at its heart, whose performance dramatically impacts how the equipment operates and how well it mitigates the above challenges.

It is imperative that owners increase their productivity while reducing their costs, considering their profit margin usually falls between three and five percent. This requires a clear understanding of the connection between the motor and the equipment. Doing so not only reduces the operating costs but also ensures a smoother running operation.

Energy costs happen to be a major concern in the restaurant industry. Commercial kitchen equipment is uncommonly hard on the electricity bill, being typically robust and energy-intensive. According to the US Energy Information Administration, energy consumption in restaurants is typically about three times higher per square foot than in comparable commercial buildings. This is because restaurants use specialized equipment with high power demand, and they operate for long hours, thereby consuming huge amounts of energy.

Therefore, purchasing and using high-efficiency, ENERGY STAR-rated restaurant equipment is one of the easiest ways to improve the bottom line. However, as a motor sits at the heart of each piece of equipment, the choice of motor offers further leverage. Restaurant operators can improve on this by taking a proactive approach and selecting equipment that has an electronically commutated motor, or ECM. They can even consider retrofitting existing equipment with ECMs as a favorable option.

The reason is that an ECM operates more efficiently than a traditional induction motor when running restaurant equipment such as ovens, walk-in coolers, mixers, and fryers. Depending on the use cycle, equipment with ECM technology can save more than 30% in annual energy costs. This improves bottom-line savings and the profitability of a restaurant.
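
The bottom-line impact of a 30% saving is easy to estimate. The motor size, duty hours, and electricity tariff in this sketch are illustrative assumptions, not measured restaurant data.

```python
# Illustrative arithmetic for the ~30% energy-saving claim. Motor size,
# duty hours, and tariff are assumptions, not measured data.

def annual_savings_usd(motor_kw, hours_per_day, rate_usd_per_kwh,
                       saving_fraction=0.30):
    """Annual cost saved by an ECM versus a comparable induction motor."""
    annual_kwh = motor_kw * hours_per_day * 365
    return annual_kwh * rate_usd_per_kwh * saving_fraction

# A 0.75 kW evaporator fan running 18 h/day at $0.15/kWh:
print(f"${annual_savings_usd(0.75, 18, 0.15):.0f} per year")   # about $220
```

For a single fan motor the figure looks modest, but a kitchen running a dozen such motors around the clock multiplies it accordingly, which matters at three-to-five-percent margins.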

A microprocessor and electronic control help to run an ECM. Compared to regular induction motors, this arrangement offers higher electrical efficiency. It also offers the possibility of programming the precise speed of the motor. Moreover, ECMs can maintain high efficiency across a wide range of operational speeds.

Apart from the higher efficiency, ECMs are precise and offer variable speeds, which in fans means an unlimited selection of airflow. A properly maintained airflow during changes in the static air pressure brings important benefits to the restaurant, especially for its hood exhausts and walk-in coolers. The higher efficiency of ECMs leads to reduced heat in the refrigerated space, thereby reducing the equipment runtime.

Forward-thinking original equipment manufacturers are re-engineering their designs and products to include ECMs, delivering smaller and more versatile equipment. Compact motors such as ECMs are gaining wider recognition and appreciation as they improve the power density of their equipment. Compared to equipment with traditional induction motors, equipment using ECMs offers the same output with a much smaller footprint and lower weight.

Industrial Automation with Single-Pair Ethernet

Efficiency is the fundamental concern for the successful implementation of any factory automation solution. This calls for control and power components that consume the least possible amount of energy over their lifetime. However, to actually realize those savings, the system must be installed properly.

This is where the advantages of SPE, or Single Pair Ethernet, technology really stand out. The technology transfers power and data over the same thin-wire cable. Not only does this save installation costs up front, but the system also costs much less to maintain and upgrade over time. Phoenix Contact offers its ONEPAIR series for standardized SPE solutions. The ONEPAIR series has two main types of connectors, each serving a specific application.

In numerous industries and fields, the IP20 connectors and patch cables enable effective data transmission. This includes building and factory automation, where the 10BASE-T1L variant of SPE carries data over distances of up to 1000 meters, while shorter links support gigabit speeds.

The other is the M8 device connectors, rated at IP67. They can transmit power and data safely and quickly from the OT to the IT. This is a new standard in compact connections, which can withstand harsh environments.

SPE, or Single Pair Ethernet, is high-performance, parallel transmission of power and data via Ethernet over a single pair of wires. The technology typically carries data and power through PoDL, or Power over Data Line, starting from the sensor and carrying through right up to the cloud. Barrier-free networking of a wide range of connectors, cables, and components requires connectors with standardized pin patterns. For this, Phoenix Contact offers standard connectors ranging from IP20 to IP6x.

Apart from being ideally suited for a wide range of applications, the SPE is the basis for all Ethernet-based communication. Not only does it enable smart device communication, but it also opens up newer fields of application. SPE has great transmission properties, can span long distances, and optimally supports future-proof network communications. With a trend for miniaturized, resource-conserving devices, SPE offers space-saving cables and electronics.

SPE brings many benefits to its users. Across its variants, it provides transmission speeds of up to 10 Gbps over a single pair of wires. This helps to reduce data cabling while avoiding media breaks and device failures from the field to the cloud. The user has the freedom to establish networking on a consistent Ethernet base, eliminating the need for gateways. With SPE, cabling is easier and saves time, as the user needs to guide and connect only two wires. With the 10BASE-T1L standard, Ethernet cabling can span ranges of up to 1000 meters.

The IEEE 802.3 defines the SPE standards. Presently, there are five standards for different transmission speeds and distances. Further standards are under discussion. The IP20 compact male connector series from Phoenix Contact are in accordance with IEC 63171-2 and are ideally suited for building and control cabinet cabling. The M8 or IP67 contacts from Phoenix Contact are in accordance with IEC 63171-5, providing robust and industrial-grade connections.