Category Archives: Guides

Parylene Conformal Coatings for Electronics

Conformal coating on electronic assemblies protects sensitive components and copper tracks on circuit boards from the vagaries of the environment. Typical conformal coatings are epoxy-based and require a relatively thick layer to be effective. Parylene conformal coatings, on the other hand, can be ultra-thin and pinhole-free because the polymer is deposited from the vapor phase rather than applied as a liquid.

Parylene conformal coatings offer a number of high-value surface treatment properties. These include resistance to moisture and chemical ingress and an effective dielectric barrier. In addition, Parylene offers excellent thermal stability, dry-film lubricity, and UV stability, all essential for electronic subsystems. These properties make Parylene conformal coatings an ideal choice for applications in consumer electronics, medical electronics, and the transportation, defense, and aerospace industries.

Parylene is the generic name manufacturers use for a unique series of polymers. Each member of the Parylene family offers its own, somewhat different set of coating properties. The commercially available variants are Parylene N, C, D, HT, and ParyFree.

Parylene N, or poly(para-xylylene), is the basic member of the series. It is a totally linear and highly crystalline material. As a primary dielectric, Parylene N exhibits a very low dissipation factor and high dielectric strength. It shows an exceptionally low dielectric constant that varies very little with frequency. It also exhibits substantially high crevice-penetrating ability, second only to Parylene HT.

The second commercially available member of the Parylene series is Parylene C, derived from the same raw material as Parylene N. The only difference from Parylene N is the substitution of a chlorine atom for an aromatic hydrogen atom. The useful combination of physical and electrical properties, in addition to very low permeability to corrosive gases and moisture, makes Parylene C useful as a conformal coating.

The third member of the Parylene series is Parylene D, also a derivative of the same raw material that produces Parylene N. The substitution of chlorine atoms for two aromatic hydrogen atoms differentiates Parylene D from Parylene N. Most properties of Parylene D are similar to those of Parylene C. However, Parylene D has the added ability to withstand slightly higher use temperatures.

Another commercially available variant is Parylene HT. It differs from the other family members in the replacement of the alpha hydrogen atoms of the N dimer with fluorine. Parylene HT can withstand temperatures of up to 450 °C, making it suitable for high-temperature applications. It also has excellent long-term UV stability, a low coefficient of friction, and a low dielectric constant. Of the variants described above, Parylene HT shows the highest crevice-penetrating ability.

The newest, and a unique, member of the family is ParyFree. It differs from the Parylene N dimer in the replacement of one or more hydrogen atoms with a non-halogenated substituent. Compared to other commercially available Parylenes, this halogen-free variant offers the advanced barrier properties of Parylene C along with substantially improved mechanical and electrical properties. This allows ParyFree to offer robust protection against water, moisture, corrosive solvents, and gases, as required by select industries.

Are We Ready for 6G?

More than simply an evolution of 5G technology, 6G is a transformation of cellular technology. Just as 4G introduced us to the mobile Internet and 5G expanded cellular communications beyond the customary cell phone, 6G will take mobile communications to new heights, beyond the traditional devices and applications of cellular communication.

6G devices will operate at sub-terahertz (sub-THz) frequencies with wide bandwidths. That means 6G opens up the possibility of transferring massive amounts of information compared to what 4G and even 5G handle. These frequencies and bandwidths will enable applications such as immersive holograms, VR (Virtual Reality), and AR (Augmented Reality).

However, working at sub-THz frequencies requires new research into and understanding of material properties, antennas, and semiconductors, along with newer DSP (Digital Signal Processing) technologies. Researchers are working with materials like SiGe (Silicon Germanium) and InP (Indium Phosphide) to develop highly integrated, high-power devices. Many commercial entities, universities, and defense organizations have been researching these compound semiconductor technologies for years. Their goal is to push the upper limits of frequency and performance in areas like linearity and noise. The industry must understand system performance before it can commercialize these materials for use in 6G systems.

As the demand for higher data rates increases, the industry moves towards higher frequencies, where larger tranches of bandwidth are available. This has been a continuous trend across all generations of cellular technology. For instance, 5G has expanded into bands between 24 and 71 GHz, and commercial systems are already using bands from FR2 (Frequency Range 2). 6G research is likely to take the same path, with the demand for high data rates at the root of the trend.

6G devices working at sub-THz frequencies must generate adequate power to overcome higher propagation losses, within the limits of the semiconductor technology. Their antenna design must integrate with both the receiver and the transmitter. The receiver design must offer the lowest possible noise figures, and modulation must remain high-fidelity across the entire available band. Digital signal processing must be fast enough to accommodate high data rates across wide swathes of bandwidth.

While focusing on the above aspects, it is also necessary to overcome the physical barriers of material properties while reducing noise in the system. This requires the development of newer technologies that not only work at high frequencies but also provide digitization, test, and measurement at those frequencies. For instance, research on sub-THz systems requires wide-bandwidth test instruments.

A working 6G system may require characterization of the channel through which its signals propagate, because the sub-THz region for 6G introduces novel frequency bands for communications. Such channel-sounding characterization is necessary to create a mathematical model of the radio channel that can encompass reflectors in the environment, such as buildings, cars, and people. This helps in designing the rest of the transceiver technology, including the modulation and encoding schemes for forward error correction and for overcoming channel variations.
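
To illustrate why propagation losses climb at these frequencies, here is a minimal sketch that computes free-space path loss for a few carriers, including a hypothetical 140 GHz sub-THz band. The frequencies and the 100 m link distance are illustrative assumptions, not values from any 6G specification.

import math

def fspl_db(freq_hz, distance_m):
    # Free-space path loss in dB: 20 * log10(4 * pi * d * f / c)
    c = 3.0e8  # speed of light in m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

for f_ghz in (3.5, 28, 140):  # sub-6 GHz, 5G mmWave, a candidate sub-THz band
    print(f"{f_ghz:6.1f} GHz over 100 m: {fspl_db(f_ghz * 1e9, 100):.1f} dB")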

Why are VFDs Popular?

The industrial space witnesses many innovations today, made possible by affordable and readily available semiconductors of various types. One of the most popular innovations is the VFD, or variable frequency drive.

Earlier, a prime mover ran at a fixed speed, and controlling its speed required expensive, inefficient equipment. With the advent of VFDs, an easy, efficient, cost-effective, and low-maintenance method of controlling the speed of the prime mover became available. This control not only increases the efficiency of the equipment's operation but also improves automation.

OEMs typically use VFDs for small and mobile equipment that only needs to be plugged into a commercial single-phase outlet when no three-phase power supply is available. These can be hose crimpers, mobile pumping units, lifts, fans and blowers, actuator-driven devices, or any other application that uses a motor as the prime mover. Using a VFD to vary the motor's speed can improve the operation of the equipment. Apart from the benefits of variable speed, OEMs also use VFDs for their ability to take a single-phase power source and output a three-phase supply to run the motor.
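
The speed control itself rests on a simple relationship: the synchronous speed of an induction motor scales with the supply frequency, and a drive typically holds a roughly constant volts-per-hertz ratio to preserve motor flux. The short sketch below illustrates this; the 4-pole, 230 V, 60 Hz motor rating is an assumption for illustration.

def synchronous_speed_rpm(freq_hz, poles):
    # Synchronous speed of an AC induction motor: 120 * f / number of poles
    return 120 * freq_hz / poles

V_PER_HZ = 230 / 60  # constant volts-per-hertz ratio from the assumed 230 V / 60 Hz rating

for f in (20, 40, 60):
    rpm = synchronous_speed_rpm(f, poles=4)
    print(f"{f} Hz -> {rpm:.0f} RPM synchronous speed at about {V_PER_HZ * f:.0f} V")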

Although the above may not seem like much, the value addition is tremendous, especially for the production of small-batch items. Because VFDs output three-phase power, they can drive standard three-phase induction motors, which are both widely available and cost-effective. VFDs also offer current control, which not only improves motor control but also helps avoid the inrush currents that are typical when starting induction motors.

For instance, a standard duplex 120 V, 15 A power source can safely operate a 0.75 HP motor without tripping. A VFD operating from the same power source, however, can comfortably run a 1.5 HP motor. In such situations, using a VFD to double the prime-mover power has obvious benefits for the capacity or functionality of the application.
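
The rough arithmetic behind such a claim can be sketched as follows; the motor efficiency, power factor, and inrush multiple used here are assumed values, not figures from any specific drive or motor.

HP_TO_W = 746  # watts per horsepower

outlet_va = 120 * 15  # roughly 1800 VA available from the outlet
for hp in (0.75, 1.5):
    shaft_w = hp * HP_TO_W
    input_va = shaft_w / (0.85 * 0.80)  # assumed 85% motor efficiency, 0.80 power factor
    print(f"{hp} HP: ~{shaft_w:.0f} W shaft power, ~{input_va:.0f} VA input "
          f"(fits within {outlet_va} VA; a VFD ramps the motor up instead of "
          f"allowing a direct-on-line inrush of several times running current)")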

These benefits make VFDs an ideal method of controlling motors for small OEM applications. VFD manufacturers recognize this and are adding features to augment them, such as configurable and additional inputs and outputs, basic logic controls, and integrated motion-control programming platforms. This is making VFDs an ideal platform for operating equipment and controlling motor speed, eliminating any requirement for onboard microcontrollers.

However, despite their benefits, VFDs also have some limitations. OEMs typically face problems when using GFCI (ground fault circuit interrupter) breakers with VFDs. A GFCI monitors the imbalance between the line and neutral currents, which indicates current leaking to ground; such leakage currents can electrocute users.

A VFD includes an inverter stage that switches at high frequencies. Harmonics from this stage can create ground currents, also known as common-mode noise. The three-phase waveforms generated by the inverter do not always sum to zero (as they do in a regular three-phase power source), leading to a potential difference that drives capacitively coupled currents. When these currents seek a path to ground, they can trip a GFCI device. This can be minimized by lowering the switching frequency.

Difference Between FPGA and Microcontroller

Field Programmable Gate Arrays, or FPGAs, do share some similarities with microcontrollers. However, the two are different. Both are integrated circuits found in many products and devices, but there is a distinct difference between them.

It is possible to program both FPGAs and microcontrollers to perform specific tasks, but they are useful in different applications. An FPGA lets users configure its hardware directly, whereas a microcontroller's hardware is fixed and only its software can be programmed. Another difference is that FPGAs can handle multiple inputs in parallel, while microcontrollers execute one instruction at a time.

As FPGAs enable a higher level of customization, they are more expensive and more difficult to program. Microcontrollers, on the other hand, being small and cost-effective, are easier to work with. Knowing the differences and similarities between the two helps in making an informed decision about which to use for a project.

A microcontroller is typically an integrated circuit that functions like a small computer, comprising a CPU (central processing unit), some amount of random access memory (RAM), and some input/output peripherals. Unlike a desktop computer, however, a microcontroller cannot run numerous programs. Being a special-purpose device, it executes only one program at a time.

It is possible to make a microcontroller perform a single function repeatedly or on user request. Typically embedded along with other devices, microcontrollers can be part of almost any type of appliance. Moreover, these small computers operate at very low energy levels, most consuming currents of a few milliamperes at typically 5 VDC or lower. When produced in large quantities, microcontrollers are very affordable, although the appliances in which they are embedded vary in cost.

On the other hand, an FPGA is a much more complicated device than a microcontroller. Users cannot change the physical hardware inside the chip, but by changing its configuration, they can change what that hardware does. Embedded within a device, an FPGA therefore allows altering the device's hardware functionality without physically adding or removing anything.

An FPGA is typically an integrated circuit containing an array of programmable logic blocks. A new FPGA is not configured for any particular function; users configure it according to their application, and they can reconfigure it as many times as necessary. Configuring an FPGA requires a Hardware Description Language (HDL) such as Verilog or VHDL.

A modern FPGA features many logic gates and RAM blocks to enable it to execute complex computations. Components in an FPGA may include complete memory blocks in addition to simple flip-flops.

Both FPGAs and microcontrollers serve similar basic functions. Manufacturers develop these items such that users can decide their functionality when designing the application. Both integrated circuits have a similar appearance and are versatile, and users can apply them for various applications.

Efficiency and Performance of Edge Artificial Intelligence

Artificial Intelligence, or AI, is a very common phrase nowadays. We encounter AI in smart home systems, in the intelligent machines we operate, in the cars we drive, and even on the factory floor, where machines learn from their environment and can eventually operate with as little human intervention as possible. For this to be possible, computing technology had to develop to the point where it could be decentralized to the place in the network where the data is generated, typically known as the edge.

Edge artificial intelligence, or edge AI, makes it possible to process data with low latency and at low power. This is essential, as the huge array of sensors and smart components forming the building blocks of modern intelligent systems typically generates copious amounts of data.

This makes it imperative to measure the performance of an edge AI deployment to optimize its advantages. Gauging the performance of an edge AI model requires specific benchmarks based on standardized tests. However, there are nuances in edge AI applications, as the application itself often influences the configuration and design of the processor. Such distinctions often prevent the use of generalized performance parameters.

In contrast with data centers, a multitude of factors constrain the deployment of edge AI. The primary ones are physical size and power consumption. For instance, the automotive sector is witnessing a huge increase in electric vehicles carrying a host of sensors and processors for autonomous driving. Manufacturers must implement them within the limited capacity of the vehicle's battery. In such cases, power-efficiency parameters take precedence.

In another application, such as home automation, the dominant constraint is the physical size of the components. The design of AI chips must therefore treat these restrictions as guidelines, with the corresponding benchmarks reflecting adherence to them.

Apart from power consumption and size constraints, how the machine learning model is deployed also determines the demands on the processor, and this imposes specific requirements when analyzing its performance. For instance, benchmarks for a chip in a factory using IoT for object detection will differ from those for a chip performing speech recognition. Estimating edge AI performance therefore requires developing specific benchmarking parameters that reflect real-world use cases.

For instance, in a typical modern automotive application, sensors like cameras and LiDAR generate the data that the AI model must process. In a single consumer vehicle fitted with an autonomous driving system, this can easily amount to two to three terabytes of data per week. The AI model must process this huge amount of data in real time and provide outputs like street sign detection, pedestrian detection, vehicle detection, and so on. The volume of data the sensors produce depends on the complexity of the autonomous driving system and, in turn, determines the size and processing power of the AI core. The power consumption of the onboard AI system depends on the quality of the model and the manner in which it pre-processes the data.
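
A back-of-envelope sketch shows how a sensor suite can reach figures of that order; the sensor data rates and weekly driving hours below are illustrative assumptions rather than measurements from any particular vehicle.

sensor_mb_per_s = {
    "cameras (several, compressed)": 30.0,
    "lidar": 15.0,
    "radar, ultrasonic, GNSS/IMU": 5.0,
}

driving_hours_per_week = 12  # assumed weekly duty cycle
total_mb_per_s = sum(sensor_mb_per_s.values())
tb_per_week = total_mb_per_s * 3600 * driving_hours_per_week / 1e6

print(f"~{total_mb_per_s:.0f} MB/s while driving -> "
      f"~{tb_per_week:.1f} TB per week at {driving_hours_per_week} h of driving")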

Differences between USB-PD and USB-C

With all the electronic devices we handle every day, it is a pain to manage an equally large number of cables for charging them and transferring data. So far, a single standard connector to rule all the gadgets has proven elusive. A format war opens up, one faction emerges victorious for a few years, and then a newer technology overtakes it. For instance, VHS overtook Betamax, DVD then ousted VHS, Blu-ray overtook DVD, and Blu-ray itself is now hardly visible under the onslaught of online streaming services.

As its full name, Universal Serial Bus, suggests, USB-C has proven to be different and possibly even truly universal. USB-C ports are now part of almost every kind of device, from simple Bluetooth speakers to external hard drives, high-end laptops, and ubiquitous smartphones. Although all USB-C ports look alike, they do not all offer the same capabilities.

USB-C, being an industry-standard connector, can transmit both power and data over a single cable. It is broadly accepted by the big players in the industry, and PC manufacturers have readily adopted it.

USB-PD, or USB Power Delivery, is a specification that allows the load to program the output voltage of a power supply. Combined with the USB-C connector, USB-PD is a revolutionary concept: devices can transmit both data and power while the adapter adjusts to the power requirements of the device to which it connects.

With USB-PD, it is possible to charge and power multiple devices, such as smartphones and tablets, with each device drawing only the power it requires.

However, USB-C and USB-PD are two different standards. The USB-C standard is essentially a description of the physical connector, and using the USB-C connector does not imply that the adapter has USB-PD capability. Anyone can choose to use a USB-C connector in their design without conforming to USB-PD. Even so, a USB-C connector lets the user transfer data and substantial power (up to 240 W with USB-PD) over the same cable. In addition, the USB-C connector is symmetrical and self-aligning, which makes it easy to insert and use.

Earlier USB power standards were limited, as they could not provide multiple levels of power for different devices. Using the USB-PD specifications, the device and the power supply can negotiate for optimum power delivery. How does that work?

First, each device starts with an initial power level of up to 10 W at 5 VDC. From this point, power negotiations start. Depending on the needs of the load, the device can transfer power up to 240 W.

The USB-PD negotiation uses fixed voltage steps starting at 5 VDC, then 9 VDC, 15 VDC, and 20 VDC, with the Extended Power Range of the USB-PD 3.1 specification adding 28, 36, and 48 VDC. By adjusting both the voltage step and the current, the supported power output ranges from 0.5 W up to 240 W.
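
A highly simplified sketch of the idea is shown below: a sink picks the lowest advertised fixed-voltage offer that satisfies its power need. The offers and the 60 W requirement are illustrative assumptions, and the real protocol exchanges structured capability and request messages rather than scanning a simple list.

source_offers = [  # (voltage in V, maximum current in A) advertised by the source
    (5, 3.0),
    (9, 3.0),
    (15, 3.0),
    (20, 5.0),
]

def pick_offer(offers, required_w):
    # Choose the lowest-voltage fixed offer that can meet the required power
    for volts, amps in sorted(offers):
        if volts * amps >= required_w:
            return volts, amps
    return None  # no advertised offer is sufficient

print(pick_offer(source_offers, required_w=60))  # -> (20, 5.0), a 100 W capable offer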

With USB-PD, it is possible to handle higher power levels at the output, as it allows a device to negotiate the power levels it requires. Therefore, USB power adapters can power more than one device at optimum levels, allowing them to achieve faster charge times.

Importance of Vibration Analysis in Maintenance

For those engaged in maintenance practices, it is necessary to ensure the decision to replace or repair comes well before key components fail completely. Vibration analysis is the easiest way to mitigate this risk.

With vibration analysis, it is possible to detect early signs of machine deterioration or failure. This allows timely replacement or repair of machinery before any catastrophic or system-wide functional failure can occur.

According to the laws of physics, all rotating machinery vibrates. As components begin to deteriorate or reach the end of their serviceable life, they begin to vibrate differently, and some may even begin to vibrate more strongly.

This is what makes vibration analysis so important in monitoring equipment. Using vibration analysis, it is possible to identify many known modes of failure that indicate wear and tear. It is also possible to assess the extent of impending damage before it becomes irreparable and impacts the business or its finances.

Therefore, vibration monitoring and analysis can detect machine problems like process flow issues, electrical issues, loose fasteners, loose mounts, loose bolts, component or machine imbalance, bent shafts, gear defects, impeller operational issues, bearing wear, misalignment, and many more.

In industry, vibration analysis helps avoid serious equipment failure. Modern vibration analysis offers a comprehensive snapshot of the health of a specific machine. Modern vibration analyzers can display the complete frequency spectrum of the vibration with respect to time for all three axes simultaneously.
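
At its core, this is a transformation of the time-domain waveform into a frequency spectrum, where fault frequencies stand out as peaks. The minimal sketch below illustrates the idea on a synthetic signal; the 29.5 Hz running-speed tone and 120 Hz fault tone are assumed values for illustration.

import numpy as np

fs = 2048  # sample rate in Hz
t = np.arange(0, 2.0, 1 / fs)  # two seconds of accelerometer data
signal = (1.0 * np.sin(2 * np.pi * 29.5 * t)     # 1x running-speed component
          + 0.3 * np.sin(2 * np.pi * 120.0 * t)  # suspected fault frequency
          + 0.05 * np.random.randn(t.size))      # broadband noise

spectrum = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

for f, a in zip(freqs, spectrum):
    if a > 0.05:  # report only the dominant spectral peaks
        print(f"peak near {f:.1f} Hz, amplitude about {2 * a:.2f}")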

However, to interpret this information properly, the person analyzing it must understand the basics of the analysis, the failure modes of the machine, and the application.

For this, it is necessary to gather complete information: a full vibration signature on all three axes, axial, vertical, and horizontal, not only for the driven equipment but also for both ends of the driver motor. It is also necessary to ensure that all indications of failure can be resolved from the dataset.

Furthermore, busy personnel may take a reading on only one axis. This can be problematic, as the problem may exist in any one of the three axes; unless all three axes are tested, there is a good chance of missing the issue. Comprehensive and careful analysis of the time waveform can reveal several concerns.

This also makes it possible, and easier, to predict issues and carry out beneficial predictive maintenance successfully. At the same time, reactive maintenance remains immensely important in the industry. This is the run-till-failure approach: in most cases, the concern is fixed after it happens.

To make reactive maintenance as effective as possible in the long run, monitoring and vibration analysis are essential. They help ensure problems are detected at the onset of failure, which makes fixing the issue cheaper, easier, and faster.

On the other hand, there is the completely opposite approach of predictive maintenance, which involves monitoring the machinery while it is operating in order to predict which parts are likely to fail. Vibration analysis is a clear winner here as well.

What is a Reed Relay?

A reed relay is basically a combination of a reed switch and a coil for creating a magnetic field. Users often add a diode to handle the back EMF from the coil, but this is optional. The entire arrangement is a very low-cost and simple device to manufacture.

The most complex part of the reed relay is the reed switch. As the name suggests, the switch has two reed-shaped metal blades made of a ferromagnetic material. A glass envelope encloses the two blades, holding them in place facing each other and providing a hermetic seal that prevents the entry of contaminants. Typically, reed switches have normally open contacts, meaning the two metal blades do not touch when not energized.

A magnetic field applied along the axis of the reed switch magnetizes the reeds, attracting them to each other so that they bend to close the gap. If the applied field is strong enough, the blades touch, forming an electrical contact.

The only movement within the reed switch is the bending of the blades. The switch has no parts that slide past one another and no pivot points, so it is safe to say it has no moving parts that can wear out mechanically. Moreover, an inert gas surrounds the contact area within the hermetically sealed glass tube; for high-voltage switches, a vacuum replaces the inert gas. With the switch area sealed against external contaminants, the reed switch has an exceptionally long working life.

The size of a reed switch is a design variable. In longer switches, the reeds need to deflect less to close a given gap between the blades than in shorter switches. To make the reeds in more miniature switches bend more easily, they must be made of thinner material, which impacts the switch's current rating. However, small switches allow for more miniature reed relays, which are useful in tight spaces. Larger switches, on the other hand, are mechanically more robust, can carry higher currents, and have a greater contact area (lower contact resistance).

A magnetic field of adequate strength is necessary to operate a reed relay. It is possible to operate the switch by bringing a permanent magnet close to it, but in practice a coil surrounding the reed switch typically generates the magnetic field. A control signal forces a current through the coil, which creates the axial magnetic field necessary to close the reed contacts.

Different models of reed switches need different levels of magnetic field to operate and close their contacts. Manufacturers specify this in ampere-turns (AT), the product of the current flow and the number of turns in the coil. There is therefore a huge variation in the characteristics of the reed relays available. Stiffer reed switches and those with larger contact gaps need higher AT levels to operate, so their coils require more drive voltage or power.
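
As a rough sketch, the AT rating translates into coil current, drive voltage, and power as follows; the coil turns, coil resistance, and AT ratings used here are illustrative assumptions.

def coil_drive(must_operate_at, coil_turns, coil_ohms):
    # Ampere-turns = coil current x number of turns
    current_a = must_operate_at / coil_turns
    voltage_v = current_a * coil_ohms
    power_w = voltage_v * current_a
    return current_a, voltage_v, power_w

for at_rating in (15, 40):  # a sensitive switch versus a stiffer, larger-gap switch
    i, v, p = coil_drive(at_rating, coil_turns=5000, coil_ohms=500)
    print(f"{at_rating} AT: {i * 1000:.1f} mA, {v:.1f} V, {p * 1000:.1f} mW")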

LDOs for Portables and Wearables

As electronic devices shrink in form factor, they are becoming more portable and relying more on battery power. Such devices include security systems, fitness trackers, and Internet of Things (IoT) devices. Designing these tiny devices demands high-efficiency power regulators that can use every milliwatt of each charge to extend the working life of the device. Traditional linear regulators fall woefully short of these efficiency requirements, while the transient voltages and noise of switch-mode power regulators are detrimental to performance.

A refinement of the linear regulator is the LDO, or low-dropout voltage regulator. It lowers thermal dissipation and improves efficiency by operating with a very low voltage drop across the regulator. Various types of LDOs serve low-to-medium power applications well, as they are available in minuscule packages of 3 x 3 x 0.6 mm. In addition, there are LDOs with fixed or adjustable output voltages, including versions with on-off control of the output.

A voltage regulator must maintain a constant output voltage even when the source voltage or the load changes. Traditional voltage regulators operate in one of two ways: linear or switched mode. LDO regulators are linear regulators that operate with a very low voltage difference between their input and output terminals. As with other linear voltage regulators, LDOs function with feedback control.

This feedback control works via a resistive voltage divider that scales the output voltage. The scaled voltage enters an error amplifier, which compares it to a reference voltage. The output of the error amplifier drives the series pass element to hold the output terminal at the desired voltage. The dropout voltage of the LDO is the minimum difference between the input and output voltages, and this difference appears across the series pass element.

The series pass element of an LDO functions like a resistor whose value varies with the voltage applied by the error amplifier. LDO manufacturers use various devices for the series pass element: a PMOS device, an NMOS device, or a PNP bipolar transistor. While it is possible to drive the PMOS and PNP devices into saturation, the dropout voltage of PMOS-type devices depends on their drain-to-source on-resistance. Each of these devices has its own advantages and disadvantages, but PMOS pass elements offer the lowest implementation cost. For instance, positive LDO regulators from Diodes Incorporated with PMOS pass devices feature dropout voltages of about 300 mV at an output voltage of 3.3 V and a load current of 1 A.
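
A short worked sketch ties these numbers together, combining the feedback-divider relationship with the dissipation implied by roughly 300 mV of dropout at 3.3 V and 1 A. The reference voltage, divider values, and 3.6 V input are assumptions for illustration, not values from any particular part.

v_ref = 0.8  # assumed internal reference voltage
r_top, r_bottom = 62_000, 20_000  # assumed feedback divider resistors, in ohms

v_out = v_ref * (1 + r_top / r_bottom)  # about 3.28 V
v_in, i_load = 3.6, 1.0  # input just above dropout, 1 A load
p_pass = (v_in - v_out) * i_load  # heat dissipated in the series pass element
efficiency = v_out / v_in  # upper bound on linear-regulator efficiency

print(f"Vout ~ {v_out:.2f} V, pass-element dissipation ~ {p_pass:.2f} W, "
      f"efficiency ~ {efficiency:.0%}")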

The LDO must have an output capacitor, and the capacitor's inherent ESR (equivalent series resistance) affects the stability of the circuit. The capacitor used must have an ESR of 10 ohms or lower to guarantee stability over the entire operating temperature range. Typically, these capacitors are multilayer ceramic, solid-state E-CAP, or tantalum types, with values upwards of 2.2 µF.

Thermal Interface Materials for Electronics

As the name suggests, TIMs, or Thermal Interface Materials, are materials the electronics industry typically applies between two mating surfaces to help conduct heat from one surface to the other. TIMs are a great help in thermal management, especially when removing heat from a semiconductor device to a heat sink. By acting as a filler material between the two mating surfaces, TIMs improve the efficiency of the thermal management system.

Various types of materials can act as TIMs, and there are important factors that designers must consider when selecting a specific material as the TIM for a particular application.

Every conductor has its own resistance, which impedes the flow of electrical current through it. Impressing a voltage across a conductor sets the free electrons inside it in motion. These moving electrons collide with other atomic particles within the conductor, in an effect akin to friction, thereby generating thermal energy or heat.

In electronic circuits, active devices and processing units like CPUs, TPUs, GPUs, and light-emitting diodes (LEDs) generate copious amounts of heat when operating. Passive devices like resistors and transformers also release significant amounts of thermal energy. Rising heat in components can lead to thermal runaway, ultimately resulting in their failure or destruction.

Therefore, it is desirable to keep electronic components cool when operating, thereby ensuring better performance and reliability. This calls for thermal management to maintain the temperature of the device within its specified limits.

It is possible to use both passive and active cooling techniques for electronic components. It is typical for passive cooling methods to use natural conduction, convection, or radiation techniques for cooling down electronic devices. Active cooling methods, on the other hand, typically require the use of external energy for cooling down components or electronic devices.

Although active cooling can be more effective in comparison to passive cooling, it is more expensive to deploy. Using TIMs is an intermediate method to enhance the efficiency of passive cooling techniques, but without excessive expense.

Although the mating surfaces of a component and its heat sink may appear flat, in reality they are not. They typically have tool marks and other imperfections such as pits and scratches. These imperfections prevent the two surfaces from forming close physical contact, so air fills the gaps between them. Air, being a poor conductor of heat, introduces a high thermal resistance between the interfacing surfaces.

TIMs, being soft materials, fill most of these gaps between the mating surfaces, expelling the air from between them. In addition, TIMs conduct heat far better than air, typically around 100 times better, so their use considerably improves the thermal management system. Many industrial and consumer electronic systems therefore use TIMs widely to ensure efficient heat dissipation and prevent electronic components from getting too hot.
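
A minimal sketch of the underlying arithmetic, using the conductive thermal resistance of a uniform layer, shows why displacing the air matters; the gap thickness, contact area, conductivities, and 20 W load are illustrative assumptions.

def thermal_resistance(thickness_m, conductivity_w_mk, area_m2):
    # Conductive thermal resistance of a uniform layer: thickness / (k * area)
    return thickness_m / (conductivity_w_mk * area_m2)

gap_m, area_m2, power_w = 50e-6, 0.0004, 20.0  # 50 um gap, 20 mm x 20 mm contact, 20 W load

for name, k in (("trapped air", 0.026), ("typical TIM", 3.0)):
    theta = thermal_resistance(gap_m, k, area_m2)
    print(f"{name}: {theta:.2f} C/W -> about {theta * power_w:.1f} C rise at {power_w:.0f} W")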

The electronics industry uses TIMs in different forms. These can be thermal tapes, greases, gels, thermal adhesives, dielectric pads, or phase-change materials (PCMs). The industry also uses more advanced materials such as pyrolytic graphite, which is thermally anisotropic.