Monthly Archives: June 2023

MEMS Replacing Quartz?

The automotive market is transforming rapidly. Next-generation technologies are already here: semi-autonomous cars, advanced driver assistance systems (ADAS), and an array of electric vehicle options featuring smart mirrors, backup cameras, voice recognition, smartphone integration, telematics, and keyless entry and start. Some of the latest models offer lane-keep assist, automated parallel parking, and other self-driving capabilities as vehicles move steadily toward fully autonomous driving.

All of this has required a redefinition of automotive design, spanning the infotainment, convenience, and safety features that users of smart, connected cars expect. Automotive is the fastest-growing market segment in the semiconductor field, and the key drivers of this growth are electronic components for ADAS and other EV applications. Consider that an average car has about 1,500 semiconductors controlling everything from the drivetrain to the safety systems.

However, apart from sensing, processing, and communication chips, there is another critical technology contributing to the reliable, safe operation of autonomous systems, and that is precision timing.

Most car owners understand automotive timing as the timing that belts, camshafts, or ignition systems keep for the engine to run efficiently and smoothly. For automotive systems developers, however, timing means devices such as oscillators, resonators, and clock buffers that provide clock signals to digital components. Each timing device in the vehicle has a different but essential clocking function, ensuring stable, accurate, and reliable frequency control. This precision timing is especially important for complex modern automotive systems like ADAS that generate, process, and transmit huge volumes of data.

As a result, a modern car may use up to 70 timing devices to keep its systems operating smoothly, and as vehicles get smarter with each new model, that number keeps growing. An automotive design contains a wide array of digital systems that require precise, reliable timing references from clock generators and oscillators. These devices provide the essential timing functions for in-vehicle networks, infotainment, and other subsystems, as well as for electronic control units such as those running ADAS.

Despite the accelerating pace of automotive innovation, one component has remained constant for the past 70 years: the quartz-based timing device, or quartz crystal oscillator. In the automotive environment, however, quartz crystals face fundamental limitations, notably fragility stemming from their susceptibility to environmental and mechanical stresses. Because of these inherent drawbacks, quartz timing devices are becoming a bottleneck for safety and reliability.

MEMS timing components, on the other hand, can readily meet the rigors of AEC-Q100 automotive qualification. MEMS is a well-established technology, widely used in many fields, including automotive systems, where MEMS devices already serve as gyroscopes, accelerometers, and a wide variety of other sensors.

AEC-Q100 qualification of MEMS devices offers assurance that these timing components will provide the robustness, reliability, and performance that automotive electronic systems demand.

Stringent testing has shown silicon-based MEMS technology to be more reliable than quartz crystals in clocking applications. Being much smaller than quartz crystals, MEMS resonators are ideal for space-sensitive automotive applications like radar/LIDAR, smart mirrors, and camera module sensors. Their low mass and small size also make MEMS timing devices far more resilient to mechanical shock and vibration.

What is PCB Prototyping?

Every piece of electronic equipment has at least one printed circuit board, or PCB. The PCB holds electronic components in place, interconnects them appropriately, and allows them to function as the designer intended, so that the equipment performs according to its specifications.

A designer lays out the printed circuit board carefully, following the schematic diagram and other rules, before sending it out for manufacturing, assembly, and use in the final product. However, it is possible to overlook small mistakes and incorrect connections during design. Often, it is only when the PCB is in the final product that it is found not to be working properly.

Sometimes, things go wrong during the routing and layout phase. Two of the most common issues are shorts and opens. A short is an unintentional electrical connection between two metallic entities, while an open is an unintentional disconnection between two points. Either one can prevent the printed circuit board from performing as intended.

To overcome this, designers generate a netlist, preferably in IPC-356 format, and send it to their PCB manufacturer along with the Gerber files. The netlist is a database of electrical connections that maps the intended connectivity, making it possible to confirm that the layout in the Gerber files is correct and will work as intended. The manufacturer loads the netlist along with the Gerber files into a CAM program to verify the correctness of the design.

The manufacturer can compare the netlist against the routed data to find shorts or opens. When an open or short is discovered in a PCB, the designer must redesign or scrap the board. If the error is discovered at a late stage, the designer has no alternative but to scrap the board; if the manufacturer discovers it before assembly, the board can be redesigned rather than scrapped.
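
To make the idea concrete, here is a minimal sketch of such a connectivity comparison in Python. The data structures are simplified stand-ins, not the IPC-356 format or any real CAM tool's API:

```python
# Simplified sketch of a netlist-vs-layout connectivity check.
# The data structures are illustrative, not any real CAM format.

# Intended connectivity from the netlist: net name -> pads
netlist = {
    "VCC":  {"U1.8", "C1.1", "R1.1"},
    "GND":  {"U1.4", "C1.2"},
    "DATA": {"U1.2", "R1.2"},
}

# Connectivity extracted from the routed Gerber data:
# each set is one group of copper-connected pads.
extracted = [
    {"U1.8", "C1.1"},           # R1.1 is isolated below -> open on VCC
    {"U1.4", "C1.2", "R1.2"},   # R1.2 touches GND -> short GND/DATA
    {"U1.2"},
    {"R1.1"},
]

def net_of(pad):
    """Return the net a pad belongs to according to the netlist."""
    for net, pads in netlist.items():
        if pad in pads:
            return net
    return None

# A short: one copper group spans pads from more than one net.
for group in extracted:
    nets = {net_of(p) for p in group} - {None}
    if len(nets) > 1:
        print("SHORT between nets:", sorted(nets))

# An open: pads of one net are split across several copper groups.
for net, pads in netlist.items():
    groups = {i for i, g in enumerate(extracted) if pads & g}
    if len(groups) > 1:
        print("OPEN on net:", net)
```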

Prototyping a board is the process of initially manufacturing only a small number of boards. These boards undergo full assembly and then rigorous testing to weed out all errors. The testing stage produces a complete list of errors, and the designer can go back to the design process to rectify the mistakes. Once all the corrections are in place, the board can proceed to production.

If the errors are minor, the designer may not need to redo the design and layout. The manufacturer can suggest simple tweaks, which the PCB engineers can accept through an approval process. Manufacturers can easily and cleanly handle changes such as cutting a trace, adding a thermal connection, or adjusting a clearance.

Letting the manufacturer handle the required changes, rather than having the designer do a complete revision, is much more cost-effective and faster. During prototyping, it is sufficient to document the changes. Later, an engineering change notice (ECN) can fix the data set, create a completely new version, or bump the revision as necessary. This process is inexpensive and accurate.

Efficiency and Performance of Edge Artificial Intelligence

Artificial Intelligence, or AI, is a very common phrase nowadays. We encounter AI in smart home systems, in intelligent machines we operate, in the cars we drive, and even on the factory floor, where machines learn from their environments and can eventually operate with minimal human intervention. For these applications to succeed, however, computing technology had to develop to the point where it could be decentralized to the place in the network where the data is generated, typically known as the edge.

Edge artificial intelligence or edge AI makes it possible to process data with low latency and at low power. This is essential, as a huge array of sensors and smart components forming the building blocks of modern intelligent systems can typically generate copious amounts of data.

This makes it imperative to measure the performance of an edge AI deployment in order to optimize its advantages. Gauging the performance of an edge AI model requires specific benchmarks that indicate performance on standardized tests. However, edge AI applications have nuances: the application itself often influences the configuration and design of the processor, and such distinctions often rule out generalized performance parameters.

In contrast with data centers, a multitude of factors constrain the deployment of edge AI, chief among them physical size and power consumption. For instance, the automotive sector is witnessing a huge increase in electric vehicles carrying a host of sensors and processors for autonomous driving, all of which manufacturers must implement within the limited capacity of the vehicle's battery. In such cases, power-efficiency parameters take precedence.

In another application, such as home automation, the dominant constraint is the physical size of the components. The design of AI chips, therefore, must use these restrictions as guidelines, with the corresponding benchmarks reflecting the adherence to these guidelines.

Apart from power consumption and size constraints, the way the machine learning model is deployed also determines how the processor is used, which imposes specific requirements when analyzing its performance. For instance, benchmarks for a chip detecting objects in a factory IoT installation will differ from those for a speech-recognition chip. Estimating edge AI performance therefore requires benchmarking parameters that reflect real-world use cases.
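
As a toy illustration of one such application-specific benchmark, the snippet below computes inferences per joule, a figure that favors power efficiency over raw throughput; the chip names and numbers are invented for the example:

```python
# Toy energy-efficiency benchmark for comparing edge AI chips.
# All figures below are invented for illustration.

def inferences_per_joule(inferences_per_second: float, watts: float) -> float:
    """Throughput divided by power: a common edge-efficiency figure."""
    return inferences_per_second / watts

candidates = {
    "chip_a": {"ips": 1200.0, "watts": 3.0},   # faster but hungrier
    "chip_b": {"ips": 800.0,  "watts": 1.5},   # slower but frugal
}

for name, c in candidates.items():
    eff = inferences_per_joule(c["ips"], c["watts"])
    print(f"{name}: {eff:.0f} inferences/J at {c['watts']} W")

# For a battery-constrained vehicle, chip_b wins on inferences/J
# (~533 vs ~400) even though chip_a has higher raw throughput.
```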

For instance, in a typical modern automotive application, sensors such as cameras for computer vision and LiDAR generate the data that the AI model must process. In a single consumer vehicle fitted with an autonomous driving system, this can easily amount to two to three terabytes of data per week. The AI model must process this huge amount of data in real time and provide outputs like street sign detection, pedestrian detection, and vehicle detection. The volume of data the sensors produce depends on the complexity of the autonomous driving system and, in turn, determines the size and processing power of the AI core. The power consumption of the onboard AI system depends on the quality of the model and the manner in which it pre-processes the data.
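
A back-of-the-envelope calculation shows what the two-to-three-terabytes-per-week figure means as a sustained data rate; the weekly driving hours are an assumption for the example:

```python
# Rough sustained-data-rate estimate from the 2-3 TB/week figure above.
# Driving hours per week are an assumption for the example.

TB = 1e12                      # bytes per terabyte (decimal)
weekly_bytes = 2.5 * TB        # midpoint of 2-3 TB/week
driving_hours_per_week = 10    # assumed

seconds = driving_hours_per_week * 3600
rate_mb_s = weekly_bytes / seconds / 1e6
print(f"Sustained sensor data rate: ~{rate_mb_s:.0f} MB/s while driving")
# ~69 MB/s: the AI core and its memory system must keep up with this
# in real time, on top of running the detection models themselves.
```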

Cooling Machine Vision with Peltier Solutions

Industry is using machine vision to replace manual examination, assessment, and human decision-making, employing video hardware supplemented with software systems. The technology is highly effective for inspection, quality control, wire bonding, robotics, and down-the-hole applications. Machine vision systems obtain their information by analyzing images of specific processes or activities.

Apart from inspection systems, the industry also uses machine vision for sophisticated object detection and recognition. Machine vision is irreplaceable in the collision avoidance systems that the next generation of autonomous vehicles, robots, and drones use. More recently, scientists have been applying machine vision in many machine learning and artificial intelligence systems, such as facial recognition.

However, for all of the above to succeed, the first requirement is that the machine vision system must capture high-quality images. For this, machine vision systems employ image sensors and cameras that are temperature-sensitive: they require active cooling to deliver optimal image resolution independent of the operating environment.

Typically, machine vision applications use two types of sensors: CCD (charge-coupled device) and CMOS (complementary metal-oxide semiconductor) sensors. The basic function of both is to convert photons into the electrons needed for digital processing. Both types are temperature-sensitive, as thermal noise degrades their image resolution, and thermal noise increases as the temperature of the sensor assembly rises. That temperature depends on environmental conditions and on the heat generated by surrounding electronics, which can push the sensor beyond its maximum operating specification.

By rough estimation, the dark current of a sensor doubles for every 6 °C rise in temperature. Dropping the temperature by 20 °C therefore reduces the noise floor by about 10 dB, effectively improving the dynamic range by the same figure. The effect is more pronounced outdoors, where the temperature can easily exceed 40 °C. Solid-state Peltier coolers can prevent image-quality deterioration by reducing the sensor's temperature and holding it below the maximum operating limit, thereby preserving high image resolution.
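
The arithmetic behind those figures can be checked directly; this sketch assumes only the doubling-every-6-°C rule of thumb stated above:

```python
import math

# Rule of thumb from above: dark current doubles every 6 degC.
DOUBLING_INTERVAL_C = 6.0

def dark_current_ratio(delta_t_c: float) -> float:
    """Factor by which dark current changes for a temperature change."""
    return 2.0 ** (delta_t_c / DOUBLING_INTERVAL_C)

# Cooling the sensor by 20 degC with a Peltier element:
ratio = dark_current_ratio(-20.0)        # ~0.10, i.e. a ~10x reduction
noise_floor_db = 10 * math.log10(ratio)  # ~-10 dB
print(f"Dark current scaled by {ratio:.2f} ({noise_floor_db:.1f} dB)")
# A ~10 dB lower noise floor buys the same ~10 dB of dynamic range.
```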

However, spot-cooling CCD and CMOS sensors in machine vision applications is a challenge. Adding a Peltier cooling device increases the size, cost, and weight of the imaging system, and adds to its complexity. Cooling imaging sensors can also cause condensation on surfaces that fall below the dew point. That is why vision systems are usually contained within a vacuum environment with insulated exterior surfaces, which prevents the build-up of condensation over time.

Temperatures in the 50-60 °C range primarily affect the image quality of CCD and CMOS sensors, although this also depends on the quality of the sensor itself. For sensors in indoor applications running just above ambient, a free-convection heat sink with good airflow may be adequate to cool a CMOS sensor. This passive thermal solution may not suffice for outdoor applications, however; there, active cooling with a Peltier solution is the only option.

Differences between USB-PD and USB-C

With all the electronic devices we handle every day, it is a pain to manage an equally large number of cables for charging them and transferring data. So far, a single standard connector to rule all the gadgets has proven elusive. A format war opens up, one faction emerges victorious for a few years, and then a newer technology overtakes it. For instance, VHS overtook Betamax, then DVD ousted VHS, Blu-ray in turn overtook DVD, and Blu-ray itself is now barely visible under the onslaught of online streaming services.

As its full name, Universal Serial Bus, suggests, USB-C has proven to be different and possibly even truly universal. USB-C ports are now part of almost every kind of device, from simple Bluetooth speakers to external hard drives, high-end laptops, and ubiquitous smartphones. Although all USB-C ports look alike, they do not all offer the same capabilities.

USB-C, an industry-standard connector, can carry both power and data on a single cable. It is broadly accepted by the big players in the industry, and PC manufacturers have readily adopted it.

USB-PD, or USB Power Delivery, is a specification that allows the load to program the output voltage of a power supply. Combined with the USB-C connector, USB-PD is a revolutionary concept: devices can exchange both data and power, with the adapter adjusting to the power requirements of the device it connects to.

With USB-PD, it is possible to charge and power multiple devices, such as smartphones and tablets, with each device drawing only the power it requires.

However, USB-C and USB-PD are two different standards. The USB-C standard is essentially a description of the physical connector, and using the USB-C connector does not imply that the adapter has USB-PD capability; anyone can use a USB-C connector in a design without conforming to USB-PD. A USB-C connector does let the user transfer data and substantial power (up to 240 W with USB-PD) over the same cable. In addition, the USB-C connector is symmetrical and reversible, which makes it easy to insert and use.

Earlier USB power standards were limited, as they could not provide multiple levels of power for different devices. Using the USB-PD specifications, the device and the power supply can negotiate for optimum power delivery. How does that work?

First, each device starts at an initial power level of up to 10 W at 5 VDC. From this point, power negotiation begins, and depending on the needs of the load, the connection can deliver up to 240 W.

USB-PD negotiation uses fixed voltage steps of 5 VDC, 9 VDC, 15 VDC, and 20 VDC, which support power levels up to 100 W; the extended power range adds 28 VDC, 36 VDC, and 48 VDC steps to reach 240 W. Within each step, the deliverable power scales from as little as 0.5 W by varying the current.
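
The following sketch illustrates only the arithmetic of profile selection, not the actual USB-PD protocol or its state machine; the advertised capabilities and the requested power are example values:

```python
# Simplified illustration of USB-PD source/sink power matching.
# Real PD negotiation exchanges PDO/RDO messages over the CC wire;
# this sketch only shows the arithmetic of picking a profile.

# Source capabilities: (voltage V, max current A) fixed-supply profiles
source_pdos = [(5.0, 3.0), (9.0, 3.0), (15.0, 3.0), (20.0, 5.0)]

def pick_pdo(required_watts: float):
    """Choose the lowest-voltage profile that can satisfy the load."""
    for volts, amps in source_pdos:
        if volts * amps >= required_watts:
            return volts, min(amps, required_watts / volts)
    raise ValueError("source cannot supply the requested power")

volts, amps = pick_pdo(27.0)   # e.g. a tablet asking for 27 W
print(f"Negotiated {volts:.0f} V at {amps:.2f} A = {volts * amps:.0f} W")
# -> Negotiated 9 V at 3.00 A = 27 W
```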

Because USB-PD lets a device negotiate the power level it requires, higher power levels are possible at the output. USB power adapters can therefore power more than one device at optimum levels, achieving faster charge times.

Importance of Vibration Analysis in Maintenance

For those engaged in maintenance, it is necessary to ensure that the decision to replace or repair comes well before key components fail completely. Vibration analysis is the easiest way to mitigate this risk.

With vibration analysis, it is possible to detect early signs of machine deterioration or failure, allowing timely replacement or repair of machinery before any catastrophic or system-wide functional failure can occur.

According to the laws of physics, all rotating machinery vibrates. As components deteriorate or reach the end of their serviceable life, they begin to vibrate differently, and some vibrate more strongly.

This is what makes vibration analysis so important in equipment monitoring. Using vibration analysis, it is possible to identify many known failure modes that indicate wear and tear, and to assess the extent of damage before it becomes irreversible and impacts the business or its finances.

Vibration monitoring and analysis can therefore detect machine problems such as process flow issues, electrical issues, loose fasteners, loose mounts, loose bolts, component or machine imbalance, bent shafts, gear defects, impeller operational issues, bearing wear, misalignment, and many more.

In industry, vibration analysis helps avoid serious equipment failure. Modern vibration analysis offers a comprehensive snapshot of the health of a specific machine: modern analyzers can display the complete frequency spectrum of the vibration with respect to time for all three axes simultaneously.

However, to interpret this information properly, the analyst must understand the basics of the analysis, the failure modes of the machine, and the application.

For this, it is necessary to gather complete information: a full vibration signature on all three axes (axial, vertical, and horizontal), not only for the driven equipment but also at both ends of the driving motor. The dataset must also have enough resolution to resolve all indications of failure.

Furthermore, busy personnel may take a reading on only one axis. This is problematic, as the problem may exist on any of the three axes; unless all three are tested, there is a good chance of missing the issue. Comprehensive, careful analysis of the time waveform can reveal several concerns.
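
At its core, the analysis is an FFT of the time waveform on each axis. The sketch below uses a synthetic signal in place of real accelerometer data; the sample rate and tone frequencies are invented for the example:

```python
import numpy as np

# Minimal sketch: FFT of one axis of a (synthetic) vibration waveform.
# Real analyzers do this per axis, with windowing and averaging.

fs = 5000                       # sample rate, Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)   # one second of data

# Synthetic signal: a 29.5 Hz shaft-rotation tone, a weaker
# 147.5 Hz bearing-defect tone, and broadband noise.
x = (1.0 * np.sin(2 * np.pi * 29.5 * t)
     + 0.3 * np.sin(2 * np.pi * 147.5 * t)
     + 0.05 * np.random.randn(t.size))

spectrum = np.abs(np.fft.rfft(x * np.hanning(x.size)))
freqs = np.fft.rfftfreq(x.size, 1 / fs)

# Report the strongest spectral bins above 5 Hz (adjacent bins
# of one peak may appear together; a real tool groups them).
for i in spectrum.argsort()[::-1][:8]:
    if freqs[i] > 5:
        print(f"{freqs[i]:7.1f} Hz  amplitude {spectrum[i]:.1f}")
```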

This also makes it easier to predict issues and carry out predictive maintenance successfully. Still, reactive maintenance remains widespread in industry. It is known as the run-to-failure approach: in most cases, the concern is fixed after it happens.

To make reactive maintenance as effective as possible in the long run, monitoring and vibration analysis are essential. This approach helps detect problems at the onset of failure, which makes fixing the issue cheaper, easier, and faster.

On the other hand, there is the completely opposite approach of predictive maintenance, which involves monitoring the machinery while it operates in order to predict which parts are likely to fail. Vibration analysis is a clear winner here as well.

What is a Reed Relay?

A reed relay is basically the combination of a reed switch and a coil that creates a magnetic field. Users often add a diode to handle the back EMF from the coil, but this is optional. The entire arrangement is very low-cost and simple to manufacture.

The most complex part of the reed relay is the reed switch itself. As the name suggests, the switch has two reed-shaped metal blades made of a ferromagnetic material. A glass envelope encloses the two blades, holding them in place facing each other and providing a hermetic seal that keeps contaminants out. Typically, reed switches have normally open contacts, meaning the two metal blades do not touch when the switch is not energized.

A magnetic field applied along the axis of the reed switch magnetizes the reeds, attracting them to each other so that they bend to close the gap. If the applied field is strong enough, the blades touch, forming an electrical contact.

The only movement within the reed switch is the bending of the blades; there are no parts that slide past one another and no pivot points. It is therefore safe to say the reed switch has no moving parts that wear out mechanically. Moreover, an inert gas surrounds the contact area within the hermetically sealed glass tube; for high-voltage switches, a vacuum replaces the inert gas. With the switch area sealed against external contaminants, the reed switch has an exceptionally long working life.

The size of a reed switch is a design variable. In longer switches, the reeds need to deflect less to close a given gap between the blades than in shorter ones. To make the reeds of more miniature switches bend easily enough, they must be made of thinner material, which affects the switch's current rating. However, small switches allow for more miniature reed relays, which are useful in tight spaces. Larger switches, on the other hand, are mechanically more robust, can carry higher currents, and have a greater contact area, and hence lower contact resistance.

A magnetic field of adequate strength is necessary to operate a reed relay. It is possible to operate one by bringing a permanent magnet close to it, but in practice a coil surrounding the reed switch typically generates the magnetic field. A control signal drives a current through the coil, creating the axial field needed to close the reed contacts.

Different models of reed switches need different magnetic field strengths to operate and close their contacts. Manufacturers specify this in ampere-turns, or AT, the product of the coil current and the number of turns in the coil. As a result, there is a huge variation in the characteristics of available reed relays. Stiffer reed switches and those with larger contact gaps require higher AT levels to operate, so their coils need a higher drive voltage or power.
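
A small worked example makes the ampere-turn relationship concrete; the coil parameters below are invented for illustration:

```python
# Ampere-turns (AT) worked example. AT = coil current x number of turns.
# The coil parameters below are invented for illustration.

turns = 5000              # turns of wire in the relay coil
coil_resistance = 500.0   # ohms
supply_volts = 5.0

current = supply_volts / coil_resistance      # 10 mA through the coil
ampere_turns = current * turns                # 0.01 A * 5000 = 50 AT
power_mw = supply_volts * current * 1000      # 50 mW of coil drive power

print(f"{ampere_turns:.0f} AT at {power_mw:.0f} mW")
# A switch rated to pull in at, say, 35 AT would close reliably here;
# a stiffer switch needing 70 AT would need more turns or more current.
```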

Astronomical Growth of Machine Vision

Industries are witnessing rapid growth in machine vision. With the technology a vital component of modern automation solutions, the market for 3-D machine vision is expected to nearly double in the next six years. In manufacturing, two major factors drive this growing adoption: the industry faces acute labor shortages, and hardware costs have decreased dramatically.

Additionally, with an increase in technological performance, the industry needs machine vision systems to process ever-expanding amounts of information every second. Moreover, with the advent of machine learning and advanced artificial intelligence algorithms, data collected from machine vision systems are becoming more valuable. The industry is rightly realizing the power of machine vision.

So, what exactly is machine vision? What makes a robot see? A vision system typically is a conglomeration of many parts that include the camera, lighting sources, lenses, robotic components, a computer for processing, and application-specific software.

The camera forms the eye of the system. There are many types of cameras that the industry uses for machine vision. Each type of camera is specific for a particular application need. Also, an automation solution may have many cameras with different configurations.

For instance, a static camera typically remains in a fixed position where speed is imperative, perhaps with a bird's-eye view of the scene below it. Alternatively, a dynamic camera may be mounted at the end of a robotic arm to take a closer look at a process and pick out finer details.

One of the important aspects of a vision system is its computing power; in effect, this is the brain that helps the eye understand what it is seeing. Traditional machine vision systems were rather limited in computing power. Modern systems that take advantage of machine learning algorithms require far greater computational resources, and they also depend on software libraries to augment their computing capabilities.

Machine vision manufacturers design software specifically for application users, providing advanced capabilities that let users control the machine vision tasks and gain valuable insights from the visual feedback.

With the industry increasingly using vision for assembly lines, the concept of a vision-guided system replacing basic human capabilities is on the upswing in a wide range of processes and applications.

One of the major applications of machine vision is inspection. As components enter the assembly line, machine vision cameras give them a thorough inspection, looking for cracks, bends, shifts, misalignment, and similar defects which, even if minor, may lead to quality issues later. The system measures each defect and, if a crack is larger than a specified size, rejects the component automatically.
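
A minimal sketch of that size-threshold check might look like the following, using OpenCV on a grayscale image; the file name and threshold values are placeholders, not settings from any real system:

```python
import cv2

# Minimal sketch of a size-threshold inspection step with OpenCV.
# File name and threshold values are placeholders for a real setup.

MAX_DEFECT_AREA_PX = 50  # reject if any defect exceeds this area

image = cv2.imread("component.png", cv2.IMREAD_GRAYSCALE)

# Dark cracks on a bright part: inverse-threshold, then find blobs.
_, mask = cv2.threshold(image, 80, 255, cv2.THRESH_BINARY_INV)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

defects = [c for c in contours if cv2.contourArea(c) > MAX_DEFECT_AREA_PX]
verdict = "REJECT" if defects else "PASS"
print(f"{verdict}: {len(defects)} defect(s) over {MAX_DEFECT_AREA_PX} px")
```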

In addition to mechanical defects, machine vision is capable of detecting color variations. For instance, a color camera can detect discoloration and thereby reject faulty units.

The camera can also read product labels, serial numbers, or barcodes. This allows the identification of specific units that need tracking.

Condition-Based Monitoring and MEMS Sensors

Lately, there has been tremendous improvement in MEMS accelerometer performance, so much so that it can now compete with the all-pervasive piezoelectric vibration sensors. MEMS sensors offer several advantages, including smaller size, lower power consumption, lower noise, wider bandwidth, and a higher level of integration. Consequently, the industry increasingly uses MEMS sensors in CbM, or condition-based monitoring, for facilities and maintenance. Engineers find CbM very useful, as it helps them detect, diagnose, predict, and ultimately avoid faults in their machines.

The smaller size and ultra-low power consumption of MEMS accelerometers allow typically bulky wired piezo sensors to be replaced with wireless solutions. Moreover, it is easy to replace bulky single-axis piezo sensors with small, light, triaxial MEMS accelerometers. The industry finds such replacements cost-effective for continuously monitoring various machines.

Millions of electric motors are in continuous operation around the world, accounting for about 45% of global electricity usage. In one cross-industry survey, more than 80% of the companies surveyed reported experiencing unplanned maintenance, and more than 70% were unaware that their assets were due for upgrade or maintenance. With Industry 4.0 and the IoT, industry is moving toward digitization to improve its productivity and efficiency.

The trend is toward wireless sensor systems: one estimate puts about 5 billion wireless modules in smart manufacturing by 2030. Although the most critical assets require a wired CbM system, many more assets will benefit from wireless CbM solutions.

For the best performance, speed, reliability, and security, a wired CbM system is difficult to surpass, and for these reasons greenfield sites still deploy them. However, installing wired CbM systems requires routing cables across factory floors, which may be difficult where certain machinery cannot be disturbed. Industrial wired sensor runs typically use 60 m (200 ft) of cable, which can be substantially expensive depending on the material and labor involved. Some deployments also require wire harnesses and routing through existing infrastructure, increasing the cost, complexity, and time to install.

Brownfield sites, on the other hand, may not be amenable to wired installations. For them, although wireless systems may initially appear more expensive, other factors can lead to significant cost savings. Initial savings come from less cabling, fewer maintenance routes, and lower hardware requirements, and over the lifetime of the wireless CbM installation, substantial savings accrue from easier scalability and simpler maintenance routines.

Wireless installations depend on batteries for power, and depending on the level of reporting, batteries may last several years. Wireless systems based on energy-harvesting techniques can make maintenance even easier and less expensive. However, once a company decides to go wireless, it must choose the technology best suited to its CbM application, and there are quite a few to choose from, such as Bluetooth Low Energy, 6LoWPAN, and Zigbee.
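
To see why the reporting rate dominates battery life, consider a rough duty-cycle estimate; every figure below is an assumption for the example, and self-discharge is ignored:

```python
# Rough battery-life estimate for a wireless CbM sensor node.
# All current and capacity figures are assumptions for illustration.

battery_mah = 2600.0          # one AA-size lithium cell (assumed)
sleep_ua = 10.0               # sleep current, microamps
active_ma = 25.0              # current while measuring + transmitting, mA
active_s_per_report = 2.0     # seconds awake per report

def battery_life_years(reports_per_day: float) -> float:
    """Average current from the duty cycle -> lifetime in years."""
    active_s = reports_per_day * active_s_per_report
    sleep_s = 86400 - active_s
    avg_ma = (active_ma * active_s + (sleep_ua / 1000) * sleep_s) / 86400
    return battery_mah / avg_ma / 24 / 365

for rate in (1, 24, 96):
    print(f"{rate:3d} reports/day -> ~{battery_life_years(rate):.1f} years")
# Raising the reporting rate from hourly to every 15 minutes roughly
# triples the average current, which is why report cadence is the
# first knob to tune in a battery-powered deployment.
```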