Monthly Archives: October 2023

Miniature Temperature Sensors

During the COVID-19 pandemic, quick, non-invasive techniques for assessing body temperature became necessary. Airports, hospitals, schools, and shopping centers adopted non-contact thermometry, which uses an infrared sensor to measure surface temperature without any physical contact. The technique proved popular and has since become a standard way of taking body temperature, providing quick, reliable, and non-invasive readings.

The accuracy of infrared thermometers largely depends on variables such as the nature of the surface being measured and its surroundings. Melexis Microelectronic Integrated Systems has now addressed these problems with a miniature infrared temperature sensor that offers medical-grade accuracy and built-in temperature compensation.

Melexis specializes in microelectronic ICs and sensors for various applications in the consumer, automotive, digital-health, energy-management, and smart-device industries. Samsung has deployed one of the Melexis products, the medical-grade version of the MLX90632 temperature sensor, which operates on FIR or far-infrared technology, in its GWS smartwatch series. The enhanced accuracy of the MLX90632, along with its non-contact measurement technique, allows its use for menstrual-cycle tracking. The sensor's reliable, continuous temperature-measuring capability now makes possible a wide range of new applications in health, sports, and other domains.

The MLX90632 FIR temperature sensor is an SMD or surface-mount device that measures the infrared radiation an object emits to report its temperature. As the sensor comes in a tiny SMD package, it suits a variety of applications, especially wearables, hearables or in-ear devices, and point-of-care clinical applications. All these applications require high accuracy when measuring human body temperature.

Compared to traditional contact methods of measuring temperature, non-contact temperature measurement offers clear advantages, primarily because it senses temperature without directly touching the measured surface or object. This helps in circumstances where physical contact is undesirable, such as when the object is fragile, in motion, or located in a hazardous area. When a quick response is necessary, or good thermal contact between object and sensor cannot be guaranteed, a non-contact technique is also more accurate and yields more reliable results than contact techniques can.
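
The physics behind non-contact IR thermometry can be sketched numerically. An IR sensor measures the net power radiated between the object and the sensor per the Stefan-Boltzmann law; inverting that law recovers the object temperature. The emissivity and effective sensing area below are illustrative assumptions for the sketch, not MLX90632 calibration values.

```python
# Sketch: recovering object temperature from net IR radiation using
# the Stefan-Boltzmann law. Emissivity and effective sensing area
# are illustrative assumptions, not values from any real sensor.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2*K^4)

def net_radiated_power(t_obj_k, t_sensor_k, emissivity=0.98, area_m2=1e-6):
    """Net power (W) radiated from the object to the sensor element."""
    return emissivity * SIGMA * area_m2 * (t_obj_k**4 - t_sensor_k**4)

def object_temperature(power_w, t_sensor_k, emissivity=0.98, area_m2=1e-6):
    """Invert the law to recover the object temperature (K)."""
    return (power_w / (emissivity * SIGMA * area_m2) + t_sensor_k**4) ** 0.25

t_obj = 310.15   # 37 degC body temperature
t_die = 298.15   # 25 degC sensor die temperature
p = net_radiated_power(t_obj, t_die)
recovered = object_temperature(p, t_die)
print(round(recovered - 273.15, 2))  # ~37.0 degC
```

A real sensor additionally compensates for emissivity error and die-temperature drift, which is where Melexis's in-house calibration comes in.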

The MLX90632 sensor is a minuscule device in a 3 x 3 x 1 mm QFN package. Within this tiny space, it incorporates the sensing element, the signal-processing circuitry, the digital interface circuitry, and the optics. The small size enables quick and easy integration into a huge range of modern applications where space is typically limited.

Melexis calibrates its sensors in-house, thereby ensuring high accuracy. The IC compensates for harsh external thermal conditions with internal electrical and thermal safeguards. After amplifying and digitizing the voltage signal from the thermopile sensing element, it filters the result digitally and stores the raw measurement data in its RAM, which is accessible via an I2C interface.

MCUs Working Sans Batteries

Nature is exceptionally efficient, maximizing every available resource by using as much of it as possible. Humans are now beginning to follow in nature's footsteps, improving performance while reducing waste and minimizing cost. One of the methods in use today is energy harvesting: powering electrical devices from ambient energy. For devices operating on batteries, energy harvesting can extend the useful life of the battery, or even replace the battery's energy contribution entirely.

Ultra-low-power microcontroller units, or ULP MCUs, are the logical choice for demonstrating energy harvesting. Many devices, such as wireless sensors, wearable technology, and edge applications, use ULP MCUs because extending battery life is essential for them. Reviewing how energy harvesting works in practice is important to understanding its value to ULP MCUs.

The principles of energy harvesting are simple. It must overcome the finite nature of the primary source of energy, here the battery. However, as no process can be one hundred percent efficient, there will be losses when converting the source power to usable energy, even when boundless ambient energy is available for capture. This is evident in wind turbines, a renewable large-scale energy source. The wind imparts kinetic energy to the turbine blades, making them rotate. This movement turns a generator, producing electrical power. Other similar large-scale ambient energy sources also exist—geothermal heat, oceanic waves, and solar.

Wearables and other similar small-scale devices harvest thermal, kinetic, or ambient electromagnetic-radiation energy. However, each of these uses a different mechanism for converting the source power to useful energy. It is necessary to consider the utility and practicality of each conversion mechanism, as the application defines the size and mass of the energy-conversion technology.

For instance, thermal radiation is more suitable for wireless-sensor applications, as the sensor's placement and design can take advantage of the available heat. Vehicles, for example, can use sensors that exploit the radiant heat emanating from the road surface. And as components like wheels and engine mounts are high-vibration locations, it is possible to harvest motion energy near them. For wearables using ULP MCUs, harvesting the kinetic energy of the user's motion provides the most practical means of conversion to usable energy.

In wearable technology, the primary application of the ULP MCU is to process the edge data gathered by the sensor. And, it is critical to process this data with the minimum power consumption. Energy harvesting supplements the power from the battery, which has a finite amount of energy, and requires periodic replenishment in the form of recharging or replacement as its power depletes. There are three ways of capturing energy for ULP MCUs—using piezoelectric, electromagnetic, or triboelectric generators.

Kinetic forces compressing piezoelectric materials make them generate an electric field, which can add as much as 10 mW to the battery. Harvesting energy from electromagnetic radiation such as infrared, radio, UV, and microwaves can contribute about 0.3 mW of harvested power. Triboelectric generators use the friction of dissimilar material surfaces rubbing together under mechanical movements like oscillation, vibration, and rotation to generate 1-1.5 mW of electricity.
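
As a rough illustration of what these harvested milliwatts buy, the sketch below estimates the runtime extension for a hypothetical wearable. The battery capacity and average load figures are assumptions chosen for illustration, not vendor data; the harvested-power values are the ones quoted above.

```python
# Rough battery-life extension from energy harvesting.
# Battery capacity and average load are illustrative assumptions.

def runtime_hours(battery_mwh, load_mw, harvested_mw=0.0):
    """Hours of operation; harvesting offsets part of the load."""
    net_load = load_mw - harvested_mw
    if net_load <= 0:
        return float("inf")  # harvesting covers the whole load
    return battery_mwh / net_load

battery = 150.0  # mWh, small wearable cell (assumed)
load = 5.0       # mW average draw of a ULP MCU system (assumed)

base = runtime_hours(battery, load)           # 30.0 h on battery alone
radio = runtime_hours(battery, load, 0.3)     # RF harvesting: ~31.9 h
tribo = runtime_hours(battery, load, 1.25)    # triboelectric: 40.0 h
piezo = runtime_hours(battery, load, 10.0)    # piezo exceeds the load entirely
print(base, radio, tribo, piezo)
```

Even a fraction of a milliwatt measurably stretches the battery, while a 10 mW piezoelectric source could, on paper, carry this load indefinitely.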

Improving Power Management Efficiency

Design engineering teams face considerable challenges handling the conflicting requirements of portable medical devices. Most of these devices are always-on and must manage battery life with maximum efficiency and effectiveness. They must also have dimensions tailored to the patient's comfort, especially as most are meant to be worn 24 hours a day. Therefore, not only must their construction be robust, but they must also deliver the highest levels of performance. Designers use PMICs or power management integrated circuits to optimize the power the ultra-low-power architecture consumes, improve measurement sensitivity, and keep the SNR or signal-to-noise ratio high.

Wearable technology is benefiting from the growing popularity of mobile networks, from the perspective of both healthcare and consumers. Although initially designed for sports and wellness, wearables are finding increasing use in the medical market. As a consequence, newer generations of medical wearable devices use MEMS or micro-electromechanical system sensors, including heart-rate monitors, gyroscopes, and accelerometers. Other sensors are also in use, such as those measuring skin conductance and pulse variability. However, the more sensitive the sensor, the more it faces SNR issues, so designers need better noise-reduction techniques along with more efficient energy-saving solutions.

For instance, the accuracy of optical instruments depends on many biological factors. Therefore, design engineers maximize the sensitivity of optical instruments by improving their SNR over a wide range. They use voltage regulator ICs with low quiescent currents along with elements that improve the SNR by reducing ripple and settling times.

Maxim offers a complete SCODAS or single-channel optical data acquisition system, the MAXM86161. They have designed its sensor module for use in in-ear and mobile applications, optimizing it for SpO2 or blood-oxygen saturation, HR or reflective heart rate, and continuous monitoring of HRV or heart rate variability. The transmitter part of the MAXM86161 carries three high-current programmable LED drivers, while the receiver part has a highly efficient PIN photodiode along with an optical readout channel featuring a low-noise signal-conditioning AFE or analog front end. The AFE includes a 19-bit ADC or analog-to-digital converter, a high-performance ALC or ambient light cancellation circuit, and a picket-fence detect-and-replace algorithm.

Optimizing energy efficiency is a key constraint in the design of an optical measuring instrument. Rather than use regular LDO or low-dropout regulators, designers now use novel switching configurations to improve efficiency further. The requirement is that the voltage-regulation element provide low ripple at high frequencies so that it does not interfere with heart-rate measurement. To operate LEDs at voltages different from what Li-ion batteries supply, designers use new buck-boost converter technologies, thereby curbing energy consumption and saving board space. For instance, they use the SIMO or single-inductor multiple-output buck-boost architecture to reduce the number of inductors and ICs the circuit requires.
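
The efficiency argument for switching regulation over an LDO can be shown with first-order numbers: an LDO's best-case efficiency is Vout/Vin, since it burns the entire headroom as heat, while a buck or SIMO converter is typically quoted at a flat 85-90%. The voltages, load current, and switcher efficiency below are illustrative assumptions, not figures for any particular part.

```python
# First-order comparison of LDO vs switching-regulator efficiency for
# driving an LED rail from a Li-ion cell. Voltages, load current, and
# the switcher's efficiency figure are illustrative assumptions.

def ldo_efficiency(v_out, v_in):
    """Best case: the LDO dissipates the full voltage headroom as heat."""
    return v_out / v_in

def ldo_loss_mw(v_out, v_in, load_ma):
    """Power burned in the LDO pass element, in mW."""
    return (v_in - v_out) * load_ma

v_batt = 3.7   # nominal Li-ion cell voltage
v_led = 1.8    # assumed LED rail voltage
load = 50.0    # mA of LED drive current (assumed)

eff_ldo = ldo_efficiency(v_led, v_batt)   # ~0.49: half the energy wasted
loss = ldo_loss_mw(v_led, v_batt, load)   # ~95 mW of heat in the regulator
eff_simo = 0.88                           # typical quoted switcher efficiency (assumed)
print(round(eff_ldo, 3), round(loss), eff_simo)
```

At these assumed numbers, the LDO wastes roughly half the battery energy in the pass element, which is exactly the loss a SIMO buck-boost avoids.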

Maxim Integrated intends the MAXM86161, paired with a suitable PMIC or power-management integrated circuit, for applications that are space-constrained and battery-powered, where the efficiency must be high within a small space.

Modular Machine Vision

As the AI or Artificial Intelligence landscape changes, in most cases too fast, industrial vision systems must follow suit. This applies both to the automated quality-inspection systems of today and to the autonomous robots of the future.

Whether you are an OEM or Original Equipment Manufacturer, a systems integrator, or a factory operator, trying to get the maximum performance out of a machine vision system requires future-proofing your platform. This is necessary so that you can overcome the anxiety of having launched a design only months or weeks before the introduction of the next game-changing architecture or AI algorithm.

Traditionally, the industrial machine vision system is made up of an optical sensor like a camera, lighting for illuminating the area to be captured, a controller or a host PC, and a frame grabber. In this chain, the frame grabber is of particular interest. This device captures still frames from the camera's stream at full resolution and hands them to the host for analysis. High-resolution images simplify the analysis, whether by classical computer-vision algorithms or by AI or artificial intelligence.

The optical sensor or camera connects directly to the frame grabber over specific interfaces. The frame grabber is typically a slot card plugged into the vision platform or PC. It communicates with the host over a PCI Express bus.

Apart from capturing high-resolution images, the frame grabber can also trigger and synchronize multiple cameras simultaneously. It can perform local image processing, including color correction, as soon as it has captured a still shot. This eliminates both the latency and the cost of transmitting images to the cloud for preprocessing, while freeing the host processor to run inferencing algorithms, execute the corresponding control functions, and handle other tasks like switching lights and conveyor belts.

Although the above architecture makes the arrangement more complex than some newer types that integrate various subsystems in the chain, it is much more scalable. It also provides a higher degree of flexibility, as the amount of image-processing performance achieved is limited only by the number of slots available in the host PC.

However, machine vision systems relying on high-resolution image sensors and multiple cameras can face a problem with system bandwidth. For instance, a 4MP camera streaming 12-bit pixels at 30 frames per second needs a throughput of about 180 MB/s, while PCIe 3.0 interconnects offer roughly 1 GB/s of usable data rate per lane.

On the other hand, Gen4 PCIe interfaces double this bandwidth to almost 2 GB/s per lane. Therefore, you can connect twice as many video channels on your platform without making any other sacrifices.
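
A back-of-envelope budget makes the scaling concrete. The frame rate and bit depth below are illustrative assumptions, and the PCIe figures are approximate usable throughput per lane after encoding overhead.

```python
# Back-of-envelope bandwidth budget for a multi-camera vision system.
# Frame rate and bit depth are illustrative assumptions; the PCIe
# numbers are approximate usable per-lane throughput.

def camera_mbytes_per_s(megapixels, bits_per_pixel, fps):
    """Raw stream rate of one camera in MB/s."""
    return megapixels * 1e6 * bits_per_pixel * fps / 8 / 1e6

PCIE3_LANE_MBS = 985.0   # ~1 GB/s usable per Gen3 lane
PCIE4_LANE_MBS = 1969.0  # ~2 GB/s usable per Gen4 lane

cam = camera_mbytes_per_s(4, 12, 30)  # 4MP, 12-bit, 30 fps -> 180 MB/s
cams_gen3_x4 = int(4 * PCIE3_LANE_MBS // cam)  # cameras one Gen3 x4 link carries
cams_gen4_x4 = int(4 * PCIE4_LANE_MBS // cam)  # roughly double on Gen4
print(cam, cams_gen3_x4, cams_gen4_x4)
```

The doubling per PCIe generation is what lets integrators add camera channels without touching the rest of the platform.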

However, multiple-camera systems ingesting multiple streams can consume bandwidth rather quickly. Suppose you add one or more FPGA or GPU accelerator cards for higher-accuracy, low-latency AI or for executing computer-vision algorithms. In that case, you have a potential bandwidth bottleneck on your hands.

Therefore, many industrial machine vision integrators make tradeoffs. They may add more host CPUs to accommodate the shortage of bandwidth, use a backplane-based system to give the accelerator cards a bigger role, or change over to a host PC with integrated accelerators. Regardless, the arrangement adds significant cost and increases power consumption and thermal dissipation. Modularizing your system architecture can safeguard against this.

Silicon-Based MEMS Micro Speakers

For the past 100 years or so, the audio industry has been using coil-based driver technology for its loudspeakers. Although the technology has several disadvantages, it has dominated the landscape for so long simply because there was no cost-effective alternative. Now this is likely to change, at least for the next generation of earbuds using micro speakers. xMEMS, a California-based startup, has been perfecting its MEMS driver.

The company has created three MEMS or micro-electro-mechanical-systems micro speakers, suitable for use in hearing aids, wired and wireless earbuds, smart glasses, loudspeaker tweeter arrays, and virtual reality headsets.

xMEMS is promising a long list of advantages for its solid-state micro speakers. For starters, the driver is only about 1 mm (1/25th of an inch) thick, leaving more room for sensors, batteries, and other components. The entire speaker is made of silicon, including the actuator and membrane, which eliminates the need for driver matching and calibration. Being entirely solid-state, the MEMS technology allows mass production of the high-resolution-capable micro speaker in more precise configurations than traditional designs allow, without the tedious manual assembly that balanced-armature and coil-based drivers require.

The solid-state micro speakers boast a flat frequency response across the full audio spectrum from 20 Hz to 20 kHz, with no in-band resonances, and exhibit an astonishing ±1° phase consistency for spatial performance. As the MEMS speakers show a superior high-frequency response compared to coil speakers, their clarity and presence are outstanding. The high-speed mechanical response results in a group delay of less than 50 µs, while the total harmonic distortion is only about 0.5% at 94 dB SPL and 1 kHz. The near-zero phase delay also improves noise suppression.

In addition to the superior performance characteristics above, the new MEMS speakers can withstand mechanical shock to a greater extent than their coil-based counterparts can. This is due to their monolithic design, which eliminates the spring and suspension structure of coil-based speakers. Being totally solid-state, the new speakers consume far less power for the same output, thereby improving battery life. And resistance to dust and moisture up to IP58 requires no added membrane.

In a blog, xMEMS claimed their MEMS speakers are suitable for high-resolution audio. Although for high-resolution audio, the focus is more on the codec’s ability to achieve suitable bit depth and sampling rates, requirements from the speaker are just as stringent.

Typically, the digital signal chain and the codec are responsible for the highest quality of data stream. Since the speaker is the ultimate transducer for the sound that people hear, it must accurately render and reproduce the sound as the artist intended.

In this respect, the performance of solid-state MEMS micro speakers suits these requirements significantly better than coil-based speakers can. The MEMS speaker's extended bandwidth and its near-flat mechanical response into the ultrasonic range above 20 kHz are responsible for that.

The MEMS driver works on the principle of the inverse piezoelectric effect. Applying a voltage makes the actuator contract and expand, converting electrical energy into mechanical, and ultimately acoustic, energy.
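
The inverse piezoelectric effect scales linearly at first order: the free displacement of a piezo element is roughly the d33 coefficient times the applied voltage, multiplied by the number of stacked layers. The coefficient, drive voltage, and layer count below are textbook-scale assumptions for illustration, not xMEMS figures.

```python
# Sketch of the inverse piezoelectric effect: applied voltage produces
# mechanical displacement. The d33 coefficient, drive voltage, and
# layer count are illustrative textbook-scale assumptions, not xMEMS data.

def stack_displacement_nm(d33_pm_per_v, volts, layers=1):
    """First-order free displacement of a piezo stack in nanometres."""
    return d33_pm_per_v * volts * layers / 1000.0  # pm -> nm

d33 = 250.0     # pm/V, a typical PZT-class coefficient (assumed)
v_drive = 10.0  # V peak drive (assumed)

dx = stack_displacement_nm(d33, v_drive)            # 2.5 nm for one layer
dx_stack = stack_displacement_nm(d33, v_drive, 20)  # 50 nm with 20 layers
print(dx, dx_stack)
```

Displacements this small are why actuator design, stacking, and acoustic coupling matter so much in turning voltage into audible sound pressure.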

Future Factories with 5G

The world is moving fast. If you are a manufacturer still running Industry 3.0 today, you must move your shop floor forward to Industry 4.0 to stay relevant tomorrow, and plan for Industry 5.0 to still be around next week. 5G may be the answer to how you should make the changes to move forward.

There has been a sea change in technology: manufacturing now uses edge computing, for instance, and the advent of the Internet of Things has driven the evolution.

At present, we are in the digital transformation era, or Industry 4.0. People call it by different names, such as the intelligent industry, the factory of the future, or the smart factory. These terms indicate that we are using a data-oriented approach. However, this approach must also work with the established manufacturing foundation, the Golden Triangle of three main systems—PLM or Product Lifecycle Management, MES or Manufacturing Execution Systems, and ERP or Enterprise Resource Planning.

IoT affects the manufacturing process through the data it collects in real time and the analytics built on that data. It complements the existing systems, which are more process-oriented. Therefore, rather than replacing them, IoT collaborates with the systems that help the manufacturer manage the shop floor.

IoT is one of the major driving factors behind the movement we know as Industry 4.0. One of its key aims is to enable massive automation. This requires collecting data from the shop floor and moving it to the cloud, with advanced analytics at the other end. Both are necessary to optimize the workflows and processes the manufacturer uses. After the lean strategy, a kind of lean software will follow, acting as one more step toward process optimization within the company and on the shop floor.

However, manufacturers will face several challenges as they grow and scale up their IoT initiatives. These will include automation, flexibility, and sustainability. Of these, automation is already the key topic in the market—the integration of technologies to automate the various manufacturing processes.

Next in line is flexibility. For instance, if you manufacture a product on a line, it takes a long time to change that line over to make another product.

The last challenge is rather vast. Sustainability means making manufacturing cost-effective by improving the processes and the efficiency of the equipment. It may be necessary to minimize energy consumption, and decrease lead time and manufacturing time. It may involve using less material and reducing wastage.

With the advent of 5G, manufacturers will be witnessing many new and exciting possibilities. The IoT of today has two game-changers that will affect the IoT of the future. The first game-changer is 5G, while edge technology is the other. Ten years ago, IoT was only a few devices sending data to the cloud for human interaction and analytics.

Now, there has been a substantial increase in the number of devices deployed and in the amount of data traffic. In fact, with this humongous increase in data, it is often not possible to send everything to the cloud. While 5G helps with the massive transfer of data, edge computing helps standardize the data and process it locally before the transfer.

Understanding Signal Relays

For upwards of 180 years, the relay has been one of the most valuable devices in the electrical and electronic industries. Its major function is controlling a circuit remotely, which makes it significantly useful in a wide variety of applications. For instance, early computers used innumerable relays to perform Boolean logic functions. The signal relay is a major subcategory of relays, with a specific and important function in the communications industry.

Like regular relays, signal relays are also electrically operated electromechanical switches. Their function is typically to control the current flow in a circuit. A control current flowing through a coil near the contacts generates a magnetic force, and this moves internal parts to open or close the contacts controlling a secondary circuit. This allows a small current in the coil to control a larger current in the secondary circuit.

Although the above functions are similar to those of a power relay, the design of a signal relay makes it more suitable for handling low currents and voltages, with current ratings typically below 2 A and voltage ratings between 5 and 30 VDC. The design of its contacts suits low-power switching.

Coming in small packages, signal relays are eminently suitable for mounting on PCBs or printed circuit boards. As their mechanical design makes them light, they offer significantly faster switching times than power relays. Signal relays are far less expensive than solid-state relays and are impervious to voltage and current transients. They are also not susceptible to EMI or RFI. Since they are small and handle low power, they generate significantly less heat than solid-state relays do, requiring very little thermal management on the PCB.

Like other electromechanical relays, signal relays also offer several benefits. These include simple design, robust operation, electrical isolation, cost savings, multiple feature options, and immunity to EMI and RFI. With a proper matching to meet the power requirements of the circuit, signal relays can offer additional benefits. These include affordable cost, small size, ease of use and operation, ability to withstand mechanical shock and vibrations, and high insulation between primary and secondary circuits.

For selecting a signal relay for a specific circuit, the designer must consider multiple factors. These include the maximum voltage that the relay must switch, the maximum current that the relay must switch, the contact resistance, the relay coil voltage, the relay coil current, the contact form, switching time, mounting type, operating temperature, and dielectric strength.

The above list is the minimum an engineer needs to start choosing a signal relay for a project. For instance, they can determine the necessary secondary voltage and current ratings from the maximum load the circuit must switch; for a signal relay, the switched current should stay below 2 A. Next, they must identify the number of circuits the relay must switch, that is, the number of poles on the relay contacts, and whether the contacts should be normally open or normally closed. The next point to identify is the primary or control voltage that operates the relay, and whether it is AC or DC.
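
The selection checklist lends itself to a simple programmatic filter. The catalogue entries below are made-up placeholders, not real part numbers or ratings, and the criteria cover only a subset of the factors listed above.

```python
# Filtering a (made-up) relay catalogue against circuit requirements.
# Part numbers and ratings here are placeholders, not real devices.

relays = [
    {"part": "SR-101", "max_v": 30, "max_a": 2.0, "coil_v": 5,  "form": "SPST-NO"},
    {"part": "SR-202", "max_v": 60, "max_a": 5.0, "coil_v": 12, "form": "SPDT"},
    {"part": "SR-303", "max_v": 30, "max_a": 1.0, "coil_v": 5,  "form": "DPDT"},
]

def select(catalogue, load_v, load_a, coil_v, form=None):
    """Keep relays whose contact ratings cover the load and whose coil matches."""
    hits = []
    for r in catalogue:
        if r["max_v"] >= load_v and r["max_a"] >= load_a and r["coil_v"] == coil_v:
            if form is None or r["form"] == form:
                hits.append(r["part"])
    return hits

# A 24 V / 0.5 A signal switched from a 5 V control rail:
print(select(relays, 24, 0.5, 5))  # ['SR-101', 'SR-303']
```

A fuller version would also screen on contact resistance, switching time, mounting type, operating temperature, and dielectric strength, exactly as the checklist requires.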

Solid State Active Cooling

Frore Systems is a US startup that has developed a new cooling device. They call it an active solid-state cooling device, and it is very nearly the size of a regular SD card. Named AirJet, it uses a variety of techniques to remove heat from small enclosed spaces.

Very close to the size of an SD card, about 2.8 x 27.5 x 41.5 mm, AirJet has tiny membranes vibrating at ultrasonic frequencies. According to Frore Systems, the membranes generate a strong airflow entering AirJet through inlet vents at its top. Inside the device, this airflow changes into high-velocity pulsating jets. AirJet further directs the air past a heat spreader at its base. As the air passes through AirJet, it acquires some heat from the device and carries it away as it moves out. According to Frore, the AirJet consumes only a single watt to operate, while moving 5.25 W worth of heat.
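
Taken at face value, Frore's figures imply a simple efficiency ratio: heat moved per watt of electrical input. The module count in the scaling example is an illustrative assumption.

```python
# Efficiency ratio implied by Frore's published AirJet figures:
# 5.25 W of heat moved for 1 W of electrical input.

def heat_per_watt(heat_moved_w, power_in_w):
    """Heat moved per watt consumed (dimensionless ratio)."""
    return heat_moved_w / power_in_w

airjet_ratio = heat_per_watt(5.25, 1.0)  # 5.25 W of heat per W consumed

# Scaling up with multiple modules (module count is an assumption):
modules = 3
total_heat = modules * 5.25   # 15.75 W of heat moved
total_power = modules * 1.0   # 3 W of electrical input
print(airjet_ratio, total_heat, total_power)
```

The ratio is what makes the device interesting for passively cooled designs: each module buys several watts of extra thermal headroom for one watt of battery drain.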

Although not very explicit, Frore's explanation of the working mechanism says they made the vibrating membranes with techniques similar to those used to produce screens and semiconductors. This is the reason for describing the device as a solid-state cooler. Moreover, some of AirJet's workings take inspiration from the methods engineers use to cool jet-engine components.

At the Computex 2023 exhibition, Frore announced that its first customer for AirJet would be Zotac of Hong Kong. Zotac will use it in a mini PC packing 8GB of RAM and an Intel Core i3 processor inside a chassis measuring only 115 x 76 x 22 mm, slightly larger than a pack of playing cards.

According to Frore, they have designed AirJet specifically for tightly packed devices that have few CPU cores and rely on passive heat management for cooling. With a tiny active cooling device like AirJet, designers can contain the heat powerful components generate, or run more CPU cores at higher capacity for longer.

Frore's prime targets are tablet computers and fanless laptops. Their demo device was a digital doorbell retrofitted with an AirJet. With this cooler running, the device can handle heavier AI-infused video processing.

Frore also has a professional model of the AirJet and predicts it can move 10 watts of heat in advanced iterations. They also estimate they can double AirJet's performance with each iteration, but for the time being, AirJet is unlikely to have adequate capacity to cool a server.

On the other hand, Frore envisages AirJet in the role of cooling SSDs and similar memories. This will likely work well for SSDs running hot, and for the rising memory pooling that CXL or Compute Express Link brings. Therefore, they are considering putting AirJets on SSDs in cooling arrays, and on other memory packages.

One limiting factor for AirJet is its need for air intake. However, Frore confidently claims AirJet can defeat dust. They do not claim the technology is waterproof, so application on smartphones is not under consideration, at least for now. But PCs can now chase the idea of no moving parts.

Cooling with Liquids

As data centers worldwide consume ever more power and generate increasing amounts of heat, removing that heat is becoming a huge concern. As a result, they are turning to liquid cooling as an option. This became evident with the global investment company KKR acquiring CoolIT Systems, a company that has made liquid cooling gear for the past two decades. With this investment, CoolIT will scale up its operations for global customers in the data-center market. According to CoolIT, liquid cooling will play a critical role in reducing the emission footprint as data and computing needs increase.

Companies investing in high-performance servers are also already investing in liquid cooling. These high-performance servers typically have CPUs consuming 250-300W and GPUs consuming 300-500W of power. When catering to demanding workloads such as AI training, servers often require up to eight GPUs, so they could be drawing 7-10kW per node.
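
The per-component figures above add up quickly once memory, networking, storage, and power-conversion losses join the tally. The component counts, overhead figure, and PSU efficiency below are illustrative assumptions used to show how a node reaches the bottom of the 7-10 kW range.

```python
# Tallying node power for a GPU training server. Component counts,
# the overhead figure, and PSU efficiency are illustrative assumptions.

def node_power_kw(cpus, cpu_w, gpus, gpu_w, overhead_w, psu_eff=0.92):
    """Wall draw in kW; overhead covers memory, NICs, storage, and fans."""
    dc_watts = cpus * cpu_w + gpus * gpu_w + overhead_w
    return dc_watts / psu_eff / 1000.0

# Eight 500 W GPUs, two 300 W CPUs, and 2 kW of memory/network/storage:
heavy = node_power_kw(2, 300, 8, 500, 2000)
print(round(heavy, 2))  # ~7.17 kW at the wall
```

Denser memory configurations, faster NICs, and higher-TDP accelerators push the same tally toward the 10 kW end of the range.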

Additionally, with data centers increasing their rack densities, and using more memories per node, along with higher networking performance, the power requirements of servers go up significantly. With the current trend to shift to higher chip or package power densities, liquid cooling is turning out to be the preferred option, as it is highly efficient.

Depending on the application, companies are opting for either direct contact liquid cooling, or immersion cooling. With direct contact liquid cooling, also known as direct-to-chip cooling, companies like Atos/Bull have built their own power-dense HPC servers. They pack six AMD Epyc sockets with maximum memory, 100Gbps networking, and NVMe storage, into a 1U chassis that they cool with a custom cooling manifold.

CoolIT supports direct cooling technology. They circulate a coolant, typically water, through metal plates, which they have attached directly to the hot component such as a GPU or processor. According to CoolIT, this arrangement is easier to deploy within existing rack infrastructures.

On the other hand, immersion cooling requires submerging the entire server node in a coolant. The typical coolant is a dielectric, non-conductive fluid. However, this arrangement calls for specialized racks. The nodes may have to be positioned vertically rather than being stacked horizontally. Therefore, it is easier to deploy this kind of system for newer builds of server rooms.

Cloud operators in Europe, such as OVHcloud, are combining both the above approaches in their systems. For this, they are attaching the water block to the CPU and GPU, while immersing the rest of the components in the dielectric fluid.

According to OVHcloud, the combined system has much higher efficiency than air cooling. They tested their setup, and it showed a partial power usage effectiveness or PUE rating of 1.004 for the cooling system, meaning cooling adds only about 0.4% on top of the energy the IT equipment itself uses.
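
Partial PUE expresses how little energy the cooling loop adds on top of the IT load, and the arithmetic behind a 1.004 figure is straightforward. The IT load value below is an illustrative assumption.

```python
# Arithmetic behind a partial PUE figure: energy attributable to IT
# plus cooling, divided by IT energy alone. The IT load value is an
# illustrative assumption.

def partial_pue(it_kw, cooling_kw):
    """(IT + cooling) / IT; a value of 1.0 means cooling adds nothing."""
    return (it_kw + cooling_kw) / it_kw

it_load = 1000.0  # kW of IT equipment (assumed)
cooling = 4.0     # kW drawn by the cooling loop
pue = partial_pue(it_load, cooling)
print(pue)  # 1.004
```

For comparison, compressor-based air cooling commonly adds tens of percent on top of the IT load, which is the overhead liquid cooling largely removes.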

However, the entire arrangement needs a well-considered approach, such as accounting for the waste heat. For instance, merely dumping the heat into a lake or river can be harmful. Liquid cooling does improve efficiency while also helping the environment, as it lowers the need to run compressor-based cooling. Instead, heat-exchanger technology can keep the temperature of the cooling loop low enough.