
MEMS Microphones for Laptops

During the recent pandemic, people took to virtual meetings on their computers and laptops, and all too often the substandard quality of the audio led to an unsatisfactory experience. Expectations of consumer devices have risen significantly: people want to make high-quality calls from wherever they happen to be, whether on the street, in an open office, or in a crowd.

People expect their devices to offer ANC or active noise cancellation, transparent hearing, and voice control. However, these features require more sophisticated, better-performing microphones.

For instance, people engaged in video conferencing want their experience to be as close as possible to a real, face-to-face meeting. This depends, to a great extent, on the audio quality, and people expect high-quality audio without having to put on additional devices such as headphones.

Achieving good audio quality requires the application to combine high-quality hardware and software. Algorithms are needed for noise reduction, reverberation reduction, beamforming, and direction-of-arrival detection; these are essential for high-fidelity transmission and audio recording in a wide variety of conditions and situations. However, the quality of the entire chain depends on the primary sensor, the microphone.
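
To make the signal-chain idea concrete, here is a minimal sketch of one such algorithm: estimating the direction of arrival with a two-microphone array by finding the inter-channel delay that maximizes the cross-correlation. The sample rate, microphone spacing, and test tone are illustrative assumptions, not values tied to any particular product.

```python
import numpy as np

def estimate_doa(left, right, fs=48_000, mic_spacing=0.1, c=343.0):
    """Direction of arrival (radians from broadside) for a two-microphone
    array, from the delay that maximizes the cross-correlation."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)
    delay = -lag / fs                      # time by which the right channel lags the left
    sin_theta = np.clip(c * delay / mic_spacing, -1.0, 1.0)
    return np.arcsin(sin_theta)

# Synthetic test: a 1 kHz tone arriving from about 30 degrees off broadside.
fs, d, c = 48_000, 0.1, 343.0
t = np.arange(0, 0.02, 1 / fs)
true_delay = d * np.sin(np.radians(30.0)) / c
left = np.sin(2 * np.pi * 1000 * t)
right = np.sin(2 * np.pi * 1000 * (t - true_delay))
print(np.degrees(estimate_doa(left, right, fs, d, c)))   # approximately 30
```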

Good-quality microphones have been around for a long time, but they tend to be large and expensive and are largely confined to recording studios. Consumer equipment, by contrast, requires microphones that are physically small and can be mass-produced to tight manufacturing tolerances. MEMS microphones suit these requirements very well.

For a microphone to qualify as good, it must meet certain performance characteristics. The first of these is the SNR or signal-to-noise ratio. The SNR of a microphone is the difference between its output for a standard reference signal and its self-noise. All elements of a microphone contribute to its self-noise, including the MEMS sensor, the package, the ASIC, and the sound ports.
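
As an illustration, MEMS microphone SNR is commonly specified against the standard 94 dB SPL (1 Pa, 1 kHz) acoustic reference, so a datasheet SNR can be read as the reference level minus the microphone's equivalent input noise. The sketch below assumes that convention; the 29 dB SPL noise figure is an illustrative value, not a datasheet number.

```python
# Sketch: SNR relative to the 94 dB SPL (1 Pa, 1 kHz) acoustic reference.
REFERENCE_SPL_DB = 94.0

def mic_snr_db(equivalent_input_noise_db_spl: float) -> float:
    """SNR = reference level minus the microphone's self-noise, with both
    expressed as equivalent sound pressure levels."""
    return REFERENCE_SPL_DB - equivalent_input_noise_db_spl

# An illustrative microphone with 29 dB SPL self-noise has a 65 dB SNR.
print(mic_snr_db(29.0))
```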

SNR is important when the microphone must detect sounds or voices at a distance, because the input signal decreases with distance: sound pressure level drops by about 6 dB for every doubling of the distance from the source. Further signal losses can come from the system design, room conditions, and the sound channel. A microphone with a high SNR can capture sound even at large distances, which helps when feeding input signals to algorithms, recognizing voice commands, and recording.
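
The distance effect follows the free-field spherical-spreading rule, sketched below; the source level and distances are arbitrary example values.

```python
import math

def spl_at_distance(spl_ref_db: float, d_ref_m: float, d_m: float) -> float:
    """Free-field spherical spreading: SPL falls by 20*log10(d/d_ref),
    i.e. roughly 6 dB for every doubling of distance."""
    return spl_ref_db - 20.0 * math.log10(d_m / d_ref_m)

# A talker measured at 65 dB SPL from 1 m is only about 53 dB SPL at 4 m.
print(spl_at_distance(65.0, 1.0, 4.0))
```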

The next important characteristic of a good microphone is its THD or total harmonic distortion. This refers to harmonics present in the microphone's output that are not present in the input signal. The point where THD reaches 10% is significant, as it represents the AOP or acoustic overload point. At this level the signal is too loud for the microphone, and its output contains clipping and other distortion artifacts.
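
A common way to estimate THD is to drive the microphone with a pure tone, take an FFT of its output, and compare the harmonic amplitudes with the fundamental. The sketch below assumes that approach, using a synthetically clipped tone as a crude stand-in for an overloaded microphone.

```python
import numpy as np

def thd_percent(signal, fs, f0, n_harmonics=5):
    """THD: RMS sum of harmonic amplitudes divided by the fundamental,
    estimated from an FFT of the microphone output."""
    window = np.hanning(len(signal))
    spectrum = np.abs(np.fft.rfft(signal * window))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)

    def amplitude(f):
        band = (freqs > f - 20) & (freqs < f + 20)   # largest bin near f
        return spectrum[band].max()

    fundamental = amplitude(f0)
    harmonics = [amplitude(k * f0) for k in range(2, n_harmonics + 1)]
    return 100.0 * np.sqrt(sum(h ** 2 for h in harmonics)) / fundamental

# A clipped 1 kHz tone as a crude stand-in for an overloaded microphone.
fs, f0 = 48_000, 1_000
t = np.arange(0, 0.1, 1 / fs)
clipped = np.clip(np.sin(2 * np.pi * f0 * t), -0.8, 0.8)
print(thd_percent(clipped, fs, f0))
```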

The latest MEMS technology makes it possible to build studio-grade microphones for consumer devices such as laptops. Infineon has demonstrated this with its XENSIV IM69D127 and IM73A135 MEMS microphones, which allow OEMs to build laptops with the next level of audio quality.

LIDAR with MEMS

A solid-state Lidar chip works by emitting laser light from an optical antenna, with a tiny switch turning the antenna on and off. The light reflects off the target, and the same antenna captures the return. For 3D imaging, the Lidar chip has an array of such antennas, and the switches turn them on sequentially.
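
The sequencing can be pictured as a simple nested loop: close one pixel's switch, measure the round-trip time of the return, convert it to range, and move on. The switch and detector callbacks below are hypothetical placeholders rather than a real driver API; only the time-of-flight relation d = c * t / 2 is standard.

```python
SPEED_OF_LIGHT = 299_792_458.0   # m/s

def scan_frame(rows, cols, select_antenna, measure_round_trip_time):
    """Turn on one antenna at a time and convert each measured round-trip
    time into a range, building up a depth map pixel by pixel."""
    depth = [[0.0] * cols for _ in range(rows)]
    for row in range(rows):
        for col in range(cols):
            select_antenna(row, col)                     # close this pixel's MEMS switch
            t_round_trip = measure_round_trip_time()     # seconds, out and back
            depth[row][col] = SPEED_OF_LIGHT * t_round_trip / 2.0
    return depth

# Stub callbacks standing in for real hardware: every pixel returns 200 ns,
# which corresponds to a target roughly 30 m away.
depth_map = scan_frame(4, 4, lambda r, c: None, lambda: 200e-9)
print(depth_map[0][0])
```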

An array of MEMS switches can reduce the cost of high-resolution solid-state Lidar significantly, allowing it to compete with inexpensive chip-based radar and camera systems. This removes a major barrier to the adoption of Lidar in autonomous vehicles.

At present, autonomous highway driving and collision-avoidance systems use inexpensive chip-based radar and cameras as their mainstream building blocks. Lidar navigation systems, by contrast, are unwieldy mechanical devices and expensive, costing thousands of dollars.

However, researchers at the University of California, Berkeley, are working on a new type of high-resolution Lidar chip that may be a game-changer. The new Lidar uses an FPSA or Focal Plane Switch Array, a matrix of micron-scale antennas made of semiconductors. Like the sensors in digital cameras, these antennas gather light. While smartphone cameras boast resolutions of millions of pixels, the FPSA on the new Lidar has a resolution of only 16,384 pixels (a 128 x 128 array). However, this is substantially larger than the 512 pixels of existing FPSAs.

Another advantage of the new FPSA is that its design is scalable. Using the same CMOS or Complementary Metal-Oxide Semiconductor technology that produces computer processors, it should be possible to reach megapixel sizes. According to the researchers, this can lead to a new generation of 3D sensors that are not only immensely powerful but also low-cost. Such sensors would be of great use in smartphones, robots, drones, and autonomous cars.

Surprisingly, the new Lidar system works in much the same way as mechanical Lidar systems. Mechanical Lidars also use lasers to visualize objects several hundred yards away, even in the dark, and they generate high-resolution 3D maps that a vehicle's artificial intelligence uses to distinguish between obstacles such as pedestrians, bicycles, and other vehicles. For over a decade, researchers tried to put these capabilities on a chip without success, until now.

Ideally, the laser would illuminate a large area at once. However, the larger the illuminated area, the weaker the light intensity, which limits the range that can be reached. The researchers therefore had to make a trade-off: to maintain light intensity, they reduced the area that the laser illuminates at any instant.
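
The trade-off is easy to quantify: for a fixed laser power, the intensity on the scene scales inversely with the illuminated area. The numbers below are purely illustrative, not values from the Berkeley work.

```python
def intensity_w_per_m2(laser_power_w: float, illuminated_area_m2: float) -> float:
    """On-target intensity for a fixed optical power spread over an area."""
    return laser_power_w / illuminated_area_m2

power = 0.1                                  # 100 mW, an arbitrary example value
print(intensity_w_per_m2(power, 1e-4))       # 1 cm^2 spot  -> 1000 W/m^2
print(intensity_w_per_m2(power, 1.0))        # 1 m^2 scene  -> 0.1 W/m^2
```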

The new Lidar has an FPSA matrix consisting of tiny optical transmitters, each with a MEMS switch that can turn on and off rapidly. This gives the waveguides time to move from one position to another while channeling the entire laser power through a single antenna at any given time.

MEMS switches are commonly used to route light in communications networks, and the researchers have applied this technology to Lidar for the first time. Compared with thermo-optic switches, the MEMS switches are much smaller, consume far less power, switch faster, and suffer significantly lower optical losses.

Motion Tracking through the MC3672

At this year's MSEC or MEMS & Sensors Executive Congress, mCube exhibited its incredibly small, low-power MC3672 inertial sensor, a three-axis accelerometer measuring only 1.1 x 1.3 mm. The tiny WLCSP-packaged device has low parasitics and opens enormous possibilities for unobtrusive, low-power motion tracking in wearable designs, as well as a completely new set of applications in the future.

Recently, mCube acquired Xsens, which let it couple sensor-fusion software with its tiny accelerometer. This gives it the ability to offer body-motion sensing and capture solutions for health, entertainment, and fitness. The combination also allows it to control and stabilize inertial measurement units in industrial applications.

Most people are familiar with MEMS motion sensors, as tablets, smartphones, and wearables use them widely. The MC3672 accelerometer will open up more applications for these devices in the future, including new areas such as the prevention and diagnosis of illness in medicine. For instance, to visually inspect the throat, stomach, or intestines of a patient, physicians often need to perform invasive and unpleasant procedures.

In the future, patients may be able to swallow a camera pill that wirelessly beams images from inside the body to a display for the physician. Miniature motion sensing built into the camera pill could let medical practitioners navigate the pill by actuating and controlling it, monitoring its location and orientation in real time as it passes through the body. The images the camera captures would enable precise diagnosis and investigation of any problems.

According to Dr. Sanjay Bhandari, Senior VP of mCube, a plethora of new applications will come to life based on granular, precise measurement of the sensor's motion, orientation, tilt, and heading. For instance, some applications will capture motion data, communicate it to cloud software services, and ultimately share it with networked systems for monitoring and analysis.
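
As a small example of what such granular measurement looks like, static tilt can be derived from any three-axis accelerometer reading using standard gravity-vector formulas; the reading below is an illustrative value, not MC3672 output.

```python
import math

def tilt_from_accel(ax: float, ay: float, az: float):
    """Pitch and roll (degrees) from a static three-axis accelerometer
    reading, using gravity as the reference. Valid only when the device
    is not otherwise accelerating."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

# An illustrative reading in g: the device tilted roughly 10 degrees.
print(tilt_from_accel(-0.17, 0.0, 0.98))
```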

Most of the envisaged applications are only possible with motion-sensing systems that are extremely small and draw very little power from a battery or an energy-harvesting arrangement.

Along with low power consumption and small system size, every component in the system must meet these design constraints. The sensor interface uses silicon CMOS-based circuits that amplify and filter the sensor signal and convert it from analog to digital for processing.

mCube's monolithic, single-chip design integrates both the CMOS and the MEMS through a clever extension of a standard CMOS process. This is a reliable procedure for high-volume manufacturing and produces excellent yields. Within the chip, mCube has interconnected the MEMS and the CMOS very efficiently.

In the future, mCube plans to integrate BLE or Bluetooth Low Energy into the MCU in its SIP package, with the goal of realizing an IoMT-on-a-Chip. The company has protected its technology with 100 approved patents.

The acquisition of Xsens brought mCube 3D motion-tracking technology: a high-precision module for sensing motion with 9 degrees of freedom.
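
As a hedged illustration of what such sensor fusion does (the Xsens module's actual algorithms are proprietary), a basic complementary filter blends a gyroscope's short-term accuracy with an accelerometer's drift-free gravity reference:

```python
import math

def complementary_pitch(prev_pitch_deg, gyro_rate_dps, ax, ay, az, dt, alpha=0.98):
    """One update of a basic complementary filter: integrate the gyro for
    short-term accuracy, then pull gently toward the accelerometer's
    gravity-based pitch to cancel long-term gyro drift."""
    accel_pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    gyro_pitch = prev_pitch_deg + gyro_rate_dps * dt
    return alpha * gyro_pitch + (1.0 - alpha) * accel_pitch

# The device is level but the gyro has a +1 deg/s bias. Pure integration
# would drift to 10 degrees over 10 s; the filter settles near 0.5 degrees.
pitch = 0.0
for _ in range(1000):                        # 10 s of updates at 100 Hz
    pitch = complementary_pitch(pitch, 1.0, 0.0, 0.0, 1.0, dt=0.01)
print(pitch)
```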

Redefining MEMS with 3-D Interactive Projection

At the Mobile World Congress 2017, Bosch introduced a combined micro-scanner and projector capable of turning any surface into a virtual user interface. Bosch is the world's oldest and biggest manufacturer of micro-electro-mechanical systems (MEMS), and its combo projector uses infrared light for scanning and lasers for projection.

Engineers currently use MEMS devices in a variety of gadgets, especially where a human-machine interface (HMI) is necessary, including in-car heads-up displays, infotainment, medical devices, robotics, and industrial equipment on the factory floor. With the new BML050 microscanner, Bosch Sensortec has extended its portfolio to include optical microsystems. The move also expands Bosch's market from being only a component supplier to becoming a system supplier as well.

To sense where the user has placed a finger on the projected interactive display, the new Bosch BML050 uses a combination of two MEMS scanning mirrors: one scans the X-direction while the other scans the Y-direction. Sensing the finger also makes use of an infrared laser and an RGB laser.

The integrated module for infrared, red, green, and blue (IR-RGB) is only 6 mm high and is capable of HD resolution. The two MEMS scanning mirrors both project images and collect the reflected light, thereby determining accurately where the user's finger is touching the projected image. According to Bosch, the technique can also be adapted to 3-D scanning by applying time-of-flight calculations to the light reflected from an object.
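
As a rough sketch of how scanned-beam geometry could relate the mirror angles at the moment a finger return is detected to a position on a flat projection surface, the mapping below uses simple trigonometry; the throw distance and angles are assumptions for illustration, not Bosch's algorithm.

```python
import math

def touch_position(theta_x_deg: float, theta_y_deg: float, throw_m: float):
    """For a flat surface perpendicular to the optical axis, a beam
    deflected by (theta_x, theta_y) lands at (throw*tan(x), throw*tan(y))."""
    x = throw_m * math.tan(math.radians(theta_x_deg))
    y = throw_m * math.tan(math.radians(theta_y_deg))
    return x, y

# Mirror deflections of (5, -3) degrees at an assumed 0.3 m throw distance.
print(touch_position(5.0, -3.0, 0.3))
```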

A major advantage over Digital Light Processors (DLPs) is that the Bosch laser-based MEMS scanner is always in focus, even when the projection surface is uneven. According to Stefan Finkbeiner, chief executive officer of Bosch Sensortec, DLPs require thousands of mirrors that need focusing, and the entire arrangement is expensive.

At present, the reference design of the Bosch BML050, although it contains everything needed for use in almost any application, is much larger than the circuit board an OEM would be expected to use. Finkbeiner says that despite this, customers are already integrating the BML050 into their products, which will be on the market by Christmas this year.

The BML050 has a two-mirror system, with one mirror hinged in the X-direction and the other in the Y-direction. The mirror system projects from the module, which measures only 6 x 24 mm and uses 30-lumen lasers. The arrangement allows Bosch to alter the size when using lower-power lasers, or higher-power lasers for, say, industrial-sized images. The reference design for the BML050 contains all the required drivers and processors, including ASICs for driving the mirrors, processing the video, and managing color, along with two PMICs for managing the system and laser power.

According to Finkbeiner, the two-MEMS-mirror architecture is very simple to integrate. For the future, Bosch is therefore planning a sealed module design after further miniaturization, suitable for use in tiny gadgets such as IoT devices and smartphones. Very soon, you may find virtual human-machine interfaces on everything from toys to industrial equipment on the factory floor, robotics, medical devices, and infotainment such as in-car heads-up displays.