What is DFMEA?

If you are just entering the world of design, you will face a DFMEA session sooner or later. DFMEA is an acronym for Design Failure Mode and Effects Analysis. In recent years, companies have adopted DFMEA, a subset of FMEA or Failure Mode and Effects Analysis, as a valuable tool. It helps engineers spot potential risks in a product design before they make any significant investment.

Engineers use DFMEA as a systematic early-warning tool for product design. They use it to make sure the product not only functions as they intend it to, but also keeps users happy. It is like taking a peek into the future and catching design flaws before they cause any major damage. Simply put, DFMEA helps to check the overall design of products and components, figuring out anything that might go wrong and the way to fix it. This tool is especially useful in manufacturing industries, where preventing failure is critical.

To use DFMEA effectively, designers must look for potential design failures, examining them from all angles. Here is how they do it.

They first look for a failure mode, which essentially means how the design could possibly fail. For instance, your computer might freeze up when you open too many programs, which is one mode or type of failure.

Then they look for why the failure mode might happen. This could be due to a defect in the design, or in the quality, system, or application of the part.

Next, the designers look at the effect of the failure, that is, what happens when the failure occurs. In our example, a frozen computer can lead to a frustrated user.

In the last stage, designers look for the severity of the failure. They estimate how bad the failure could be for safety, quality, and productivity. Designers typically look for the worst-case scenarios.
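To make the exercise concrete, here is a minimal sketch of how a team might record these four elements for each worksheet entry and review the worst cases first. The field names and the 1-to-10 severity scale are illustrative assumptions, not a formal standard.

```python
# Minimal sketch of a DFMEA worksheet entry built from the four
# elements described above. The field names and the 1-to-10
# severity scale are illustrative assumptions, not a standard.
from dataclasses import dataclass

@dataclass
class DfmeaEntry:
    failure_mode: str   # how the design could fail
    cause: str          # why the failure might happen
    effect: str         # what happens when it fails
    severity: int       # 1 (negligible) to 10 (worst case)

entries = [
    DfmeaEntry("computer freezes with many programs open",
               "design defect in memory management",
               "frustrated user, lost work", 7),
    DfmeaEntry("enclosure overheats under sustained load",
               "inadequate thermal design",
               "safety hazard to the user", 9),
]

# Review the worst-case scenarios first, as the team typically does.
for e in sorted(entries, key=lambda e: e.severity, reverse=True):
    print(f"[severity {e.severity}] {e.failure_mode} -> {e.effect}")
```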

To put it in a nutshell, DFMEA helps engineers figure out not only potential issues, but also the consequences of the failures. This way, they can prevent failures from happening in the first place.

However, DFMEA is never a one-man show; rather, it is a team effort. Typically, the team has about 4 to 6 members who are fully knowledgeable about the product, and is led by a product design engineer. The team members could include engineers with a materials background, and others from product quality, testing, and analysis. There may also be people from other departments, such as logistics, service, and production.

DFMEA is an essential tool in any design process, but it is especially crucial in industries handling new products and technology, such as software, healthcare, manufacturing, industrial, defense, aerospace, and automotive. DFMEA helps them locate potential failure modes, reducing the risks involved in introducing new technologies and products.

The entire DFMEA exercise is a step-by-step process, and the team must think through each step thoroughly before they move on to the next. It is essential that they identify the failure and its consequences before working out ways to prevent it from happening.

What is Voice UI?

Although we usually talk to other humans, our interactions with inanimate objects have almost always been silent. That is, until the advent of the Voice User Interface, also known as Voice UI or VUI. Voice UI has broken this silence between humans and machines. Today, we have virtual assistants like Siri, Google Assistant, Hound, and Alexa, along with many voice-controlled devices. Most people who own a voice-controlled device say it is like talking to another person.

So, what is Voice UI? Voice UI technology makes it possible for humans to interact with a device or an application through voice commands. As we use digital devices more and more, screen fatigue has become a common experience, and this has driven the development of the voice user interface. The advantages are numerous, primarily hands-free operation and control over the device or application without having to stare at a screen. Five of the world's leading companies, Amazon, Google, Microsoft, Apple, and Facebook, have each developed their own voice-activated AI assistants and voice-controlled devices.

Whether it is a voice-enabled mobile app, an AI assistant, or a voice-controlled device like a smart speaker, voice interactions and interfaces have become incredibly common. For instance, according to a report, 25% of adults in the US own a smart speaker, and 33% of the US population use their voice for searching online.

How does this technology work? Under the hood, several Artificial Intelligence technologies are at work, such as Automatic Speech Recognition, Named Entity Recognition, and Speech Synthesis. The VUI speech components and the backend infrastructure are backed by AI technologies and typically reside in a public or private cloud. It is there that the VUI processes the user's speech. After deciphering and translating the user's intent, the AI technology returns a response to the device.
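To make the flow concrete, here is a minimal sketch of the round trip a voice command takes through such a pipeline. The function bodies are stubs standing in for the cloud AI services named above; none of the names refer to a real API.

```python
# Minimal sketch of a voice UI round trip: speech in, speech out.
# Each function is a stub standing in for a cloud AI service
# (ASR, NER and intent resolution, speech synthesis); the names
# are placeholders, not a real API.

def automatic_speech_recognition(audio: bytes) -> str:
    return "what is the weather in paris"        # stub transcription

def extract_intent_and_entities(text: str) -> dict:
    # Named Entity Recognition plus intent classification.
    return {"intent": "get_weather", "entities": {"city": "paris"}}

def fulfil_intent(intent: dict) -> str:
    return "It is sunny in Paris today."         # stub backend answer

def speech_synthesis(text: str) -> bytes:
    return text.encode()                         # stub text-to-speech

def handle_voice_command(audio: bytes) -> bytes:
    text = automatic_speech_recognition(audio)   # speech -> text
    intent = extract_intent_and_entities(text)   # text -> meaning
    reply = fulfil_intent(intent)                # meaning -> response
    return speech_synthesis(reply)               # response -> speech

print(handle_voice_command(b"raw audio").decode())
```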

The above covers the basics of Voice UI technology, albeit in a nutshell. For a better user experience, most companies also include additional sound effects and a graphical user interface. The sound effects and visuals help the user know whether the device is listening, processing the request, or responding.

Today, Voice UI technology is widespread, and it is available in many day-to-day devices like smartphones, desktop computers, laptops, wearables, smartwatches, smart TVs, sound systems, smart speakers, and Internet of Things devices. Like any technology, however, it has both advantages and disadvantages.

First, the advantages. VUI is faster than having to type the commands in text, and more convenient. Not many are comfortable typing commands, but almost all can use their voice to request a task from the VUI device. Voice commands, being hands-free, are useful while cooking or driving. Moreover, you do not need to face or look at the device to send voice commands.

Next, the disadvantages. There are privacy concerns, as anyone nearby can overhear your commands. AI technology is still in its infancy and is prone to misinterpretation or inaccuracy, especially when differentiating homophones like ‘their’ and ‘there’. Moreover, voice assistants may find it difficult to decipher commands in noisy public places.

What is UWB Technology?

UWB is the acronym for Ultra-Wideband, a 132-year-old communications technology. Engineers are revitalizing this old technology for connecting wireless devices over short distances. Although more modern technologies like Bluetooth are available for the purpose, industry observers are of the opinion that UWB can prove more versatile and successful than Bluetooth. According to them, UWB is faster, uses less power, is more secure, provides superior device ranging and location discovery, and is cheaper.

Therefore, companies are researching and investing in UWB technology, including names like XtremeSpectrum, Bosch, Sony, NXP, Xiaomi, Samsung, Huawei, Apple, Time Domain, and Intel. In fact, Apple is already using a UWB chip in the iPhone 11, which gives it superior positioning accuracy and ranging, as the chip uses time-of-flight measurements.
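The arithmetic behind time-of-flight ranging is simple. Here is a minimal sketch; the timing figures are illustrative assumptions, not taken from any particular chip.

```python
# Two-way time-of-flight (ToF) ranging: the signal travels to the
# other device and back, so distance = c * round_trip_time / 2.
# The timing figures below are illustrative assumptions.

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def distance_m(round_trip_s: float) -> float:
    return SPEED_OF_LIGHT * round_trip_s / 2.0

print(f"{distance_m(20e-9):.2f} m")        # 20 ns round trip: ~3 m apart
print(f"{distance_m(1e-9) * 100:.1f} cm")  # 1 ns of error: ~15 cm
```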

Marconi’s first man-made radio, using spark-gap transmitters, employed UWB for wireless communication. The government banned UWB signals for commercial use in 1920. However, starting in 1992, the scientific community began paying greater attention to UWB technology.

UWB, or Ultra-Wideband, technology offers a protocol for short-range wireless communications, similar to what Wi-Fi or Bluetooth offer. It uses short-pulse radio waves over a spectrum of frequencies ranging from 3.1 to 10.6 GHz, and its applications do not require licensing.

A signal qualifies as UWB when its bandwidth is equal to or larger than 500 MHz, or when its fractional bandwidth, the bandwidth divided by the center frequency, is greater than 20%. Compared to conventional narrowband systems, the very wide bandwidth of UWB signals leads to superior performance indoors. This is because the wide bandwidth offers significantly greater immunity from channel effects in dense environments. It also allows very fine time-space resolutions, resulting in highly accurate indoor positioning of UWB devices.
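Here is a minimal sketch of that definition in code, using the common expression for fractional bandwidth, 2(fH − fL)/(fH + fL); the example frequencies are illustrative.

```python
# Minimal sketch of the UWB definition above: a signal is UWB if
# its bandwidth is at least 500 MHz, or if its fractional bandwidth
# 2*(fH - fL)/(fH + fL) exceeds 20%. Example frequencies are
# illustrative.

def is_uwb(f_low_hz: float, f_high_hz: float) -> bool:
    bandwidth = f_high_hz - f_low_hz
    center = (f_high_hz + f_low_hz) / 2.0
    fractional_bw = bandwidth / center   # same as 2*(fH-fL)/(fH+fL)
    return bandwidth >= 500e6 or fractional_bw > 0.20

print(is_uwb(3.1e9, 4.0e9))    # True: 900 MHz of bandwidth
print(is_uwb(2.40e9, 2.48e9))  # False: a narrowband 80 MHz channel
```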

As its spectral density is low, often below the environmental noise floor, UWB offers secure communications with a low probability of signal detection. UWB allows transmission at high data rates over short distances. Moreover, UWB systems can comfortably co-exist with narrowband systems already deployed. UWB systems allow two different approaches for data transmission.

The first approach uses ultra-short pulses in the picosecond range, often called impulse radio transmission, covering all frequencies simultaneously. The second approach uses OFDM, or orthogonal frequency division multiplexing, to subdivide the entire UWB bandwidth into a set of broadband channels.

While the first approach is cost-effective, it suffers some degradation of the signal-to-noise ratio. Impulse radio transmission does not involve a carrier; therefore, it uses a simpler transceiver architecture than traditional narrowband transceivers. For instance, the UWB antenna radiates the signal directly. An easy-to-generate UWB pulse is the Gaussian monocycle or one of its derivatives.
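For illustration, here is a minimal sketch of generating a Gaussian monocycle, the first derivative of a Gaussian pulse. The pulse parameter is an assumption; a real design would pick it to fit the regulatory spectral mask.

```python
# Minimal sketch: generating a Gaussian monocycle, the first
# derivative of a Gaussian pulse, as used in impulse radio.
# The pulse parameter tau is an illustrative assumption.
import numpy as np

def gaussian_monocycle(t: np.ndarray, tau: float) -> np.ndarray:
    """First derivative of a Gaussian, normalized to unit peak."""
    x = t / tau
    pulse = -x * np.exp(-x**2 / 2.0)
    return pulse / np.max(np.abs(pulse))

tau = 100e-12                          # ~100 ps pulse parameter
t = np.linspace(-1e-9, 1e-9, 1001)     # a 2 ns observation window
p = gaussian_monocycle(t, tau)
print(p.min(), p.max())                # approximately -1.0 and +1.0
```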

The second approach offers better performance, as it uses the spectrum significantly more effectively. Although the complexity is higher, since the system requires more signal processing, it substantially improves the data throughput. However, the higher performance comes at the expense of higher power consumption. The application defines the choice between the two approaches.
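As a rough sketch of the channelization idea, the snippet below splits the UWB spectrum into broadband channels. The 528 MHz channel width is borrowed from the MB-OFDM (ECMA-368) band plan, and the simple end-to-end alignment is an assumption for illustration, not a detail from this article.

```python
# Rough sketch of the OFDM approach: subdividing the UWB spectrum
# into broadband channels. The 528 MHz channel width is borrowed
# from the MB-OFDM (ECMA-368) band plan; the simple end-to-end
# alignment below is an assumption for illustration.

CHANNEL_WIDTH_HZ = 528e6
BAND_START_HZ = 3.1e9
BAND_STOP_HZ = 10.6e9

num_channels = int((BAND_STOP_HZ - BAND_START_HZ) // CHANNEL_WIDTH_HZ)
for n in range(num_channels):   # yields 14 channels
    lo = BAND_START_HZ + n * CHANNEL_WIDTH_HZ
    hi = lo + CHANNEL_WIDTH_HZ
    print(f"channel {n + 1}: {lo / 1e9:.3f}-{hi / 1e9:.3f} GHz")
```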

Touch-sensing HMI

A key element in the consumer appeal of wearable devices is their touch-sensing HMI, or human-machine interface, which provides an intuitive and responsive way of interacting via sliders and touch buttons. Wearable devices include earbuds, smart glasses, and smartwatches with a small touchscreen.

Fierce competition exists in the market for such wearable devices, continually driving innovation. The two major features over which manufacturers typically battle for supremacy, and which matter most to consumers, are run time between battery charges and form factor. Consumers demand a long run time between charges, and they want a balance between convenience, comfort, and a plethora of features, along with a sleek and attractive design. This is a considerable challenge for designers and manufacturers.

For instance, while the user can turn off almost all functions in a wearable device like a smartwatch for long periods between user activity, the touch-sensing HMI must always remain on. This is because the touch intentions of the user are randomly timed. They can touch-activate their device any time they want to—there is no pattern that allows the device to know in advance when the user is about to touch-activate it.

Therefore, the device must continuously scan to detect a touch for the entire time it is powered up, leading to power consumption by the HMI subsystem, even during the low-power mode. The HMI subsystem is, therefore, a substantial contributor to the total power consumed by the device. Reducing the power consumption of the touch system can result in a substantial increase in the run-time between charges of the device.

Most wearable devices use the touch-sensing HMI as the typical method for waking up from a sleep state. These devices generally conserve power by entering a deep-sleep mode in which only a low-power touch-detect function keeps running. In this mode, scanning takes place at a low refresh rate, just sufficient to detect a touch event. In some devices, the user may be required to press and hold a button or tap the screen momentarily to wake the device.

In such cases, the amount of power saved depends significantly on how slowly the sensor can be refreshed. There is therefore always a tradeoff between a quick response to user touch and the device's power consumption. Moreover, touch HMI systems are notorious for the substantial amount of power they consume.
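A back-of-the-envelope calculation shows the shape of this tradeoff. All the figures below are illustrative assumptions, not vendor data.

```python
# Back-of-the-envelope look at the refresh-rate/power tradeoff.
# All figures are illustrative assumptions, not vendor data.

SCAN_CHARGE_UC = 2.0    # charge consumed per scan, microcoulombs
SLEEP_CURRENT_UA = 1.0  # baseline current between scans, microamps

def average_current_ua(refresh_hz: float) -> float:
    # Average current = charge per scan * scans per second + baseline.
    return SCAN_CHARGE_UC * refresh_hz + SLEEP_CURRENT_UA

for hz in (100, 30, 10):  # active rate vs. two low-power rates
    print(f"{hz:>3} Hz -> {average_current_ua(hz):6.1f} uA average")
# Dropping from 100 Hz to 10 Hz cuts HMI power roughly tenfold,
# at the cost of up to 100 ms of extra latency before a touch is seen.
```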

Commercial touch-sensing devices typically use microcontrollers. Their architecture mostly comprises a CPU with volatile and non-volatile memory, an AFE, or analog front end, that interfaces with the touch-sensing element, digital logic functions, and I/Os.

The scanning operation typically involves the CPU initializing the touch-sensing system, configuring the sensing element, scanning the sensor, and processing the results to determine whether a touch event has occurred.

In low-power mode, the device consumes less power because the system's refresh rate is reduced. Fewer scans occur each second, just enough to detect whether a touch event has occurred, as sketched below.
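Putting the pieces together, here is a minimal sketch of such a scan loop, switching between an active rate and a deep-sleep rate. The callables are hypothetical placeholders, not a real vendor API.

```python
# Minimal sketch of the scan loop described above: initialize and
# configure the sensing element once, then scan at a rate that
# depends on the power mode. The callables are hypothetical
# placeholders, not a real vendor API.
import time

ACTIVE_HZ = 100      # responsive scanning while the user interacts
DEEP_SLEEP_HZ = 10   # just enough to catch a wake-up touch

def run_touch_hmi(init, configure, scan_sensor, process):
    init()                         # initialize the touch subsystem
    configure()                    # configure the sensing element
    refresh_hz = DEEP_SLEEP_HZ     # power up into deep-sleep scanning
    idle_scans = 0
    while True:
        raw = scan_sensor()        # one scan of the sensor
        if process(raw):           # did a touch event occur?
            refresh_hz = ACTIVE_HZ       # wake: scan quickly again
            idle_scans = 0
        else:
            idle_scans += 1
            if idle_scans > 5 * refresh_hz:  # ~5 s with no touch
                refresh_hz = DEEP_SLEEP_HZ   # drop back to deep sleep
        time.sleep(1.0 / refresh_hz)         # wait for the next scan
```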