
What are Cold-Cathode Devices?

Some devices, like thermionic valves, contain a cathode that requires heating up before the device can work. However, other devices do not require a hot cathode to function. These devices have two electrodes within a sealed glass envelope that contains a low-pressure gas like neon. With a sufficiently high voltage applied to the electrodes, the gas ionizes, producing a glow around the negative electrode, also known as the cathode. Depending on the gas in the tube, the cathode glow can be orange (for neon), or another color. Since these devices do not require a hot cathode, they are known as cold-cathode devices. Based on this effect, scientists have developed a multitude of devices.

The simplest of cold-cathode devices is the neon lamp. Before the advent of LEDs, neon lamps were the go-to indicator lights. Neon lamps ionize at around 90 V, which is the strike voltage or breakdown voltage of the neon gas within the lamp. Once ionized, the gas will continue to glow at a voltage of around 65 V, which is its maintain or sustain voltage. This difference between the strike voltage and the sustain voltage means the lamp has a negative-resistance region in its operating curve. Hence, users often build a relaxation oscillator from just a neon lamp, a capacitor, and a resistor.
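
As a rough sketch of how such an oscillator behaves, assume the lamp is fed from a supply Vs above the strike voltage through a resistor R, with a capacitor C across the lamp. Ignoring the lamp’s brief conduction time, the capacitor charges exponentially from the maintain voltage back up to the strike voltage, giving an approximate period of:

```latex
T \approx R\,C\,\ln\!\left(\frac{V_s - V_\text{maintain}}{V_s - V_\text{strike}}\right),
\qquad \text{e.g. } V_s = 120\ \text{V},\ R = 1\ \text{M}\Omega,\ C = 100\ \text{nF}
\;\Rightarrow\; T \approx 0.1\,\ln\tfrac{55}{30} \approx 61\ \text{ms}.
```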

Another everyday use for the neon lamp is as a power indicator for the AC mains. In practice, as an AC power indicator, the neon lamp requires a series resistance of around 220 kΩ to 1 MΩ to limit the current flow through it, which also extends its life significantly. Since the electrodes in a neon lamp are symmetrical, using it in an AC circuit causes both electrodes to glow equally.
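
As a rough, hedged estimate, assuming 230 V mains and ignoring the small capacitive component of the current, the series resistor sets the lamp current to roughly:

```latex
I \approx \frac{V_\text{mains} - V_\text{sustain}}{R}
\approx \frac{230\ \text{V} - 65\ \text{V}}{220\ \text{k}\Omega}
\approx 0.75\ \text{mA}.
```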

Neon signs, such as those in Times Square and Piccadilly Circus, also use the same effect. Instead of a short tube like in the neon lamp, neon signs use a long tube shaped in the specific design of the application. Depending on the display color, the tube may contain neon or another gas, together with a small amount of mercury. By applying a fluorescent phosphor coating to the inside of the glass tube, it is possible to produce still more colors. Due to the significant separation between the two electrodes in neon signs, they require a high strike voltage of around 30kV.

Another application of cold-cathode devices is the popular Nixie tube. Although seven-segment LED displays have now largely replaced them, Nixie tubes remain popular for their warm, neon-like glow. Typically, they have ten electrodes, each in the shape of a numeral. In use, the circuit energizes the electrode required for displaying a particular number. The Nixie tube produces a very natural-looking display, and many people find it more attractive than the stick-like digits of seven-segment LED displays.

Photographers still use flash tubes to illuminate the scenes they are capturing, typically as camera flashes and strobes. Flash tubes use xenon gas as their filling. Apart from the two regular main electrodes, flash tubes have a smaller trigger electrode near one or both of the main electrodes. In use, the main electrodes have a few hundred volts between them. For triggering, the circuit applies a high-voltage pulse to the trigger electrode. This causes the gas between the two main electrodes to ionize rapidly, giving off a bright white flash.

Sensors at the Heart of IoT

IoT, or the Internet of Things, depends on sensors. So much so, there would not be any IoT, IIoT, or, for that matter, any type of Industry 4.0 at all without sensors. As the same factors apply to all three, we will use IoT as a simplification. First, however, some basic definitions.

As a simple, general definition, IoT involves devices intercommunicating with useful information. As their names suggest, for IIoT and Industry 4.0, these devices are mainly located in factories. While IIoT is a network of interconnected devices and machines on a plant floor, Industry 4.0 goes a step further. Apart from incorporating IIoT, Industry 4.0 expands on the network, including higher level systems as well. This allows Industry 4.0 to process and analyze data from IIoT, while using it for a wider array of functions, including looping it back into the network for control.

However, the entire network has sensors as its basis, supplying it with the necessary raw data. Typically, the output from sensors is in the form of analog electrical signals, and this is where the fundamental distinction between data and information arises.

This distinction is easier to explain with an example. For instance, a temperature sensor, say, a thermistor, shows electrical resistance that varies with temperature. However, that resistance is in the form of raw data, in ohms. It has no meaning to us, until we are able to correlate it to degrees.

Typically, we measure the resistance with a bridge circuit, effectively converting the resistance to a voltage. Next, we apply the derived voltage to measuring equipment that we have calibrated to show voltage as degrees. This way, we have effectively converted data into information useful to us humans. Alternatively, we can use the derived voltage to control an electric heater or to inform a predictive maintenance system of the temperature of a motor.
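
As a minimal sketch of this data-to-information step, the snippet below assumes a hypothetical 10 kΩ NTC thermistor with a beta of 3950 on the low side of a simple voltage divider, read by a 12-bit ADC; the component values are illustrative, and a real design might instead use the bridge circuit described above.

```python
import math

# Assumed, illustrative components: 10 kΩ NTC thermistor (beta = 3950) on the
# low side of a divider with a 10 kΩ fixed resistor, read by a 12-bit ADC.
R_FIXED = 10_000.0   # fixed divider resistor, ohms
R0 = 10_000.0        # thermistor resistance at 25 °C, ohms
BETA = 3950.0        # beta constant from the (hypothetical) datasheet
T0 = 298.15          # 25 °C in kelvin
ADC_MAX = 4095       # full scale of a 12-bit ADC

def adc_to_celsius(adc_count: int) -> float:
    """Convert a raw ADC count (data) into degrees Celsius (information)."""
    v_ratio = adc_count / ADC_MAX                   # fraction of the supply voltage
    r_therm = R_FIXED * v_ratio / (1.0 - v_ratio)   # divider equation -> resistance
    # Beta equation: 1/T = 1/T0 + ln(R/R0) / beta
    t_kelvin = 1.0 / (1.0 / T0 + math.log(r_therm / R0) / BETA)
    return t_kelvin - 273.15

print(round(adc_to_celsius(2048), 1))   # mid-scale reading works out to about 25 °C
```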

But information, once we have derived it from raw data, has almost endless uses. This is the realm of IoT, intercommunicating useful information among devices.

To be useful for IoT, we must convert the analog data from a sensor to a digital form. Typically, the electronics required for doing this is the ADC or Analog to Digital Converter. With IoT applications growing rapidly, users are also speeding up their networks to handle ever larger amounts of data while making them more power efficient.

Engineers have also developed a method for handling large amounts of data that does not require the IoT devices themselves to have large amounts of memory. The devices send their data over the internet to external data centers, the cloud, where other computers handle the storage and analysis of the data. However, this requires higher bandwidth and involves latency.

This is where the smart sensor makes its entry. Smart sensors share the workload. A sensor is deemed smart when it is embedded within a package that has electronics for preprocessing, such as for signal conditioning, analog to digital conversion, and wireless transmission of the data. Lately, smart sensors are also incorporating AI or Artificial Intelligence capabilities.

What is Industrial Ethernet?

Earlier, we had a paradigm shift in the industry related to manufacturing. This was Industry 3.0, and, based on information technology, it boosted automation, enhanced productivity, improved precision, and allowed higher flexibility. Today, we are at the foothills of Industry 4.0, with ML or machine learning, M2M or machine-to-machine communication, and smart technology like AI or artificial intelligence. There is a major difference between the two. While Industry 3.0 offered information to humans, allowing them to make better decisions, Industry 4.0 uses digital information to optimize processes, mostly without human intervention.

With Industry 4.0, it is possible to link the design office directly to the manufacturing floor. For instance, using M2M communications, CAD, or computer-aided design, systems can communicate directly with machine tools, programming them to make the necessary parts. Similarly, machine tools can provide feedback to CAD, sending information about challenges in the production process, so that the designs can be modified for easier fabrication.

Manufacturers use the Industrial Internet, or IIoT, the Industrial Internet of Things, to build their Industry 4.0 solutions. The network plays an important role, for instance, in forming feedback loops: sensors monitor processes in real time, and the data thus collected can effectively control and enhance the operation of the machines.

However, it is not simple to implement IIoT. One of the biggest challenges is the cost of investment. But this investment can be justified by better design and manufacturing processes that lead to cost savings through increased productivity and fewer product failures. In fact, reducing capital outflows is one way to accelerate adoption of Industry 4.0. Another way could be to use a relatively inexpensive but proven and accessible communication technology, like Ethernet.

Ethernet is one of the wired networking options in wide use all over the world. It has good IP interoperability and huge vendor support. Moreover, PoE, or Power over Ethernet, uses the same set of cables for carrying data as well as power to connected cameras, actuators, and sensors.

Industrial Ethernet, using rugged cables and connectors, builds on the consumer version of the Ethernet, thereby bringing a mature and proven technology to industrial automation. With the implementation of Industrial Ethernet, it is possible to not only transport vital information or data, but also remotely supervise machines, controllers, and PLCs on the shop floor.

The standard Ethernet protocol has high and unpredictable latency, mainly due to its tendency to drop packets. This makes it unsuitable for rapidly moving assembly lines that must run in synchronization. Industrial Ethernet hardware, on the other hand, uses deterministic, low-latency industrial protocols, like PROFINET, Modbus TCP, and EtherNet/IP.
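
To make the protocol layer concrete, here is a minimal sketch of a Modbus TCP "Read Holding Registers" request built by hand over a plain socket. The PLC address, unit ID, and register numbers are assumptions, and a production system would normally use a maintained Modbus library rather than hand-built frames.

```python
import socket
import struct

# Hypothetical target PLC and register map, for illustration only.
PLC_IP, PLC_PORT = "192.168.0.10", 502
UNIT_ID, START_REG, REG_COUNT = 1, 0, 2

def read_holding_registers() -> list[int]:
    """Send one Modbus TCP 'Read Holding Registers' (0x03) request and parse the reply."""
    pdu = struct.pack(">BHH", 0x03, START_REG, REG_COUNT)               # function, address, count
    mbap = struct.pack(">HHHB", 0x0001, 0x0000, len(pdu) + 1, UNIT_ID)  # MBAP header
    with socket.create_connection((PLC_IP, PLC_PORT), timeout=2.0) as sock:
        sock.sendall(mbap + pdu)
        resp = sock.recv(256)
    byte_count = resp[8]            # 7-byte MBAP, then function code, then byte count
    return list(struct.unpack(f">{byte_count // 2}H", resp[9:9 + byte_count]))

if __name__ == "__main__":
    print(read_holding_registers())
```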

For Industrial Ethernet deployments, the industry uses hardened versions of the CAT 5e cable, while Gigabit Ethernet installations use CAT 6 cable. A CAT 5e cable has eight wires formed into four twisted pairs. This twisting limits crosstalk and signal interference, and each pair supports a duplex connection. Gigabit Ethernet, being a high-speed system, uses all four pairs for carrying data. Lower-throughput systems can use two twisted pairs for data and the other two for carrying power or conventional phone service.

What are Olfactory Sensors?

We depend on our five senses to help us understand the world around us. Each of the five senses—touch, sight, smell, hearing, and taste—contributes individual information to our brains, which then combines them to create a better understanding of our environment.

Today, with the help of technology like ML, or machine learning, and AI, or Artificial Intelligence, we can make complex decisions with ease. ML and AI also empower machines to better understand their surroundings. Equipping them with sensors only augments their information-gathering capabilities.

So far, most sensory devices, like proximity and light-based ones, remain limited as they need clear physical contact or line of sight to function correctly. However, with today’s technology trending towards higher complexity, it is difficult to rely solely on simple sensing technology.

Olfaction, or the sense of smell, functions by chemically analyzing low concentrations of molecules suspended in the air. The biological nose has receptors for this activity, which, on encountering these molecules, transmit signals to the parts of the brain that are responsible for the detection of smell. A higher concentration of receptors means higher olfaction sensitivity, and this varies between species. For instance, compared to the human nose, a dog’s nose is far more sensitive, allowing a dog to identify chemical compounds that humans cannot notice.

Humans have recognized this superior olfactory ability in dogs and put it to use for various tasks. One advantage olfaction has over sight is that it does not rely on line of sight for detection. It is possible to detect odors from objects that are obscured, hidden, or otherwise not visible. This means olfactory sensor technology can work without requiring invasive procedures, which makes olfactory sensors ideally suited for a range of applications.

With advanced technology, scientists have developed artificial smell sensors to mimic this extraordinary natural ability. The sensors can analyze chemical signatures in the air, and thereby unlock newer levels of safety, efficiency, and early detection in places like the doctor’s office, factory floors, and airports.

The healthcare industry holds the most exciting applications for olfactory sensors. This is because medical technology depends on early diagnosis to provide the most effective clinical outcomes for patients. Conditions like diabetes and cancer cause detectable olfactory changes in the body’s chemistry. Using olfactory sensors to detect these changes in body odor is non-invasive and provides an early diagnosis that can significantly improve the chances of effective treatment and recovery.

The industry is also adopting olfactory sensors. Industrial processes often produce hazardous byproducts. With olfactory sensors around, it is easy to monitor chemical conditions in the air and highlight the buildup of harmful gases that can be dangerous beyond a certain level.

As the sense of smell does not require physical contact, it is ideal for detection in large spaces. For instance, olfactory sensors are ideal for airport security, where they can collect information about passengers and their belongings as they pass by. All they need is a database of chemical signatures along with processing power to analyze many samples in real-time.
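
A hedged sketch of the matching step such a system might perform: compare one reading from a small gas-sensor array against a database of known chemical signatures and report the closest match. The sensor channels and signature values below are invented purely for illustration.

```python
import math

# Invented, normalized responses of a four-channel gas-sensor array to known
# compounds; a real database would come from calibration measurements.
SIGNATURES = {
    "acetone": [0.82, 0.10, 0.35, 0.05],
    "ammonia": [0.12, 0.88, 0.20, 0.15],
    "methane": [0.05, 0.15, 0.10, 0.90],
}

def best_match(reading: list[float]) -> tuple[str, float]:
    """Return the signature closest to the reading (Euclidean distance)."""
    def distance(signature: list[float]) -> float:
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(reading, signature)))
    name = min(SIGNATURES, key=lambda key: distance(SIGNATURES[key]))
    return name, distance(SIGNATURES[name])

sample = [0.80, 0.12, 0.33, 0.07]   # one sniff from the sensor array
print(best_match(sample))           # closest entry is 'acetone'
```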

High-Voltage TVS Diodes as IGBT Active Clamp

Most high-voltage applications like power inverters, modern electric vehicles, and industrial control systems use IGBTs or Insulated Gate Bipolar Transistors, as they offer high-efficiency switching. However, as power densities are constantly on the rise in today’s electronics, the systems are subjected to greater demands. This necessitates newer methods of control. Littelfuse has developed new TVS diodes as an excellent choice to protect circuits against overvoltages when IGBTs turn off.

Most electronic modules and converter circuits contain parasitic inductances that are practically impossible to eliminate, and their influence on the system’s behavior cannot be ignored. During commutation, the current changes rapidly as the IGBT turns off, and the parasitic inductance produces a high voltage overshoot at the collector terminal.
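
In simplified form, the overshoot adds the inductive voltage to the DC-link voltage. With assumed, representative numbers:

```latex
V_{CE,\text{peak}} \approx V_{DC} + L_\sigma \frac{di}{dt},
\qquad \text{e.g. } L_\sigma = 50\ \text{nH},\ \frac{di}{dt} = 2\ \text{kA}/\mu\text{s}
\;\Rightarrow\; L_\sigma \frac{di}{dt} = 100\ \text{V}.
```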

The turn-off gate resistance of the IGBT, in principle, affects the speed of commutation and the turn-off overvoltage. Engineers typically use this technique at lower power levels. However, they must match the turn-off gate resistance to overload conditions, short circuits, and temporary increases in the link circuit voltage. In regular operation, this approach typically increases the switching losses and turn-off delays in the IGBTs, reducing the usability and/or efficiency of the module. Therefore, high-power modules cannot use this simple technique.

The above problem has led to the development of a two-stage turn-off, with slow turn-off and soft-switch-off driver circuits that operate with a switchable gate resistance. In regular operation, the IGBT is turned off through a gate resistor of low ohmic value, as this minimizes the switching losses. For handling surge currents or short circuits, the driver switches over to a high-ohmic gate resistor. However, this also means that normal and fault conditions must be detected reliably.

Traditionally, the practice is to use an active clamp diode to protect the semiconductor during a transient overload. The high voltage causes a current to flow through the diode until the voltage transient dissipates. This also means the clamping diode is never subjected to recurrent pulses during operation; repetitive operation is limited by the IGBT and its driver power, as both must absorb the excess energy. With an active clamp, the collector potential is fed back directly to the gate of the IGBT via an element with an avalanche characteristic.

The clamping element forms the feedback branch. Typically, it is made up of a series string of TVS or Transient Voltage Suppression diodes. When the collector-emitter voltage of the IGBT exceeds the breakdown voltage of the clamping diodes, a current flows via the feedback branch into the gate of the IGBT. This raises the gate potential of the IGBT, reducing the rate of change of current at the collector and stabilizing the condition. The design of the clamping diodes then determines the voltage across the IGBT.

As the IGBT operates in the active region of its output characteristics, the energy stored in the stray inductance is converted to heat in the device. The clamping process continues until the stray inductance is demagnetized. Either several low-voltage TVS diodes in series or a single TVS diode rated for high voltage can provide the active clamping solution.
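
As a rough sizing sketch, the clamping elements must absorb the energy stored in the stray inductance each time they operate, while their combined breakdown voltage stays safely below the IGBT’s rated collector-emitter voltage. With assumed example values:

```latex
E_\text{clamp} \approx \tfrac{1}{2} L_\sigma I^2
= \tfrac{1}{2}\,(50\ \text{nH})(400\ \text{A})^2 = 4\ \text{mJ},
\qquad \sum V_{BR,\text{TVS}} < V_{CES}.
```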

E-Fuse Future Power Protection

High-voltage eMobility applications are on the rise. Traditionally, these have relied on non-resettable fuses, and sometimes on mechanical relays or contactors. However, that is now changing: semiconductor-based resettable fuses, or eFuses, are replacing traditional fuses.

These innovative eFuses represent a significant trend in safeguarding hardware and users in high-voltage and high-power scenarios. Vishay has announced a reference design for an eFuse that can handle high power loads. They have equipped the new eFuse with SiC MOSFETs and a VOA300 optocoupler. The combination can handle up to 40 kW of continuous power load. The design is capable of operating at full power with losses of less than 30 W without active cooling. The eFuse incorporates essential features like continuous current monitoring, a preload function, and rapid overcurrent protection.

Vishay has designed the eFuse to manage the safe connection and disconnection of a high-voltage power source. For instance, the eFuse can safely connect or disconnect various vehicle loads to and from a high-energy battery pack. The eFuse uses SiC MOSFETs as its primary switches, and these are capable of continuous operation up to 100 A. The user can predefine a current limit. When the current exceeds this limit, the eFuse rapidly disconnects the load from the power source, safeguarding the user and the power source or battery pack. In addition, the presence of a short circuit or an excessive load capacitance during power-up causes the eFuse to initiate an immediate shutdown.
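
The trip decision can be illustrated with a simplified, hypothetical control loop; the thresholds and the sensing and gate-drive functions below are placeholders and do not represent Vishay's actual firmware.

```python
import time

# Illustrative thresholds only; a real eFuse derives these from the application.
CURRENT_LIMIT_A = 100.0   # user-defined continuous current limit
FAST_TRIP_A = 400.0       # immediate trip level, e.g. a short circuit

def read_shunt_current() -> float:
    """Placeholder for reading the shunt-resistor current-sense amplifier."""
    raise NotImplementedError

def open_mosfets() -> None:
    """Placeholder for switching the back-to-back SiC MOSFETs off."""
    raise NotImplementedError

def efuse_loop(overload_timeout_s: float = 0.01) -> None:
    """Disconnect instantly on a short circuit, or after a sustained overload."""
    overload_since = None
    while True:
        current = read_shunt_current()
        if current >= FAST_TRIP_A:
            open_mosfets()                   # short circuit: disconnect at once
            return
        if current > CURRENT_LIMIT_A:
            overload_since = overload_since or time.monotonic()
            if time.monotonic() - overload_since > overload_timeout_s:
                open_mosfets()               # sustained overload: disconnect
                return
        else:
            overload_since = None            # current back within limits
```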

The basic design of the eFuse is in the form of a four-layer, double-sided PCB or printed circuit board of 150 mm x 90 mm. Each layer has thick copper of 70 µm thickness, as against 35 µm for regular PCBs. The board has some connectors extending beyond its edges. The top side of the PCB has all the high-voltage circuitry, control buttons, status LEDs, multiple test points, and connectors. The PCB’s bottom side has the low-voltage control circuitry. It is also possible to control the eFuse remotely via a web browser.

To ensure safety, the user must enable the low-voltage power supply first, and only then the high-voltage supply on the input. For input voltages exceeding 50 V, an LED indicator lights up on the board. Vishay has added two sets of six SiC MOSFETs, with three connected in parallel, in a back-to-back configuration. This ensures the eFuse can handle current flow in both directions. A current-sensing shunt resistor, the Vishay WSLP3921, monitors the current flowing to the load; Vishay has positioned it strategically between the two parallel sets of MOSFETs.

Vishay has incorporated convenient control options in the eFuse. Users can operate the controls via the push buttons on the PCB, or by using the external controller, Vishay MessWeb. Either way unlocks access to an expanded array of features. Alternatively, the user can integrate the eFuse seamlessly into a CAN bus-based system by using an additional chipset in conjunction with the MessWeb controller. Vishay claims to have successfully tested its reference eFuse design.

IoT Sensor Design

Designers are progressively integrating electronics into nearly every system possible, thereby imbuing these systems with a degree of intelligence. Nevertheless, to meet the intelligence requirements posed by diverse business applications, especially in healthcare, consumer settings, industrial sectors, and building environments, there is a growing necessity to incorporate a multitude of sensors.

These sensors now have a common name: IoT, or Internet of Things, sensors. Typically, they must be of a diverse variety, especially if they are to minimize errors and enhance insights. As the sensors gather data, users combine their outputs through sensor fusion and build ML or Machine Learning algorithms and AI or Artificial Intelligence around these fused data streams. They do this for many modern applications, including advanced driver safety and autonomous driving, industrial and worker safety, security, and audience insights.

Other capabilities are also emerging. These include TSN, or time-sensitive networking, with high-reliability, low-latency, and network-determinism features, as seen in the latest wireless communication devices conforming to modern Wi-Fi and 5G standards. To implement these capabilities, sensor modules must offer ultra-low latency at high throughput. Without reliable sensor data, it is practically impossible to implement these features.

Turning any sensor into an IoT sensor requires effectively digitizing its output while deploying the sensor alongside communication hardware and placing the combination in a location suitable for gathering useful data. This is the typical use case for sensors in an industrial location, suitable for radar, proximity sensors, and load sensors. In fact, sensors are now tracking assets like autonomous mobile robots working in facilities.

IoT system developers and sensor integrators are under increasing pressure to reduce integration errors through additional processing circuits. Another growing concern is sensor latency. Users are demanding high-resolution data accurate to 100s of nanoseconds, especially in proximity sensor technologies following the high growth of autonomous vehicles and automated robotics.

Such new factors are leading to additional considerations in IoT sensor design. Two key trends in the design of sensors are footprint reduction and enhancing their fusion capabilities. As a result, designers are integrating multiple sensors within a single chip. This is a shift towards a new technology known as SoC or system-on-chip.

Manufacturers are also using MEMS technology to fabricate sensors for position and inertial measurements, such as gyroscopes and accelerometers. Although MEMS technology has the advantage of fabrication in a semiconductor process alongside digital circuits, there are sensors for which this technology is not viable.
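
As one small, self-contained example of the sensor fusion mentioned above, the sketch below blends a MEMS gyroscope rate with an accelerometer tilt estimate using a classic complementary filter. The blend factor and the sample values are assumptions for illustration only.

```python
import math

def complementary_filter(angle_prev: float, gyro_rate: float,
                         accel_x: float, accel_z: float,
                         dt: float, alpha: float = 0.98) -> float:
    """Fuse gyro and accelerometer data into one tilt angle, in degrees."""
    gyro_angle = angle_prev + gyro_rate * dt                   # fast but drifts
    accel_angle = math.degrees(math.atan2(accel_x, accel_z))   # absolute but noisy
    return alpha * gyro_angle + (1.0 - alpha) * accel_angle

# Made-up samples: gyro rate in deg/s and accelerometer x/z in g, at 100 Hz.
angle = 0.0
for gyro, ax, az in [(1.0, 0.02, 0.99), (1.2, 0.03, 0.99), (0.8, 0.04, 0.99)]:
    angle = complementary_filter(angle, gyro, ax, az, dt=0.01)
print(round(angle, 3))
```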

Magnetic sensors, high-frequency sensors, and others need to use ferromagnetic materials, metastructures, or other exotic semiconductors. Manufacturers are investing substantially in the development of these sensor technologies using SiP or system-in-package modules with 2D or 2.5D structures, to optimize them for use in constrained spaces and to integrate them to reduce delays.

Considerations for modern sensor design also include efforts to reduce intrinsic errors that affect many sensor types like piezoelectric sensors. Such sensors are often prone to RF interference, magnetic interference, electrical interference, oscillations, vibration, and shock. Designers mitigate the effect of intrinsic errors through additional processing like averaging and windowing.
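
The averaging mentioned above can be as simple as a sliding-window mean. The sketch below, fed an invented sample sequence, shows how a short window smooths a vibration-induced spike.

```python
from collections import deque

def moving_average(samples, window=4):
    """Return a sliding-window mean of the input samples."""
    buffer, smoothed = deque(maxlen=window), []
    for sample in samples:
        buffer.append(sample)
        smoothed.append(sum(buffer) / len(buffer))
    return smoothed

# Invented piezoelectric readings with one shock-induced outlier.
print(moving_average([10.0, 10.2, 9.8, 14.5, 10.1, 9.9]))
```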

The above trends are only the tip of the iceberg. There are many other factors influencing the growing sensor design complexity and the need to accommodate better features.

What is DFMEA?

If you are just entering the world of design, you will have to face a DFMEA session sooner or later. DFMEA is an acronym for Design Failure Mode and Effects Analysis. In recent years, companies have been using DFMEA, a subset of FMEA or Failure Mode and Effects Analysis, as a valuable tool. It helps engineers spot potential risks in a product design before they make any significant investments.

Engineers use DFMEA as a systematic tool for mapping the early-warning system of a product. They use it to make sure the product not only functions as they intend it to, but also keeps users happy. It is like taking a peek into the future, catching any design flaws before they cause major damage. Simply put, DFMEA helps check the overall design of products and components, figuring out anything that might go wrong and how to fix it. This tool is especially useful in manufacturing industries, where preventing failure is important.

To use DFMEA effectively, the designer must look for potential design failures, observing them from all angles. Here is how they do it.

They first look for a failure mode, which essentially means how the design could possibly fail. For instance, your computer might freeze up when you open too many programs, which is one mode or type of failure.

Then they look for why the failure mode should happen. This could be due to a design defect, or a defect in the quality, system, or application of the part.

Next, the designers look for an effect of the failure. That is, what happens when there is a failure. In our example, a frozen computer can lead to a frustrated user.

In the last stage, designers look for the severity of the failure. They estimate how bad the failure could be for safety, quality, and productivity. Designers typically look for the worst-case scenarios.

To put it in a nutshell, DFMEA helps engineers figure out not only potential issues, but also the consequences of the failures. This way, they can prevent failures from happening in the first place.
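
Many DFMEA worksheets quantify this with a Risk Priority Number, the product of severity, occurrence, and detection ratings, each typically on a 1-to-10 scale. The small sketch below, with invented entries, simply ranks failure modes by that number.

```python
# Invented failure modes and ratings, for illustration of the RPN ranking only.
failure_modes = [
    {"mode": "computer freezes with many programs open",
     "severity": 6, "occurrence": 5, "detection": 4},
    {"mode": "power connector overheats",
     "severity": 9, "occurrence": 2, "detection": 5},
]

for fm in failure_modes:
    fm["rpn"] = fm["severity"] * fm["occurrence"] * fm["detection"]   # RPN = S x O x D

for fm in sorted(failure_modes, key=lambda f: f["rpn"], reverse=True):
    print(f'{fm["rpn"]:4d}  {fm["mode"]}')
```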

However, DFMEA is never a one-man show. Rather, it is a team effort. Typically, the team has about 4 to 6 members, all fully knowledgeable about the product, and is led by a product design engineer. The team members could include engineers with a materials background, and those from product quality, testing, and analysis. There may also be people from other departments, like logistics, service, and production.

DFMEA is an essential tool in any design process. However, it is a crucial tool in industries handling new products and technology. This includes industries such as software, healthcare, manufacturing, industrial, defense, aerospace, and automotive. DFMEA helps them locate potential failure modes, reducing risks involved with introducing new technologies and products.

The entire DFMEA exercise is a step-by-step process, and the team must think through each step thoroughly before they move on to the next. It is essential that they look for and identify the failure, and find out its consequences, before finding ways to prevent it from happening.

What is Voice UI?

Although we usually talk to other humans, our interactions with inanimate objects are almost always silent. That is, until the advent of the Voice User Interface, or Voice UI or VUI, technology. Now, Voice UI has broken this silence between humans and machines. Today, we have virtual assistants and voice-controlled devices like Siri, Google Assistant, Hound, and Alexa, among many others. Most people who own a voice-controlled device say it is like talking to another person.

So, what is Voice UI? Voice UI technology makes it possible for humans to interact with a device or an application through voice commands. As we increasingly use digital devices, screen fatigue is something we have all experienced, and this has led to the development of the voice user interface. The advantages are numerous: primarily, hands-free operation and control over the device or application without having to stare at a screen. The five leading technology companies, Amazon, Google, Microsoft, Apple, and Facebook, have each developed their own voice-activated AI assistants and voice-controlled devices.

Whether it is a voice-enabled mobile app, an AI assistant, or a voice-controlled device like a smart speaker, voice interactions and interfaces have become incredibly common. For instance, according to a report, 25% of adults in the US own a smart speaker, and 33% of the US population use their voice for searching online.

How does this technology work? Well, under the hood, there are several Artificial Intelligence technologies at work, such as Automatic Speech Recognition, Named Entity Recognition, and Speech Synthesis. The VUI speech components and the backend infrastructure are backed by AI technologies and typically reside in a public or private cloud, where the VUI processes the user’s speech and voice. After deciphering and translating the user’s intent, the AI technology returns a response to the device.
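
A highly simplified sketch of that pipeline is shown below; every function is a placeholder standing in for a cloud or vendor service, not any specific API.

```python
def recognize_speech(audio: bytes) -> str:
    """Placeholder for Automatic Speech Recognition (speech to text)."""
    raise NotImplementedError

def extract_intent(text: str) -> dict:
    """Placeholder for intent detection and Named Entity Recognition."""
    raise NotImplementedError

def plan_response(intent: dict) -> str:
    """Placeholder for the backend logic that decides what to say."""
    raise NotImplementedError

def synthesize(text: str) -> bytes:
    """Placeholder for Speech Synthesis (text to speech)."""
    raise NotImplementedError

def handle_utterance(audio: bytes) -> bytes:
    text = recognize_speech(audio)    # e.g. "what is the weather in Pune"
    intent = extract_intent(text)     # e.g. {"intent": "weather", "city": "Pune"}
    reply = plan_response(intent)     # e.g. "It is sunny in Pune today."
    return synthesize(reply)          # audio played back by the device
```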

The above describes the basics of Voice UI technology, albeit in a nutshell. For a better user experience, most companies also include additional sound effects and a graphical user interface. The sound effects and visuals help the user know whether the device is listening, processing their request, or responding.

Today, Voice UI technology is widespread, and it is available in many day-to-day devices like smartphones, desktop computers, laptops, wearables, smartwatches, smart TVs, sound systems, smart speakers, and Internet of Things devices. However, everything has advantages and disadvantages.

First, the advantages. VUI is faster than having to type the commands in text, and more convenient. Not many are comfortable typing commands, but almost all can use their voice to request a task from the VUI device. Voice commands, being hands-free, are useful while cooking or driving. Moreover, you do not need to face or look at the device to send voice commands.

Next, the disadvantages. There are privacy concerns, as a person nearby can overhear your commands. AI technology is still in its infancy and is prone to misinterpretation and inaccuracy, especially when differentiating homophones like ‘their’ and ‘there’. Moreover, voice assistants may find it difficult to decipher commands in noisy public places.

What is UWB Technology?

UWB is the acronym for Ultra-Wideband, a 132-year-old communications technology. Engineers are revitalizing this old technology for connecting wireless devices over short distances. Although more modern technologies like Bluetooth are available for the purpose, industry observers are of the opinion that UWB can prove more versatile and successful than Bluetooth. According to them, UWB has superior speed, uses less power, is more secure, provides superior device ranging and location discovery, and is cheaper than Bluetooth.

Therefore, companies are researching and investing in UWB technology. This includes names like Xtreme Spectrum, Bosch, Sony, NXP, Xiaomi, Samsung, Huawei, Apple, Time Domain, and Intel. In fact, Apple is already using UWB chips in its iPhone 11, allowing it to obtain superior positioning accuracy and ranging through time-of-flight measurements.

Marconi’s first man-made radio, based on spark-gap transmitters, effectively used UWB for wireless communication. UWB signals were banned for commercial use in 1920. However, since 1992, the scientific community has paid increasing attention to UWB technology.

UWB, or Ultra-Wideband, technology offers a protocol for short-range wireless communications, similar to what Wi-Fi or Bluetooth offer. It uses short radio pulses over a spectrum of frequencies ranging from 3.1 to 10.6 GHz and does not require licensing for its applications.

In UWB, the bandwidth of the signal is 500 MHz or more, or its fractional bandwidth around the center frequency is greater than 20%. Compared to conventional narrowband systems, the very wide bandwidth of UWB signals leads to superior performance indoors, because it offers significantly greater immunity from channel effects in dense environments. It also allows very fine time-space resolution, resulting in highly accurate indoor positioning of UWB devices.
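
Formally, with lower and upper band edges f_L and f_H, a signal is commonly classed as UWB when:

```latex
B = f_H - f_L \ge 500\ \text{MHz}
\quad\text{or}\quad
B_\text{frac} = \frac{2\,(f_H - f_L)}{f_H + f_L} > 0.2,
\qquad \text{e.g. } f_L = 3.1\ \text{GHz},\ f_H = 4.1\ \text{GHz}
\;\Rightarrow\; B_\text{frac} \approx 0.28.
```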

As its spectral density is low, often below environmental noise, UWB ensures the security of communications with a low probability of signal detection. UWB allows transmission at high data rates over short distances. Moreover, UWB systems can comfortably co-exist with other narrowband systems already under deployment. UWB systems allow two different approaches for data transmission.

The first approach uses ultra-short pulses, often called impulse radio transmission, in the picosecond range, covering all frequencies simultaneously. The second approach uses OFDM, or orthogonal frequency division multiplexing, to subdivide the entire UWB bandwidth into a set of broadband channels.

While the first approach is cost-effective, it suffers some degradation of the signal-to-noise ratio. Impulse radio transmission does not involve a carrier; therefore, it uses a simpler transceiver architecture than traditional narrowband transceivers, with the UWB antenna radiating the signal directly. An easy-to-generate UWB pulse is the Gaussian monocycle or one of its derivatives.
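
For illustration, the snippet below generates a normalized Gaussian monocycle, the first derivative of a Gaussian pulse; the roughly 100 ps pulse width is an assumed, representative value rather than a requirement of any standard.

```python
import numpy as np

# Assumed shape factor derived from a ~100 ps pulse width (illustrative only).
sigma = 100e-12 / (2 * np.pi)
t = np.linspace(-500e-12, 500e-12, 1001)                     # 1 ns window, 1 ps steps

monocycle = (t / sigma**2) * np.exp(-t**2 / (2 * sigma**2))  # derivative of a Gaussian
monocycle /= np.abs(monocycle).max()                         # normalize amplitude to ±1

print(f"peak at t = {t[np.argmax(monocycle)] * 1e12:.0f} ps")
```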

The second approach offers better performance, as it uses the spectrum significantly more effectively. Although the complexity is higher, since the system requires more signal processing, it substantially improves the data throughput. However, the higher performance comes at the expense of higher power consumption. The application defines the choice between the two approaches.