Monthly Archives: October 2022

What is Pulsed Electrochemical Machining?

With pulsed electrochemical machining, it is possible to achieve high-repeatability production parts. This advanced, completely non-thermal and non-contact material removal process is capable of forming small features and high-quality surfaces.

Although its fundamentals remain the same as those of electrochemical machining (ECM), the variant known as PECM, or pulsed electrochemical machining, is newer and more precise, using a pulsed power supply. As in EDM and other non-contact machining processes, there is no contact between the tool and the workpiece. Material very close to the tool dissolves through an electrochemical process, and the flowing electrolyte washes away the by-products. The remaining part takes on a shape that is an inverse of the tool.

The PECM process uses a few key terms routinely. The first is the cathode, which represents the tool in the process; it is also called the tool or the electrode. Typically, the cathode is manufactured specifically for each application, and its design is the inverse of the shape the process is meant to achieve.

The second is the anode, which refers to the workpiece, or the material the process works on. The anode can therefore assume many forms, including a near-net-shape casting, wrought stock, an additively manufactured (3D-printed) part, a conventionally machined part, and so on.

The third key item is the electrolyte—referring to the working fluid in the PECM process that flows between the cathode and the anode. Commonly a salt-based solution, the electrolyte serves two purposes. It allows electrical current to flow between the cathode and anode. It also flushes away the by-products of the electrochemical process such as hydroxides of the metals dissolved by the process.

The final key item is the gap, also called the IEG or inter-electrode gap: the space between the anode and the cathode. Maintaining this gap during machining is essential, as it is a major contributor to the performance of the entire process. The PECM process allows gap sizes as small as 0.0004” to 0.004” (10 µm to 100 µm). This small gap is the primary reason PECM can resolve minuscule features in the final workpiece.

Compared to other manufacturing processes, pulsed electrochemical machining has some important advantages:

The pulsed electrochemical machining process of metal removal is unaffected by the hardness of the material it is removing; hardness does not affect the speed of the process either.

Being a non-thermal and non-contact process, PECM does not change the properties of the material on which it is working.

As it removes metal by electrochemical means, it does not leave any burrs behind. In fact, many deburring operations use this method precisely because it carries no risk of creating burrs.

It is possible to achieve highly polished surfaces with the PECM process. For instance, surfaces of 0.2-8 µin Ra (0.005-0.2 µm Ra) are very common in a variety of materials.

Because the process is non-contact, the cathode suffers no wear and tear, and it has a practically unlimited tool life.

PECM can form an entire surface of a part at a time, and the tool room can easily parallelize it to manufacture multiple parts in a single operation.

The Battery of the Future — Sodium Ion

Currently, Lithium-ion batteries rule the roost. However, this technology has several disadvantages. The first is that Lithium is not an abundant material. Sodium, by contrast, is one of the most abundant materials on earth and is therefore cheap. That makes it a prime candidate for new battery technology. So far, however, the limited performance of Sodium-ion batteries has not allowed their large-scale adoption by the industry.

PNNL, the Pacific Northwest National Laboratory of the Department of Energy, is about to turn the tide in favor of Sodium-ion technology. It is developing a Sodium-ion battery that has excelled in laboratory tests of extended longevity. By ingeniously changing the ingredients of the liquid core of the battery, the researchers have been able to overcome the performance issues that have plagued this technology so far. They have described their findings in the journal Nature Energy, and it is a promising recipe for a battery type that may one day replace Lithium-ion.

According to the lead author of the team at PNNL, they have shown in principle that Sodium-ion battery technology can be long-lasting and environmentally friendly. And all this is due to the use of the right salt for the electrolyte.

Batteries require an electrolyte to keep the energy flowing. By dissolving salts in a solvent, the electrolyte forms charged ions that flow between the two electrodes. As time passes, the charged ions and the electrochemical reactions that keep the energy flowing slow down, and the battery can no longer recharge. In present Sodium-ion battery technologies, this process happens much faster than in Lithium-ion batteries of similar construction.

A battery loses its ability to charge through repeated cycles of charging and discharging. The new battery technology developed by PNNL retains its ability to be charged far longer than present Sodium-ion batteries can.

The team at PNNL approached the problem by first removing the liquid solution and the salt dissolved in it, and replacing them with a new electrolyte recipe. Laboratory tests proved the design durable: it held up to 90 percent of its cell capacity even after 300 cycles of charging and discharging, significantly better than the Sodium-ion chemistries available today.
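
To put that retention figure in perspective, here is a quick back-of-the-envelope calculation (a rough sketch, not taken from the PNNL paper) that converts 90% capacity after 300 cycles into an average per-cycle fade, assuming the fade compounds geometrically.

```python
# Rough estimate of average per-cycle capacity retention, assuming capacity
# fade compounds geometrically (a simplification, not PNNL's model).
cycles = 300
retained_fraction = 0.90          # 90% capacity after 300 cycles (reported)

per_cycle_retention = retained_fraction ** (1 / cycles)
per_cycle_fade_pct = (1 - per_cycle_retention) * 100

print(f"Average per-cycle retention: {per_cycle_retention:.5f}")
print(f"Average per-cycle fade:      {per_cycle_fade_pct:.3f}%")
# Roughly 0.035% of capacity lost per cycle under this simple model.
```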

The present chemistry of Sodium-ion batteries causes the protective film on the anode, or negative electrode, to dissolve over time. This film allows Sodium ions to pass through while preserving the life of the battery, and it is therefore critical. The PNNL technology protects this film by stabilizing it. Additionally, the new electrolyte places an ultra-thin protective layer on the cathode, or positive electrode, further contributing to the stability of the entire unit.

The new electrolyte that PNNL has developed for Sodium-ion batteries is a natural fire-extinguishing solution. It also remains stable through temperature excursions, making the battery operable at high temperatures. The key to this feature is the ultra-thin protective layer the electrolyte forms on the anode. Once formed, the thin layer remains a durable cover, allowing the long cycle life of the battery.

Battery Charge Controller Modules

Charge controllers prevent batteries from overcharging and over-discharging. Overcharging batteries or discharging them excessively can harm them. By managing the battery voltage and current, a battery charge controller module can keep the battery safe for a long time.

Charge controllers protect the battery and allow it to deliver power while maintaining the efficiency of the charging system. Battery charge controller modules only work with DC loads connected to the battery. For AC loads, it is necessary to connect an inverter after the battery.

Charge controllers have a few key functions. They must protect the battery from overcharging, and they do this by controlling the charging voltage. They protect the battery from unwanted and deep discharges. As the battery voltage falls below a pre-programmed discharge value, the charge controller automatically disconnects the load. When the battery connects to a solar photovoltaic module, the charge controller prevents reverse current flow through the PV modules at night. The charge controller also provides information about the state of charge of the battery.

Various types of charge controllers are available in the market. Two of the most popular are the PWM or Pulse Width Modulation type and the MPPT or Maximum Power Point Tracking type. Although an MPPT-type charge controller is more expensive than a PWM type, it helps boost the performance of solar arrays connected to the batteries. A PWM-type charge controller, on the other hand, can extend the life of a battery bank at the expense of lower performance from the solar panel. Typically, charge controllers have a lifespan of about 15 years.

The XH-M60x family of battery charge controller modules is among the low-cost varieties offered by Chinese manufacturers, the most popular being the XH-M603. As the XH-M603 is not a charger itself, it is necessary to connect the battery to an external charger compatible with the battery.

The user can set optimal thresholds for initiating and terminating the battery charging cycle, making the charge controller a rather universal type, suitable for a wide range of batteries. When the battery voltage falls below the set start value, the onboard relay routes the charging voltage from the charger to the battery. As soon as the battery voltage exceeds the stop value, the relay terminates the charging process.
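
The start/stop behavior described above is essentially a hysteresis (bang-bang) control loop. The short Python sketch below illustrates the idea; the threshold values and function names are illustrative assumptions, not the XH-M603's actual firmware.

```python
# Minimal sketch of the start/stop (hysteresis) charging logic described above.
# Thresholds and helper names are illustrative, not taken from the XH-M603.

START_VOLTAGE = 11.8   # begin charging when the battery falls below this (V)
STOP_VOLTAGE = 14.4    # stop charging when the battery rises above this (V)

def update_relay(battery_voltage: float, relay_closed: bool) -> bool:
    """Return the new relay state: closed routes charger current to the battery."""
    if not relay_closed and battery_voltage < START_VOLTAGE:
        return True            # battery low: start charging
    if relay_closed and battery_voltage > STOP_VOLTAGE:
        return False           # battery full: stop charging
    return relay_closed        # otherwise keep the current state (hysteresis)

# Example: voltage drifting down, then charging back up
relay = False
for v in (12.5, 12.0, 11.7, 12.9, 14.1, 14.5, 13.8):
    relay = update_relay(v, relay)
    print(f"{v:4.1f} V -> relay {'ON' if relay else 'OFF'}")
```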

The XH-M603 battery charge controller module has a three-digit display on board for indicating the battery voltage, with a resolution of 0.1 V. It accepts batteries with voltages between 12 and 24 V and input charging voltages between 10 and 30 VDC. The control precision is 0.1 V, while the DC voltage output tolerance is ±0.1 VDC. The overall dimensions of the module are 82 x 58 x 18 mm.

A small microcontroller controls the module, which also has two voltage regulator chips onboard. There is a handful of discrete components as well, including two micro-switches, a screw terminal block, an electromagnetic relay, a three-digit LED display, and one red LED.

The charger connection to the module must maintain proper polarity. Likewise, the battery polarity is also important for the proper functioning of the module.

SD-Card Level Translator with Smaller Footprint

Interfacing SD-Cards with their host computers almost always requires a voltage-level translator. This is because most of these memory cards operate at signal levels between 1.7 and 3.6 VDC, while their hosts operate with nominal supply levels varying from 1.1 to 1.95 VDC. Until now, bidirectional level translators for SD 3.0 memory cards were WLCSP devices with 20 bumps or solder balls. The new translator for SD 3.0 memory cards, from Nexperia, is a WLCSP device with 16 bumps. Its footprint is 40% smaller than the 20-bump types. The new device, NXS0506UP, supports multiple data and clock transfer rates for signaling levels that the SD 3.0 standard specifies. Moreover, this includes the SDR104 mode for ultra-high speeds.

While shifting the voltage levels between the memory card and the I/O lines of the host device, the new translator operates at clock frequencies of up to 208 MHz and handles data rates of up to 104 MB/s. To automatically detect whether data and control signals should move from the host to the memory card (card write mode) or from the memory card to the host (card read mode), the device uses its integrated auto-directional control.

Apart from the auto-directional control, Nexperia has substantially reduced the BOM cost of the NXS0506UP by integrating the pull-up and pull-down resistors. These resistors are essential in establishing the voltage levels at the chip's IO lines, and discrete resistors push up the BOM cost. In addition, the input/output driver stages of the device have built-in EMI filters that help to reduce interference. Moreover, Nexperia has provided robust ESD protection, according to the IEC 61000-4-2 standard, on all the memory-card-side pins. The 16-bump WLCSP measures just 1.45 x 1.45 x 0.45 mm, and its operating temperature ranges from -40 °C to +85 °C.

The NXS0506UP SD card voltage level translator is useful for devices like automotive systems, medical devices, notebook PCs, digital cameras, and smartphones. The SD 3.0-compatible level translator is a bidirectional, dual-supply device with auto-direction control. Nexperia has designed the device for interfacing cards operating at 1.7 to 3.6 VDC levels to hosts with a nominal supply voltage between 1.1 and 1.95 VDC. Apart from the SD 3.0 standard, the device also supports the SDR12, SDR25, DDR50, SDR50, and SDR104 modes, as well as the SD 2.0 standard at its default speed of 25 MHz and high speed of 50 MHz. The device offers built-in protection from ESD and EMI conforming to the IEC 61000-4-2, level 4 standard.

There are several benefits to using the NXS0506UP SD card voltage level translator. The primary benefit is that it supports a maximum clock rate of 208 MHz. It translates voltage levels for the default and high-speed modes. It has auto-direction sensing for data and control signals. Its power consumption is low, and it integrates the pull-up and pull-down resistors. The integrated EMI filter suppresses higher harmonics at the digital IOs. Buffers at the IO lines help to keep ESD stresses away with the zero-clamping concept. The 16-bump WLCSP package has a pitch of 0.35 mm.

Always-On Battery Life Improvement with ML Chip

Devices that must always remain on need to conserve power in every way possible to extend their battery life. Their design starts with the lowest possible system power, and every mode of operation must consume the bare minimum power necessary. Now, with the AML100, an analog machine learning (ML) chip from Aspinity, it is possible to cut system power by up to 95%, even when the system always remains on. The AML100 consumes less than 100 µA of always-on system power. This opens up new types of products for biometric monitoring, preventive and predictive maintenance, commercial and home security, and voice-first systems, all of which must remain continuously switched on.

The movement of data to and from a system consumes power. One of the most effective ways of reducing power consumption is therefore minimizing the amount and movement of data through a system. The AML100 transfers the machine learning workload to the analog domain, where it consumes ultra-low levels of power. The chip determines the relevance of data with high accuracy and near-zero power. By intelligently reducing the data at the sensor, while it is still analog, the tiny ML chip keeps its digital components in low-power mode. Only when it detects important data does the chip allow the analog data to enter the digital domain. This eliminates the extra power consumed in digitizing, processing, and transmitting irrelevant analog data.
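
A rough duty-cycle estimate shows why this gating matters. In the sketch below, only the sub-100 µA always-on figure comes from Aspinity; the digital-domain current and the event duty cycle are illustrative assumptions.

```python
# Back-of-the-envelope average-current estimate for an event-gated system.
# Only the <100 µA always-on figure comes from the article; the digital-domain
# current and the duty cycle below are illustrative assumptions.

analog_always_on_ua = 100.0   # analog ML front end, always listening (µA)
digital_active_ma = 10.0      # assumed digital MCU/DSP current when awake (mA)
duty_cycle = 0.01             # assumed fraction of time events wake the digital domain

avg_gated_ua = analog_always_on_ua + duty_cycle * digital_active_ma * 1000
avg_always_digitizing_ua = digital_active_ma * 1000   # digital domain never sleeps

print(f"Event-gated average current: {avg_gated_ua:.0f} µA")
print(f"Always-digitizing current:   {avg_always_digitizing_ua:.0f} µA")
print(f"Reduction: {100 * (1 - avg_gated_ua / avg_always_digitizing_ua):.0f}%")
```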

The AML100 consists of an array of independent analog blocks that are fully programmable with software. This allows the chip to support a wide range of functions, including sensor interfacing and machine learning. It is possible to program the device in the field through software updates, or with newer algorithms that target other always-on applications. When it is in always-sensing mode, the chip consumes a paltry 20 µA, and it can support four analog sensors in different combinations, such as accelerometers, microphones, and so on.

At present, Aspinity is producing the AML100 chip in sampling numbers for key customers. The chip has dimensions of 7 x 7 mm and is housed in a 48-pin QFN package. Aspinity has slated the volume production of this chip for the fourth quarter of 2022 and is presently offering two evaluation kits with software. One of the kits is for glass breakage and T3/T4 alarm tone detection, while the other is for voice detection with preroll collection and delivery. Other kits with software for other applications are also available from Aspinity on request.

The AML100 is the first product in the AnalogML family from Aspinity. It detects sensor-driven events from raw analog sensor data by classifying it. It allows developers to design always-on edge-processing devices with significantly lower power consumption. The device is built on a unique RAMP (Reconfigurable Analog Modular Processor) technology platform that allows the AML100 to reduce always-on system power by more than 95%. This enables designers to build ultra-low-power, always-on solutions with edge-processing techniques for biomedical monitoring, predictive and preventive maintenance of industrial equipment, acoustic event monitoring, and voice-driven systems.

Advantages of Additive Manufacturing

Additive manufacturing, such as 3D printing, allows businesses to develop functional prototypes quickly and cost-effectively. They may require these parts for testing or for running a limited production line, with quick modifications when necessary. This is possible because these printers allow effortless electronic transfer of computer models and designs. There are many benefits of additive manufacturing.

Designs most often require modification and redesign. With additive manufacturing, designers have the freedom to design and innovate, and they can test their designs quickly. This is one of the most important aspects of making innovative designs. Designers can exercise creative freedom in the production process without worrying about time or cost penalties. This offers substantial benefits over traditional methods of manufacturing and machining. For instance, over 60% of designs undergoing tooling and machining also undergo modifications while in production, which quickly adds cost and delays. With additive manufacturing, the move away from static design gives engineers the ability to try multiple versions or iterations simultaneously while accruing minimal additional costs.

The freedom to design and innovate on the fly without incurring penalties offers designers significant rewards like better quality products, compressed production schedules, more product designs, and more products, all leading to greater revenue generation. Regular traditional methods of manufacturing and production are subtractive processes that remove unwanted material to achieve the final design. On the other hand, additive manufacturing can build the same part by adding only the required material.

One of the greatest benefits of additive manufacturing is streamlining the traditional methods of manufacturing and production. Compressing the traditional methods also means a significant reduction in environmental footprints. Taking into account the mining process for steel and its retooling process during traditional manufacturing, it is obvious that additive manufacturing is a sustainable alternative.

Traditional manufacturing requires tremendous amounts of energy, while additive manufacturing requires only a relatively small amount. Additionally, waste products from traditional manufacturing require subsequent disposal. Additive manufacturing produces very little waste, as the process uses only the needed materials. An additional advantage of additive manufacturing is it can produce lightweight components for vehicles and aircraft, which further mitigates harmful fuel emissions.

For instance, with additive manufacturing, it is possible to build solid parts with semi-hollow honeycomb interiors. Such structures offer an excellent strength-to-weight ratio, which is equivalent to or better than the original solid part. These components can be as much as 60% lighter than the original parts that traditional subtractive manufacturing methods can produce. This can have a tremendous impact on fuel consumption and the costs of the final design.

Using additive manufacturing also reduces risk and increases predictability, improving a company's bottom line. As manufacturers can try new designs and test prototypes quickly, digital additive manufacturing turns the earlier unpredictable methods of production into predictable ones.

Most manufacturers use additive manufacturing as a bridge between technologies. They use additive technology to quickly reach a stable design that traditional manufacturing can then take over for meeting higher volumes of production.

New Requirements for Miniature Motors

Innovations in the field of robotics are resulting in the emergence of smarter and smaller robotic designs. Robotic applications in the warehousing, medical, process automation, and security fields rely on sensor technologies and vision systems. Disruptive technologies are creating new opportunities for solving unique challenges with miniature motors, including efficient and safe navigation through warehouses, predictable control of surgical tools, and the endurance needed to complete lengthy security missions.

With industries transitioning to applications requiring collaborative robotics, they need systems that are more compact, dexterous, and mobile. Tasks that earlier required human hands are driving the need for miniaturized motors that mimic both the capability and the size of the hands that once accomplished the work.

For instance, multiple-jointed solutions representing the torso, elbow, arm, wrist, and so on require small, power-dense motors to reduce overall weight and size. Such compact solutions not only improve usability but also improve autonomy and safety, resulting in faster reaction times due to lower inertia. Therefore, robotic grippers, exoskeletons, prosthetic arms, and humanoid robots require small motors with high power density. Power density is the amount of power a motor generates per unit of its volume; a motor that generates more power in a smaller package has a higher power density. This is an important factor when space is constrained, or where a high level of output is necessary within a limited space.
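
As a quick illustration of this figure of merit, the sketch below compares two hypothetical cylindrical motors; the dimensions and power ratings are made up purely for illustration.

```python
import math

# Power density = output power / motor volume.
# Both motors below are hypothetical; the numbers are for illustration only.

def cylinder_volume_cm3(diameter_mm: float, length_mm: float) -> float:
    """Volume of a cylindrical motor body in cm^3."""
    radius_cm = diameter_mm / 20.0
    return math.pi * radius_cm ** 2 * (length_mm / 10.0)

motors = {
    "13 mm x 30 mm, 10 W": (10.0, cylinder_volume_cm3(13, 30)),
    "22 mm x 40 mm, 20 W": (20.0, cylinder_volume_cm3(22, 40)),
}

for name, (power_w, volume_cm3) in motors.items():
    print(f"{name}: {power_w / volume_cm3:.2f} W/cm^3")
# The smaller motor delivers less total power but more power per unit volume,
# which is what matters when space is constrained.
```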

Manufacturers can miniaturize motors with high power densities. Alternatively, they can increase the capability of current designs. Both options are critical in reducing the space that motion elements occupy. High efficiency is necessary to obtain the maximum power possible from a given design. Here, BLDC (brushless DC) motors and slot-less motor designs, in combination with efficient planetary gearboxes, can offer powerful solutions in small packages. Brushless solutions are flexible enough to be engineered to meet customer requirements such as long, skinny designs or short, flat, low-profile configurations.

Smooth operation and dynamic response can result in these miniature motors being dexterous and agile. Slot-less BLDC motors achieve this by eliminating detent torque, thereby providing precise dynamic motion with their lower inertia. Applications requiring high dynamics, such as pick-and-place systems and delta robots, must be able to accelerate/decelerate quickly and constantly. Coreless DC motors and stepper motors with disc magnets are suitable for applications requiring critical characteristics like high acceleration as they have very low inertia.

Ironless brushed DC motors, with their high efficiency, are the best choice for battery-powered mobile applications, extending operational life between charges. Several robotic applications now run on battery power and therefore require motors with high efficiency for longer running times. Other applications require high torque at low speeds, which can be achieved by matching the motor with a high-efficiency gearbox.

Some applications that are inhospitable to humans may need robot systems capable of enduring difficult environmental conditions. This may include tremendous vibration and shock. With proper motor construction, it is possible to improve their reliability and durability when operating under such conditions.

What is Moisture Sensing?

In agriculture, where plants require watering, people often use time-controlled watering methods. While this method irrigates plants at fixed time intervals, there is no way to assess whether there is an actual need for watering. Most often, this leads to either over-watering or under-watering. Depending on weather conditions, over-watering may cause harmful water-logging, while under-watering may lead to dry stress for plants. People often moderate the amount of water delivered by using a rain sensor or by controlling the water delivery based on online weather information.

Using a sensor to sense the amount of moisture in the soil and control the watering works much better. Not only does the latter method allow optimal water supply to the plants, but it also substantially reduces water consumption. Threshold levels can be set using various strategies. Any experienced gardener can recognize the start of dry stress when they notice the plants wilting slightly, or when the leaf edges start rolling.

Excessive watering does not increase the moisture in the soil; rather, it results in saturation. By delaying watering for a while, the excess water usually drains off into the subsoil. Most gardeners set the lower threshold to about 60% of the saturation level. They observe the plants and the moisture trend during the early phases to adjust the threshold levels for economical and optimal automatic watering. It is necessary to position the sensor properly in the soil near the root area. For drip irrigation, it is possible to achieve a good soil moisture cycle by placing the sensor neither too far from nor too close to the drip location.
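
The threshold strategy above maps onto a simple control loop. The sketch below illustrates it in Python, assuming a sensor that reports moisture as a percentage of saturation; the upper threshold and the example readings are illustrative assumptions.

```python
# Minimal sketch of threshold-based watering, assuming a sensor that reports
# soil moisture as a percentage of saturation. Values and names are illustrative.

LOWER_THRESHOLD = 60.0   # start watering below ~60% of saturation (as in the text)
UPPER_THRESHOLD = 80.0   # assumed stop level, leaving headroom to avoid saturation

def should_water(moisture_pct: float, valve_open: bool) -> bool:
    """Open the valve below the lower threshold, close it above the upper one."""
    if moisture_pct < LOWER_THRESHOLD:
        return True
    if moisture_pct > UPPER_THRESHOLD:
        return False
    return valve_open   # in between: keep doing what we were doing

valve = False
for reading in (72, 65, 58, 63, 78, 82, 70):
    valve = should_water(reading, valve)
    print(f"moisture {reading:>3}% -> valve {'OPEN' if valve else 'CLOSED'}")
```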

When working with moisture sensors, it is necessary to consider sensor selection and integration. This is because moisture sensors have two functions in a watering system: they provide information about the current status of the watering, and they help to use water economically as a resource. Many plants are as intolerant of dry soil as they are of water-logging. Moreover, while there are numerous types of moisture sensors, they work in different ways and their life spans vary widely.

The presence of moisture in the soil can be defined in different ways. There is the volumetric water content, which represents the amount of water in the total volume of soil. In natural soil, the maximum volumetric water content is about 50-60% and represents the amount of water filling all the airspace in the soil. Organic materials and peat can hold more water.

The relative mass of water in the soil is its gravimetric water content. This is determined chiefly by weighing the soil sample before and after drying. As it requires a laboratory to do the measurement, this method is not suitable for continuous monitoring in the field.
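
For reference, the gravimetric measurement reduces to a simple mass ratio, commonly expressed relative to the dry mass of the sample; the numbers below are made up.

```python
# Gravimetric water content: mass of water relative to the dry soil mass,
# determined by weighing a sample before and after oven drying.
# The sample masses here are made up for illustration.

wet_mass_g = 125.0    # soil sample as collected
dry_mass_g = 100.0    # the same sample after drying

gravimetric_water_content = (wet_mass_g - dry_mass_g) / dry_mass_g
print(f"Gravimetric water content: {gravimetric_water_content * 100:.1f}%")  # 25.0%
```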

A variety of principles of physical measurements form the basis of many types of electrical sensors for measuring soil moisture. The most inexpensive is the measurement of electrical conductivity. Next are low-frequency capacitive sensors. High-frequency capacitive sensors are more expensive. Then there are tensiometers that measure the soil moisture tension.

What are Power Factor Controllers?

Connecting an increasing number of electrically powered devices to the grid is leading to substantial distortion on the electrical grid. This, in turn, is causing problems in the electrical distribution network. Therefore, most engineers resort to advanced power factor correction circuitry in power supply designs to strictly meet power factor standards and mitigate these issues.

Most power factor correction designs popularly use the boost PFC topology. However, with the advent of wide band-gap semiconductors like silicon carbide and gallium nitride, it is becoming easier to implement bridge-less topologies as well, including the totem-pole PFC. With advanced totem-pole controllers, it is now possible to simplify the control of complex interleaved totem-pole PFC designs.

At present, the interleaved boost PFC is the most common topology that engineers use for power factor correction. It uses a rectifying diode bridge to convert the AC voltage to DC. A boost converter then steps up the rectified DC voltage to a higher value while shaping the input current into a sinusoidal waveform. This has the effect of reducing the ripple on the output voltage while keeping the input current sinusoidal.

Although it is possible to achieve power factor correction with only a single boost converter, engineers often use two or more converters in parallel. Each of these converters is given a phase shift to improve its efficiency and reduce the ripple on the input current. This topology is known as interleaving.
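
The phase shifts are typically distributed evenly across the parallel stages. The short sketch below shows how such offsets might be assigned for two or three interleaved stages; the 100 kHz switching frequency is an assumption for illustration, not a requirement of the topology.

```python
# Evenly distributed switching-phase offsets for N interleaved boost PFC stages.
# An illustration of the interleaving principle, not a real controller design.

def interleave_offsets(num_stages: int) -> list[float]:
    """Phase offset (degrees) of each stage's PWM carrier."""
    return [stage * 360.0 / num_stages for stage in range(num_stages)]

switching_freq_hz = 100_000   # assumed 100 kHz switching frequency
for n in (2, 3):
    offsets_deg = interleave_offsets(n)
    offsets_us = [round(d / 360.0 / switching_freq_hz * 1e6, 2) for d in offsets_deg]
    print(f"{n} stages: {offsets_deg} deg -> {offsets_us} µs delays")
# Staggering the stages makes their input-current ripples partially cancel,
# which is what reduces the net ripple on the input current.
```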

New families of semiconductors, especially silicon carbide, make it possible to create power switches with substantially improved thermal and electrical characteristics. Using these new semiconductors, it becomes possible to integrate the rectification and boost stages, along with two switching branches operating at different frequencies. This is the bridge-less totem-pole PFC topology.

One of the two branches is the slow branch; it commutates at the grid frequency, typically 50 or 60 Hz. This branch operates with traditional silicon switches and is primarily responsible for input voltage rectification. The second branch is the fast branch and is responsible for stepping up the voltage. Switching at very high frequencies, such as 100 kHz, this branch places great thermal and electrical strain on the semiconductor switches. For safe and efficient performance, engineers prefer to use wide band-gap semiconductor switches, such as GaN and SiC MOSFETs, in this branch.

The bridge-less totem-pole PFC topology improves performance compared with the interleaved boost converter, but the control circuitry is more complex due to the presence of additional active switches. Therefore, engineers often use an integrated totem-pole controller to mitigate the issue.

It is possible to add more high-frequency branches to improve the efficiency of the bridge-less totem-pole PFC. Such additions help reduce the ripple on the output voltage of the converter while distributing the power requirements equally among the branches. This arrangement minimizes overall costs while reducing the layout area.

Although it is possible to reach general conclusions about each topology by comparing their performance, the results depend largely on the device selection and operating parameters. Therefore, designers must consider the design carefully before implementation.

Mobile Screen Over Your Eyes

It is no longer necessary to hold a mobile phone in the hands. How? Thanks to AR or Augmented Reality eyeglasses, it is now possible to transfer the screen of the mobile device to the lens of a pair of eyeglasses. Although this technology has been around for a while, the glasses were rather cumbersome and bulky.

Now, Trilite Technologies of Vienna, Austria, has a newer approach to AR glasses that makes them look and feel just like normal glasses. According to their CEO, Dr. Peter Weigand, there have so far been three types of light engine technologies.

The first was LCoS technology. This is a panel-based technology that requires illumination optics. The illumination needs to be nice, homogeneous, and smooth, and a waveguide must carry the input image. This is not a very efficient technique, and its number of optical elements makes it bulky.

The second was MicroLED display technology. This is semiconductor-based and far superior to a reflective display, as it emits its own light. However, it is still a challenge to make the display visible in outdoor applications. Moreover, the two-dimensional display does not scale well when moving to higher FOVs (fields of view) and higher resolutions.

The third was laser beam scanner technology. This has the highest level of miniaturization. Typically, it has an RGB laser module with three separately mounted lasers as the red, green, and blue light sources. Optics following the laser module merges the three laser beams into a single ray. A set of MEMS mirrors follows, generating the image scans for the eyeglass display. Two mirrors are necessary, one for the X-axis and the other for the Y-axis.

According to Weigand, the latest generation of these scanners uses a single MEMS mirror that can move in both the x- and y-directions. This two-dimensional mirror helps achieve a lighter and smaller product.

Electronics create the image for display by modulating the lasers. Coupling the image into an optical waveguide allows it to be sent to the display; for this, the laser scanner normally uses relay optics, a rather large optical element. Coupling the laser beam scanner directly into the input coupler of the waveguide instead allows the display engine to be made very small. The entire arrangement contains the collimating optics, the MEMS mirrors, and the three lasers.
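
Conceptually, the electronics modulate the laser brightness in step with the mirror's scan position so that each instant of the sweep paints one pixel. The fragment below sketches that pixel-to-mirror mapping for a plain raster scan; the resolution, scan model, and names are assumptions, not Trilite's implementation.

```python
# Conceptual raster-scan sketch: modulate laser brightness as the two-axis MEMS
# mirror sweeps the field of view. Resolution and scan model are assumptions,
# not Trilite's actual implementation.
import numpy as np

WIDTH, HEIGHT = 640, 480                    # assumed display resolution
frame = np.zeros((HEIGHT, WIDTH, 3))        # RGB image to display, values 0..1
frame[200:280, 280:360] = (1.0, 0.2, 0.2)   # a reddish square as test content

def laser_drive(x_angle_norm: float, y_angle_norm: float) -> np.ndarray:
    """RGB drive levels for the current mirror position (angles normalized to 0..1)."""
    col = min(int(x_angle_norm * WIDTH), WIDTH - 1)
    row = min(int(y_angle_norm * HEIGHT), HEIGHT - 1)
    return frame[row, col]

# Example: sample the laser drive levels at the center of the field of view.
print(laser_drive(0.5, 0.5))   # -> [1.  0.2 0.2]
```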

Trilite Technologies is able to make very small scanners because of its design philosophy. They have designed their scanner such that software rather than hardware handles many of the scanning functions. The other significant contribution to the small size comes from using a single two-axis MEMS mirror rather than one mirror for each axis.

The waveguide contains the optical input coupler as an integral part. This coupler has a pattern of microstructure gratings on its surface, allowing light to enter. The output side, where the light emerges from the waveguide, has a similar structure. The waveguide conveys the image to the lens and, at the same time, combines the incoming real-world light with the generated digital light, allowing the user to see both the digital image and the real-world scene through the eyeglass lens.