
Future Factories with 5G

The world is moving fast. If you are a manufacturer still running Industry 3.0 today, you must move your shop floor forward to Industry 4.0 to stay relevant tomorrow, and plan for Industry 5.0 to still be around next week. 5G may be the answer to how you should make the changes to move forward.

There has been a sea change in technology: manufacturing now uses edge computing, for instance, and the advent of the Internet of Things has driven this evolution.

At present, we are in the digital transformation era, or Industry 4.0. People call it by different names, such as intelligent industry, factory of the future, or smart factory. These terms indicate that we are using a data-oriented approach. However, this approach must also collaborate with the manufacturing foundation, the so-called Golden Triangle of three main systems: PLM or Product Lifecycle Management, MES or Manufacturing Execution Systems, and ERP or Enterprise Resource Planning.

IoT affects the manufacturing process through the data it collects in real time and the analytics applied to that data. Of course, it complements the existing systems, which are more process-oriented. Therefore, rather than replacing them, IoT complements and collaborates with the existing systems that help the manufacturer manage the shop floor.

IoT is one of the major driving factors behind the movement that we know as Industry 4.0. One of its key points is to enable massive automation. This requires data collection from the shop floor and moving it to the cloud. On the other end, it will need advanced analytics. This is necessary to optimize the workflow and processes that the manufacturer uses. After the lean strategy, there will be a kind of lean software, acting as one more step towards process optimization within the company and on the shop floor.

However, manufacturers will face several challenges as they grow and scale up their IoT initiatives. These will include automation, flexibility, and sustainability. Of these, automation is already the key topic in the market—the integration of technologies to automate the various manufacturing processes.

The next in line is flexibility. For instance, if you are manufacturing a product on a line, it takes a long time to change that line over to make another product.

The last challenge is rather vast. Sustainability means making manufacturing cost-effective by improving the processes and the efficiency of the equipment. It may be necessary to minimize energy consumption, and decrease lead time and manufacturing time. It may involve using less material and reducing wastage.

With the advent of 5G, manufacturers will witness many new and exciting possibilities. The IoT of today has two game-changers that will shape the IoT of the future: 5G is the first, and edge technology is the other. Ten years ago, IoT was only a few devices sending data to the cloud for human interaction and analytics.

Now, there has been a substantial increase in the number of devices deployed and the amount of data traffic. In fact, with this humongous increase in data, it is often not possible to send everything to the cloud. While 5G helps with the massive transfer of data, edge computing helps standardize the data and compute on it locally, before the transfer.
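The standardize-locally, transfer-summaries pattern can be sketched in a few lines of Python. This is an illustrative example; the function and field names are hypothetical, not from any specific IoT platform:

```python
from statistics import mean

def summarize_readings(readings, window=10):
    """Aggregate raw sensor samples at the edge so only compact
    summaries travel to the cloud instead of every data point."""
    summaries = []
    for i in range(0, len(readings), window):
        chunk = readings[i:i + window]
        summaries.append({
            "count": len(chunk),
            "mean": mean(chunk),
            "min": min(chunk),
            "max": max(chunk),
        })
    return summaries

# 100 raw vibration samples collapse into 10 summary records.
raw = [20.0 + (i % 7) * 0.5 for i in range(100)]
summaries = summarize_readings(raw)
print(len(raw), "->", len(summaries))  # 100 -> 10
```

Collapsing 100 raw samples into 10 summary records cuts cloud traffic by an order of magnitude while preserving the statistics the analytics layer needs.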

Solid State Active Cooling

Frore Systems is a US startup that has developed a new cooling device, showcased at Computex. They call it an active solid-state cooling device, and it is very nearly the size of a regular SD card. Named AirJet, it uses a variety of techniques to remove heat from small enclosed spaces.

Very close to the size of an SD card, about 2.8 x 27.5 x 41.5 mm, AirJet has tiny membranes vibrating at ultrasonic frequencies. According to Frore Systems, the membranes generate a strong airflow entering AirJet through inlet vents at its top. Inside the device, this airflow changes into high-velocity pulsating jets. AirJet further directs the air past a heat spreader at its base. As the air passes through AirJet, it acquires some heat from the device and carries it away as it moves out. According to Frore, the AirJet consumes only a single watt to operate, while moving 5.25 W worth of heat.
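Taking Frore's stated figures at face value, the arithmetic gives AirJet a heat-moved-per-watt ratio similar to a coefficient of performance. This is a back-of-the-envelope sketch, not a formal thermal analysis:

```python
def cooling_ratio(heat_moved_w, power_in_w):
    """Watts of heat removed per watt of electrical input."""
    return heat_moved_w / power_in_w

# Frore's claimed numbers: 5.25 W of heat moved for 1 W consumed.
print(cooling_ratio(5.25, 1.0))  # 5.25
```

In other words, each watt spent driving the membranes carries away more than five watts of heat, which is what makes the device attractive for passively cooled enclosures.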

Although not very explicit, Frore’s explanation of the working mechanism says they made the vibrating membranes with techniques similar to those used in the production of screens and semiconductors. This is the reason for describing the device as a solid-state cooler. Moreover, some workings of the AirJet are inspired by the methods engineers use to cool jet engine components.

At the Computex 2023 exhibition, Frore announced that their first customer for AirJet would be Zotac of Hong Kong. Zotac will use it in their mini PC, which packs 8GB of RAM and an Intel Core i3 processor inside a chassis measuring only 115 x 76 x 22 mm, slightly larger than a pack of playing cards.

According to Frore, they have designed AirJet specifically for tightly packed devices that have few CPUs and rely on passive heat management for cooling. With a tiny active cooling device like AirJet, designers can contain the heat that powerful components generate, or run more CPU cores at higher capacity for longer.

Frore’s prime targets are tablet computers and fanless laptops. Their demo device was a digital doorbell retrofitted with an AirJet. With this cooler running, the device can better handle AI-infused video processing.

Frore also has a professional model of the AirJet, which they predict will move 10 watts of heat in advanced iterations. They also estimate they can double AirJet’s performance with each iteration, but for the time being, AirJet is unlikely to have adequate capacity to cool a server.

On the other hand, Frore envisages a role for AirJet in cooling SSDs and similar memories. This will likely work well for SSDs running hot, and for the rising memory pooling enabled by CXL, or Compute Express Link. Therefore, they are considering mounting AirJets on SSDs for cooling arrays, and on other memory packages.

One limiting factor for AirJet is its need for air intake. However, Frore confidently claims AirJet can defeat dust. They do not claim the technology is waterproof, so application on smartphones is not under consideration, at least for now. But PCs can now chase the idea of no moving parts.

Cooling with Liquids

Data centers worldwide generate increasing amounts of heat as they consume ever more power, and removing that heat is becoming a huge concern. As a result, they are turning to liquid cooling as an option. This became evident when the global investment company KKR acquired CoolIT Systems, a company that has made liquid cooling gear for the past two decades. With this investment, CoolIT will scale up its operations for global customers in the data-center market. According to CoolIT, liquid cooling will play a critical role in reducing the emission footprint as data and computing needs increase.

Companies investing in high-performance servers are also already investing in liquid cooling. These high-performance servers typically have CPUs consuming 250-300W and GPUs consuming 300-500W of power. When catering to demanding workloads such as AI training, servers often require up to eight GPUs, so they could be drawing 7-10kW per node.
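A rough power budget shows how such a node reaches that range; the overhead figure for memory, networking, storage, and power conversion is an assumption for illustration, not a vendor specification:

```python
def node_power_kw(n_gpus, gpu_w, n_cpus=2, cpu_w=300, overhead_w=2400):
    """Rough per-node draw: GPUs + CPUs + an assumed overhead for
    memory, networking, storage, and power conversion."""
    return (n_gpus * gpu_w + n_cpus * cpu_w + overhead_w) / 1000

# Eight 500 W GPUs with two 300 W CPUs: 7.0 kW, the low end of the
# 7-10 kW range quoted above.
print(node_power_kw(8, 500))  # 7.0
```

Higher-TDP GPUs, more DIMMs per node, or faster NICs push the total toward the top of that range, which is exactly the trend driving liquid cooling.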

Additionally, with data centers increasing their rack densities, and using more memories per node, along with higher networking performance, the power requirements of servers go up significantly. With the current trend to shift to higher chip or package power densities, liquid cooling is turning out to be the preferred option, as it is highly efficient.

Depending on the application, companies are opting for either direct contact liquid cooling, or immersion cooling. With direct contact liquid cooling, also known as direct-to-chip cooling, companies like Atos/Bull have built their own power-dense HPC servers. They pack six AMD Epyc sockets with maximum memory, 100Gbps networking, and NVMe storage, into a 1U chassis that they cool with a custom cooling manifold.

CoolIT supports direct cooling technology. They circulate a coolant, typically water, through metal plates, which they have attached directly to the hot component such as a GPU or processor. According to CoolIT, this arrangement is easier to deploy within existing rack infrastructures.

On the other hand, immersion cooling requires submerging the entire server node in a coolant. The typical coolant is a dielectric, non-conductive fluid. However, this arrangement calls for specialized racks. The nodes may have to be positioned vertically rather than being stacked horizontally. Therefore, it is easier to deploy this kind of system for newer builds of server rooms.

Cloud operators in Europe, such as OVHcloud, are combining both the above approaches in their systems. For this, they are attaching the water block to the CPU and GPU, while immersing the rest of the components in the dielectric fluid.

According to OVHcloud, the combined system has much higher efficiency compared to air cooling. They tested their setup, and it showed a partial power usage effectiveness, or PUE, rating of 1.004, counting only the energy used for the cooling system.
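Partial PUE is simply total power (IT plus cooling) divided by IT power, so a quick sketch reproduces OVHcloud's figure; the wattages here are made up for illustration:

```python
def pue(it_power_w, cooling_power_w, other_facility_w=0.0):
    """Power usage effectiveness: total facility power / IT power."""
    return (it_power_w + cooling_power_w + other_facility_w) / it_power_w

# A partial PUE of 1.004 means cooling consumes only 0.4% extra
# on top of the IT load, e.g. 400 W of cooling per 100 kW of IT:
print(round(pue(100_000, 400), 3))  # 1.004
```

For comparison, a conventional air-cooled facility with a PUE near 1.5 would spend roughly 50 kW on cooling and overhead for that same 100 kW of IT load.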

However, the entire arrangement needs a proper approach to the waste heat. For instance, merely dumping the heat into a lake or river can be harmful. Liquid cooling does improve efficiency while also helping the environment, as it lowers the need to run compressor-based cooling. Instead, it is possible to use heat-exchanger technology to keep the temperature of the cooling loop low enough.

Sustainable Medical Wearables

Most of us use fitness and medical wearables today. These amazing devices can sustain the rigors of everyday life. A fall to the floor or a drop of liquid does not keep these devices from working or fulfilling their purpose.

Whether consumers use them every day or diagnostic testing requires them only for limited periods, medical wearables must be capable of withstanding general wear and tear, disinfecting, and cleaning. Multiple patients may use the same medical wearable in the course of its lifetime. So, if they are to last, they must be capable of inherently protecting themselves from contaminants and liquids, radiation, and impact from hard objects and surfaces.

For many people, a wearable is either a FitBit or an Apple Watch. However, apart from these popular consumer wearables, there are several other small medical devices necessary for evaluating patients and monitoring them over the short or long term, such as for heart-related disorders like cardiac arrhythmias.

Transdermal patches are wearable devices that deliver extended-release medication. Typically, patients wear them for long periods, requiring them to balance breathability with adhesive hold while remaining comfortable for the wearer. It is also necessary that the materials in the device do not interact negatively with the pharmaceuticals and medicines that the device delivers to the wearer.

Nowadays, it is common to find microfluidic diagnostic devices, such as blood glucose strips for diabetic testing. These track biomarkers, like glucose and pH levels, at molecular levels in sweat, blood, and other fluids. These small and intricate sensor-laden devices typically collect data from the wearer, and contain printed flex circuits, sensors, electrodes, and batteries.

There is a broad category known as wearable biometric monitoring devices for tracking biometric markers. These markers include parameters like heart rate, temperature, movement, and respiration, among many others. They include devices like blood pressure monitors, continuous glucose monitors, and sleep trackers. Apart from needing adhesives to stick to the user, these devices can wirelessly transmit the information they collect. So, beyond the standard internal components like flex circuits, sensors, electrodes, and batteries, they also contain circuits for wireless transmission and reception.

Medical wearables typically contain critical components like sealing gaskets. These are necessary not only for keeping out unwanted contaminants; they must also be safe for contact with the human body and skin, depending on where they sit in use. Manufacturers use 3D printers for fabricating orthotics and prosthetics, and they use fireproof sealing gaskets in them. However, sealing gaskets used in medical wearables are made of different materials, as they must come in contact with bodily fluids, human tissue, drugs, and medical fluids.

Many requirements guide the selection of materials for medical wearables. For instance, sealing gaskets may need to conduct electricity, be flame-resistant, and at the same time protect against electrostatic discharge. Typically, they belong to a wide spectrum of elastomers and polymers. Whatever the material used, it must be durable. For medical wearables, it is essential that they accommodate how people live, fit the shape of the wearer, and do so continuously for long periods.

Edge Computing for Smart Homes

Designing devices for smart homes can be a huge challenge. There are numerous limitations to be overcome, but the sensible use of sensors can help smooth the way. Devices for smart homes can relate to lighting, kitchen appliances, security, heating/cooling, and entertainment. With the advancement of smart home technology, engineers need to design more intuitively and develop more capabilities to make products more intelligent. Among homeowners’ expectations are faster response, higher performance, higher levels of accuracy, and easier integration of multiple devices.

Today, modern smart home technology includes widely varying intelligent devices. Most often, these produce massive amounts of data that must be processed quickly. Although there are limitations to improving the technology for smart homes, contextual data can address them through a combination of sensors, with the device processing the data in the field rather than in the cloud.

Just like in any technology, the fundamental systems and components of smart home technology are also constantly improving. Engineers must continuously develop better solutions as soon as they recognize the limitations. Among the several limitations, three major ones that plague smart home technology are accuracy, latency, and compatibility.

Accuracy is an extremely important factor in smart home technology. Everything affects accuracy, starting from the sensors that are necessary to collect data to the artificial intelligence tools that process the data. This is leading engineers to collect data using innovative new approaches, including using algorithms to combine multiple sensors for processing the data so that they can achieve a higher level of accuracy.

For instance, a smart home security system may involve radar, computer vision, and sound detection to accurately predict the presence of a person. Engineers are also using AI tools and algorithms for finding the most efficient methods of processing data. However, this leads us to the next limitation—latency.

Latency negatively impacts any type of smart home technology. Home security, for instance, requires collecting data from multiple sensors and analyzing it as fast as possible. Latency grows as the amount of data gathered, transmitted, and processed increases.

With end users having multiple smart systems working concurrently, compatibility challenges are bound to crop up, impacting overall performance and functionality. This is one reason engineers are moving their focus away from systems that depend on particular platforms, manufacturers, and devices. Rather, they are moving more of the functionality and processing to the devices themselves. This is where edge computing is helping them, addressing all three challenges at once.

In smart home technology, edge computing transfers most of the processing and analysis from the cloud to the device itself. In simpler terms, data processing takes place as close to the sensor as possible.

For instance, home security cameras are notorious for reporting false positives, eventually causing the owners to ignore accurate alerts. One way of improving the accuracy is by improving the quality of the lens and image sensors. The other is by using edge computing to differentiate between the movement of animals and leaves being moved by winds.
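A toy frame-differencing check illustrates the kind of decision an edge device can make locally before ever raising an alert. This is an illustrative sketch in Python with NumPy, not the firmware of any actual camera; the threshold and frame sizes are assumptions:

```python
import numpy as np

def motion_score(prev_frame, frame, threshold=30):
    """Fraction of pixels whose brightness changed noticeably
    between two frames -- a crude on-device motion detector."""
    diff = np.abs(frame.astype(int) - prev_frame.astype(int))
    return (diff > threshold).mean()

rng = np.random.default_rng(0)
still = rng.integers(0, 20, (64, 64), dtype=np.uint8)  # dim, static scene
moved = still.copy()
moved[16:48, 16:48] += 120                             # a large bright change

print(motion_score(still, still))  # 0.0 -> no alert
print(motion_score(still, moved))  # 0.25 -> large region changed, alert
```

A real device would layer smarter logic on top (ignoring small, scattered changes such as leaves, and flagging only large contiguous regions), but the point is that this filtering happens on the camera, not in the cloud.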

DAWSense Turns Any Surface into an Input Device

Although we are used to traditional interfaces like touchscreens and keyboards, interfacing with computers has come a long way over the years. Now, it is possible to turn any surface into an input device. DAWSense does this by utilizing machine learning and taking advantage of surface acoustic wave technology. With different situations requiring varying methods of input, researchers are now exploring newer methods of human-computer interfacing. One of them is to embed the interface within everyday objects, thereby enhancing user experiences.

Human-machine interfaces may take many forms. For instance, the industry often uses microphones or cameras to control devices using methods like speech or gesture recognition. Although such systems may be of immense help in certain applications, they may not be practical for others. In a camera-based system, it is easy to obscure the arrangement by introducing objects in front of the camera. Similarly, microphone-based systems involving speech recognition may not function properly in noisy environments.

As an alternative, researchers have been experimenting with transforming arbitrary surfaces into input devices. For instance, for controlling a smart home, they have experimented with the arm of a couch acting as a TV remote, or with an interactive wall. They have tried many methods for building such functionality so far, with accelerometers standing out as one of the most promising solutions, as they can sense touch gestures on various surfaces without any modifications to them.

However, the sampling bandwidth of accelerometers incorporated into a surface to act as a touch-sensing device is not enough to capture more than a few relatively coarse gestures. Now, a collaboration between researchers at the Meta Reality Labs and the University of Michigan has demonstrated another method that offers the necessary bandwidth for creating user interfaces that are more advanced.

The new method relies on SAWs or surface acoustic waves rather than mechanical vibrations for sensing touch inputs. The team has also fashioned a VPU or voice pick-up unit for detecting subtle touch gestures. They have designed the VPU to conduct the surface waves into a hermetically sealed chamber that contains the actual sensor. This practically removes any interference from background noise. As the team has fabricated each VPU using the MEMS process, the sensor has the necessary high bandwidth that is typically associated with a MEMS microphone.

Although the MEMS sensor was a high-performance one, the researchers still needed a method for converting the SAWs into swipes, taps, and other gestures. A hard-coded logic would fail to convert them satisfactorily, so the team had to design a machine-learning model with an algorithm to learn from the data.

VPUs typically collect a huge amount of data, and processing this data on an edge computing device in real-time would be a challenge. The researchers dealt with this problem by calculating Mel-Frequency Cepstral Coefficients, which helped in understanding the most informative features of the data. With this analysis, the researchers could reduce the number of features they needed to consider from 24,000 to just 128. They then fed the features into a Random Forest classifier for determining the exact representation of the surface waves.
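The reduce-then-classify pipeline can be illustrated with a simplified stand-in: averaging FFT bands takes the place of the MFCC computation, and a nearest-centroid rule takes the place of the Random Forest, since the goal here is only to show the shape of the pipeline. All signals, frequencies, and dimensions below are synthetic:

```python
import numpy as np

def compact_features(signal, n_features=128):
    """Bin the magnitude spectrum into a fixed number of bands --
    a simplified stand-in for MFCC extraction."""
    spectrum = np.abs(np.fft.rfft(signal))
    bands = np.array_split(spectrum, n_features)
    return np.array([b.mean() for b in bands])

# Two synthetic "gestures": a low-frequency swipe, a high-frequency tap.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 24_000, endpoint=False)  # 24,000 samples, as in the text
swipe = np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(t.size)
tap = np.sin(2 * np.pi * 3_000 * t) + 0.1 * rng.standard_normal(t.size)

centroids = {"swipe": compact_features(swipe), "tap": compact_features(tap)}

def classify(signal):
    """Nearest-centroid stand-in for the Random Forest classifier."""
    f = compact_features(signal)
    return min(centroids, key=lambda k: np.linalg.norm(f - centroids[k]))

print(classify(np.sin(2 * np.pi * 55 * t)))     # swipe
print(classify(np.sin(2 * np.pi * 2_950 * t)))  # tap
```

Each 24,000-sample window collapses to 128 band averages before classification, mirroring the feature reduction the researchers describe, though their actual MFCC features and Random Forest are more discriminative than this sketch.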

FireBeetle Drives the Artificial Intelligence of Things

The next generation of the FireBeetle 2 development board is now available. Targeting the IoT, especially the Artificial Intelligence of Things, it has an onboard camera. According to DFRobot, the creator, the FireBeetle boasts Bluetooth and Wi-Fi connectivity, and an Espressif ESP32-S3 module.

Built around the ESP32-S3-WROOM-1-N16R8 module, the main controller of the FireBeetle provides high performance. It operates with 16MB of flash memory, along with 8MB of pseudo-static RAM or PSRAM that allows it to store more data. The ESP32-S3 chip provides acceleration for computing neural networks and processing signals under high workloads. This makes the FireBeetle ideal for applications like image recognition, speech recognition, and many more.

Espressif has designed the ESP32-S3, the heart of the FireBeetle, for edge AI and low-power tinyML work. With two Tensilica Xtensa LX7 CPU cores, both operating at 240 MHz, the ESP32-S3 also offers vector-processing extensions. The design specifically targets accelerated machine learning, including artificial intelligence workloads. In addition to the module's 8MB PSRAM and 16MB flash memory, the chip itself has 384kB of ROM and 512kB of on-chip SRAM.

The FireBeetle development board, along with its BLE or Bluetooth 5 Low Energy and Wi-Fi connectivity, also includes an onboard camera interface driven by a dedicated power supply circuit. The camera has a 2-megapixel sensor with a 68-degree FOV or field of view. There is a GDI connector, which is useful for adding a TFT display.

DFRobot offers two variants of the FireBeetle development board. One of them is the standard version, namely the FireBeetle 2 ESP32-S3, containing a PCB antenna for wireless connectivity. The second variant is the FireBeetle 2 ESP32-S3-U, and it offers a connector for rigging up an external antenna. It is possible to program both boards from the Arduino IDE, ESP-IDF, and MicroPython.

It is possible to order both development boards from the DFRobot website store. The second variant is the costlier of the two, and both come with volume discounts. Although both variants come with the board and camera, the pin headers are bundled loose, not soldered. DFRobot has published a simple project for the FireBeetle: a camera-based monitor to oversee the growth of plants.

It is possible to use the FireBeetle development board to build a DIY plant growth recorder. It allows monitoring the entire growth process of the plant, starting from seeding right up to maturity, while tracking the environmental conditions throughout. This makes it possible to easily identify any changes that could affect the health and growth of the plant, such as fluctuations in temperature, light levels, and humidity. This information helps organize and optimize the growing conditions, thereby ensuring that the plants get everything they need for proper growth.

The project has a screen for displaying the various parameters it is monitoring. The camera periodically captures images of the plant as it grows, storing them in the board’s memory. The board transmits real-time images and environmental data over Wi-Fi or Bluetooth for regular viewing.
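The monitoring logic itself reduces to threshold checks on each reading, sketched here in plain Python; the parameter names and acceptable ranges are illustrative assumptions, not values from DFRobot's project:

```python
# Acceptable growing ranges (illustrative values, not DFRobot's).
RANGES = {
    "temperature_c": (18, 28),
    "humidity_pct": (40, 70),
    "light_lux": (2000, 10000),
}

def check_conditions(reading):
    """Return the names of parameters that drifted out of range."""
    return [name for name, (lo, hi) in RANGES.items()
            if not lo <= reading[name] <= hi]

sample = {"temperature_c": 31, "humidity_pct": 55, "light_lux": 5000}
print(check_conditions(sample))  # ['temperature_c']
```

On the actual board, each out-of-range result would drive the display and the Wi-Fi or Bluetooth notification described above.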

In-Circuit Monitors for Electronic Devices

During a chip’s lifetime, a wide variety of issues can crop up, and engineers are using sensors that can address them. As the semiconductor ecosystem touches a wide application space, sensors and in-circuit monitors are playing an increasing role in managing the silicon lifecycle, thereby improving its resiliency and reliability.

Engineers are expecting a drastic improvement in the reliability of electronic devices with the addition of these sensors and in-circuit monitors. These expectations are due to a combination of sensor placements in true system-level design, in- and on-chip monitors, and an improvement in data analysis.

In the future, with engineers placing more monitors and sensors at strategic locations for collecting data, the combination and analysis of this data is likely to increase tremendously. In addition, this will lead to a much more detailed understanding of what goes wrong in real time during the life of a semiconductor. Importantly, this is likely to open the door to recovery schemes for keeping devices functioning until they are due for replacement or repair.

All of the above depends on the complexity of the product. Although some regulatory standards for miniaturization are under study, the complexity of the product drives the use of sensors and in-circuit monitors. With consumers wanting greater capabilities in their hands, the requirement is going to increase substantially.

Although users were not interested earlier in concepts like resilience, predictability, and observability, things are changing fast. Chip architects are paying more attention to how systems and devices behave over time, including issues such as silent data corruption. Where earlier it was hard to articulate the business reasons for such inclusion, chip architects are realizing there are missing pieces. While it is still a tussle between the why and the how much, the realization is dawning that it is impossible to have all the computing resources or complex monitors-on-chip to tackle every scenario, especially when such additions need real estate and power to function.

Designers are beginning to realize that advanced design techniques, in conjunction with manufacturing complexities and the latest process nodes, are leading to new challenges. These challenges appear as variable power consumption and affect the useful life of the semiconductor. The power consumption pattern and performance characteristics of a chip change as it travels along the silicon value chain. The variation starts with the pre-silicon design, moving on to new-product bring-up, to system integration, and finally, to its in-field usage.

Monitoring the way a chip degrades over time can throw light on many types of semiconductor failures, especially BTI or bias temperature instability. Using in-circuit monitors, it is now possible to measure areas that show performance and power degradation, on-die temperature variations, and workload stress, and to monitor die-to-die interconnects for heterogeneous designs. Mission-critical systems define specifications such as safety and reliability as the key differentiating parameters. Moreover, with device functionality degrading over time, it is necessary to evolve tests that include lifetime operation as well.

The industry is now widely adopting an approach that includes more and more sensors and in-circuit monitors for electronic devices to monitor the most prominent slack paths. 

3-D Printed Electronics

Today, 3-D printing is among the most popular manufacturing and prototyping methods. However, 3-D printing is not new. In the 1980s, a company filed a patent for constructing 3-D models using stereolithography. Such patents were instrumental in holding back the development, manufacture, and distribution of 3-D printing technology, until recently.

3-D printing typically works by slicing a 3-D design into several small horizontal 2-D sections and then splicing them together by printing each 2-D slice atop the other. 3-D printers commonly use a thermoplastic wire wound on a reel. The printer extrudes this wire through a hot nozzle. There are 3-D printers that build models from paper. They cut out each layer from the paper, and glue one layer to the next. Other, more advanced systems sinter metallic dust using lasers.
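The slicing step reduces to computing the Z-heights at which the model is cut into 2-D layers, as this minimal sketch shows; 0.2 mm is a common but assumed layer height:

```python
def slice_heights(object_height_mm, layer_height_mm=0.2):
    """Z-heights at which a 3-D model is cut into 2-D layers."""
    n = round(object_height_mm / layer_height_mm)
    return [round((i + 1) * layer_height_mm, 3) for i in range(n)]

layers = slice_heights(10.0)    # a 10 mm tall part
print(len(layers), layers[:3])  # 50 [0.2, 0.4, 0.6]
```

A real slicer intersects the model's mesh with a plane at each of these heights to produce the 2-D outline the printer extrudes, but the layer bookkeeping is exactly this simple.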

It is possible to use 3-D printing technology for manufacturing electronic components. This uses a printer and an additive process. However, not everyone sees 3-D printed electronics as being truly 3-D printed. For instance, although transistors are often considered 2-D, in practice they are 3-D, requiring both additive and subtractive processes to build up their insulating layers and their source and gate terminals.

For now, there is little practical application for most 3-D printed electronics, and their use in the real world is rare. This is because manufacturing electronics in the traditional manner is much easier, cheaper, and more reliable. Still, there is a significant amount of research into creating practical devices with 3-D printing technology. So far, there has been significant success in printing transistors, capacitors, diodes, and resistors using 3-D processes.

Although electronic components may use several materials, 3-D printed devices generally use graphene or other organic polymers. Researchers use graphene, as it gives them the ability to create narrow channels and gates while allowing doping. It is easy to dispense organic polymers in solution form, which is ideal for using them in inkjet printers.

However, with printed electronic capabilities still far removed from those of standard electronic systems, it is rare to find commercial applications for printed electronics. Even so, there is plenty of research going into printing them.

Being still in their infancy, printed electronics are presently found only in research labs or in prototypes. Two efforts stand out as tending towards the practical: Pragmatic and Duke University.

A UK-based company, Pragmatic, produces printed electronic components for one-time applications. These are disposable electronic items like RFID tags. The most significant feature of Pragmatic devices is their flexible substrate. They cover all essential components like resistors, capacitors, and transistors. Although Pragmatic has not fully demonstrated a functional device, they have produced Arm core processors, claiming each device consumes 21 mW with an energy efficiency of 1%.

Presenting some of the best examples of practical printed electronics, Duke University claims its products exceed the typical life cycle. Researchers there use a new additive process for creating printed electronic components like resistors, capacitors, and transistors. Their components are mostly carbon-based, while the construction uses aerosol spraying similar to inkjet technology. They build the insulating layers from cellulose.

What is Pressure and How to Measure it?

The concept of pressure is simple: it is a force. Typically measured in psi or pounds per square inch, pressure is the force applied over a specific area. However, there are other ways of expressing pressure and different units of pressure measurement. It is important to understand the differences so that the user can apply specific measurements and units properly.

Depending on the application, there are several types of pressure. For instance, there is absolute pressure. Engineers define the zero point of absolute pressure as the pressure occurring in a perfect vacuum. Absolute pressure readings typically include the pressure of the media added to the pressure of the atmosphere. An absolute pressure sensor provides a fixed reference range, eliminating the effect of varying atmospheric pressure. Thermodynamic equations and relations typically use absolute pressure.

Then there is gauge pressure, which indicates the difference between the pressure of the media and a reference. While the pressure of the media can be that of the gas or fluid in a container, the reference is usually the local atmospheric pressure. For instance, a gauge for measuring tire pressure will read zero when disconnected from the tire, which means it does not register atmospheric pressure. However, when connected to the tire, it reveals the air pressure inside.
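The relationship between the two readings is a single addition, shown here using the standard-atmosphere value in psi:

```python
ATM_PSI = 14.696  # standard atmospheric pressure in psi

def gauge_to_absolute(gauge_psi, atmospheric_psi=ATM_PSI):
    """Absolute pressure = gauge reading + atmospheric pressure."""
    return gauge_psi + atmospheric_psi

# A tire gauge reading of 32 psi is about 46.7 psi absolute.
print(round(gauge_to_absolute(32.0), 1))  # 46.7
```

In practice the local atmospheric pressure varies with altitude and weather, which is exactly why a gauge sensor's zero floats while an absolute sensor's does not.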

Another type of pressure is differential pressure. It is somewhat more complex than gauge or absolute pressure, as it is the difference between the pressures of two media. A gauge pressure sensor can also be termed a differential pressure sensor, as it measures the difference between atmospheric pressure and the media’s pressure. With a true differential pressure sensor, one can measure the difference between any two separate physical areas. For instance, measuring the differential pressure can indicate the pressure drop, or loss, from one side of a baffle to the other.

Compared to the above three, sealed pressure is less common. However, it is useful as a means of measurement. It measures the pressure of a media compared to a sample of atmospheric pressure that is sealed hermetically within a transducer. Exposing the pressure port of the sensor to the atmosphere will cause the transducer to indicate a reading close to zero. This is due to the presence of ambient atmospheric pressure on one side of the diaphragm and a fixed atmospheric pressure on the other. As they are nearly the same, the reading it indicates is close to zero. When they differ, the reading will be a net output other than zero.

The internal pressure can change due to differences in temperature. This may create errors exceeding the accuracy of the sensor. This is the main reason engineers use sealed sensors for measuring high pressures—the changes in the references cause only small errors that do not affect the readings much.

Engineers typically use several units when expressing measurements of pressure. These units are easy to convert using the conventions of the International System of Units, even when they are not part of that system.
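Converting between common units is straightforward once each is expressed in pascals, the SI unit; the factors below are the standard definitions:

```python
# Conversion factors from each unit to pascals.
TO_PA = {"psi": 6894.757, "bar": 100_000.0, "atm": 101_325.0, "kPa": 1_000.0}

def convert(value, from_unit, to_unit):
    """Convert a pressure value between units via pascals."""
    return value * TO_PA[from_unit] / TO_PA[to_unit]

print(round(convert(1, "atm", "psi"), 3))     # 14.696
print(round(convert(14.5, "psi", "bar"), 3))  # 1.0
```

Routing every conversion through a single base unit keeps the table small: adding one new unit requires one new factor, not a factor for every pair.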