
What are Olfactory Sensors?

We depend on our five senses to help us understand the world around us. Each of the five senses (touch, sight, smell, hearing, and taste) contributes its own information to the brain, which combines these inputs to build a better understanding of our environment.

Today, with the help of technologies such as machine learning (ML) and artificial intelligence (AI), we can make complex decisions with ease. ML and AI also empower machines to better understand their surroundings, and equipping them with sensors only augments their information-gathering capabilities.

So far, most sensing devices, such as proximity and light-based sensors, remain limited because they need direct physical contact or a clear line of sight to function correctly. As today's systems trend toward greater complexity, it is increasingly difficult to rely solely on such simple sensing technology.

Olfaction, or the sense of smell, functions by chemically analyzing low concentrations of molecules suspended in the air. The biological nose has receptors for this activity, which, on encountering these molecules, transmit signals to the parts of the brain that are responsible for the detection of smell. A higher concentration of receptors means higher olfaction sensitivity, and this varies between species. For instance, compared to the human nose, a dog’s nose is far more sensitive, allowing a dog to identify chemical compounds that humans cannot notice.

Humans have recognized this superior olfactory ability in dogs and put it to work on various tasks. One advantage of olfaction over sight is that it does not rely on line of sight for detection: odors can be detected from objects that are obscured or otherwise hidden from view. Olfactory sensor technology can therefore work without requiring invasive procedures, making it ideally suited for a range of applications.

With advanced technology, scientists have developed artificial smell sensors that mimic this extraordinary natural ability. These sensors analyze chemical signatures in the air, unlocking new levels of safety, efficiency, and early detection in places like the doctor's office, the factory floor, and the airport.

The healthcare industry holds some of the most exciting applications for olfactory sensors, because medical practice depends on early diagnosis to deliver the most effective clinical outcomes. Conditions like diabetes and cancer cause detectable changes in the body's chemistry, and hence in body odor. Because olfactory sensors are non-invasive, using them to detect these changes provides an early diagnosis that can significantly improve the chances of effective treatment and recovery.

Industry is also adopting olfactory sensors. Industrial processes often produce hazardous byproducts, and with olfactory sensors in place, it is easy to monitor the chemical composition of the air and flag the buildup of harmful gases before they reach dangerous levels.

As the sense of smell does not require physical contact, it is ideal for detection in large spaces. For instance, olfactory sensors are ideal for airport security, where they can collect information about passengers and their belongings as they pass by. All they need is a database of chemical signatures along with processing power to analyze many samples in real-time.
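To make this concrete, the sketch below shows one simple way such matching could work, assuming the sensor front end reports relative concentrations of a fixed set of compounds. The signature database, compound values, and similarity threshold are purely illustrative, not real sensor data.

```python
# Minimal sketch: matching a sensed chemical signature against a database of
# known signatures using cosine similarity. All values here are illustrative.
import numpy as np

# Hypothetical database: each known odor is a vector of relative
# concentrations for the same fixed set of target compounds.
SIGNATURE_DB = {
    "acetone_marker": np.array([0.80, 0.10, 0.05, 0.05]),
    "ammonia_leak":   np.array([0.10, 0.70, 0.10, 0.10]),
    "clean_air":      np.array([0.25, 0.25, 0.25, 0.25]),
}

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify_sample(sample, threshold=0.9):
    """Return the best-matching signature, or None if nothing is close enough."""
    best_name, best_score = None, 0.0
    for name, signature in SIGNATURE_DB.items():
        score = cosine_similarity(sample, signature)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else (None, best_score)

# Example reading from the (hypothetical) sensor front end
reading = np.array([0.75, 0.12, 0.08, 0.05])
print(classify_sample(reading))
```

In a deployed system, the same lookup would run continuously over many samples, with the database holding the chemical signatures of interest for that site.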

Modular Machine Vision

As the artificial intelligence (AI) landscape changes, in most cases too fast to track, industrial vision systems must follow suit. These include the automated quality inspection systems of today and the autonomous robots of the future.

Whether you are an original equipment manufacturer (OEM), a systems integrator, or a factory operator, getting the maximum performance out of a machine vision system requires future-proofing your platform. This is necessary so that you are not left anxious about having launched a design only months or weeks before the introduction of the next game-changing architecture or AI algorithm.

Traditionally, an industrial machine vision system is made up of an optical sensor such as a camera, lighting to illuminate the area being captured, a controller or host PC, and a frame grabber. In this chain, the frame grabber is of particular interest. This device captures still frames from the camera's output at full resolution. High-resolution stills simplify the analysis, whether it is performed by classical computer vision algorithms or by AI.

The optical sensor or camera connects directly to the frame grabber over specific interfaces. The frame grabber is typically a slot card plugged into the vision platform or PC. It communicates with the host over a PCI Express bus.

Apart from capturing high-resolution images, the frame grabber can also trigger and synchronize multiple cameras simultaneously. It can perform local image processing, including color correction, as soon as it has captured a still shot. This reduces latency and eliminates the cost of transmitting images to the cloud for preprocessing, while freeing the host processor to run inferencing algorithms, execute the corresponding control functions, and handle other tasks such as switching lights and conveyor belts.
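This split of responsibilities can be pictured as a small two-stage pipeline. The Python sketch below simulates it with synthetic frames and a placeholder inference step; a real system would use the grabber vendor's SDK and an actual model on the host, so treat this only as an illustration of the architecture.

```python
# Sketch of the division of labor: a "frame grabber" stage that synchronizes
# capture and applies color correction locally, and a host stage that is free
# to run inference on the corrected frames. Cameras and inference are simulated.
import queue
import threading
import numpy as np

frame_queue = queue.Queue(maxsize=8)   # grabber -> host hand-off

def grabber_stage(num_frames=10, num_cameras=2):
    for i in range(num_frames):
        # Simulated simultaneous trigger: one synthetic frame per camera.
        frames = [np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
                  for _ in range(num_cameras)]
        # Local preprocessing on the grabber: a simple gain-based color correction.
        corrected = [np.clip(f * np.array([1.1, 1.0, 0.9]), 0, 255).astype(np.uint8)
                     for f in frames]
        frame_queue.put((i, corrected))
    frame_queue.put(None)  # signal end of stream

def host_stage():
    while (item := frame_queue.get()) is not None:
        index, frames = item
        # Placeholder "inference": the host would run the real model here.
        mean_brightness = [float(f.mean()) for f in frames]
        print(f"frame set {index}: mean brightness per camera {mean_brightness}")

threading.Thread(target=grabber_stage, daemon=True).start()
host_stage()
```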

Although this architecture is more complex than some newer designs that integrate the various subsystems in the chain, it is much more scalable. It also provides a higher degree of flexibility, since the image-processing performance achievable is limited only by the number of slots available in the host PC.

However, machine vision systems relying on high-resolution image sensors and multiple cameras can run into a system bandwidth problem. For instance, a 4MP camera streaming uncompressed 8-bit video at 30 frames per second needs a throughput of roughly 120 MB/s, while PCIe 3.0 interconnects offer roughly 1 GB/s of data rate per lane.

On the other hand, Gen4 PCIe interfaces double this bandwidth to almost 2 GB/s per lane. Therefore, you can connect twice as many video channels on your platform without making any other sacrifices.
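The back-of-the-envelope budget below works through these numbers, assuming uncompressed 4MP, 8-bit, 30 fps cameras and an x4 frame-grabber slot; the lane counts and per-lane rates are rounded approximations rather than exact figures.

```python
# Rough bandwidth budget: how many uncompressed camera streams fit in an
# x4 slot on PCIe 3.0 versus PCIe 4.0. All figures are approximate.
MEGAPIXELS = 4
BYTES_PER_PIXEL = 1   # 8-bit mono or raw Bayer
FPS = 30

camera_mb_per_s = MEGAPIXELS * 1e6 * BYTES_PER_PIXEL * FPS / 1e6  # ~120 MB/s

for gen, lane_gb_per_s in (("PCIe 3.0", 1.0), ("PCIe 4.0", 2.0)):
    lanes = 4  # e.g., an x4 frame-grabber slot
    budget_mb_per_s = lane_gb_per_s * 1000 * lanes
    max_cameras = int(budget_mb_per_s // camera_mb_per_s)
    print(f"{gen} x{lanes}: ~{budget_mb_per_s:.0f} MB/s, "
          f"supports about {max_cameras} such cameras")
```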

However, multiple-camera systems ingesting multiple streams can consume that bandwidth rather quickly. If you then add one or more FPGA or GPU accelerator cards for higher-accuracy, low-latency AI or for executing computer vision algorithms, you have a potential bandwidth bottleneck on your hands.

Therefore, many industrial machine vision integrators make tradeoffs. They may add more host CPUs to compensate for the shortage of bandwidth, use a backplane-based system so the accelerator cards can play a bigger role, or change over to a host PC with integrated accelerators. Either way, the arrangement adds significant cost and increases power consumption and thermal dissipation. Modularizing the system architecture can safeguard against this.

Efficiency and Performance of Edge Artificial Intelligence

Artificial intelligence (AI) is a very common phrase nowadays. We encounter AI in smart home systems, in the intelligent machines we operate, in the cars we drive, and even on the factory floor, where machines learn from their environments and can eventually operate with as little human intervention as possible. For these use cases to succeed, however, computing technology had to develop to the point where it could be decentralized to the place in the network where the data is generated, typically known as the edge.

Edge artificial intelligence or edge AI makes it possible to process data with low latency and at low power. This is essential, as a huge array of sensors and smart components forming the building blocks of modern intelligent systems can typically generate copious amounts of data.

This makes it imperative to measure the performance of an edge AI deployment in order to get the most out of it. Gauging the performance of an edge AI model requires specific benchmarks that indicate how it performs on standardized tests. However, there are nuances in edge AI applications: the application itself often influences the configuration and design of the processor, and such distinctions often rule out generalized performance metrics.

In contrast with data centers, a multitude of factors constrain the deployment of edge AI. Chief among them are physical size and power consumption. For instance, the automotive sector is witnessing a huge increase in electric vehicles carrying a host of sensors and processors for autonomous driving. Manufacturers must implement them within the limited capacity of the vehicle's battery supply, so power efficiency parameters take precedence.

In other applications, such as home automation, the dominant constraint is the physical size of the components. The design of AI chips must therefore treat these restrictions as guidelines, with the corresponding benchmarks reflecting how well a chip adheres to them.

Apart from power consumption and size constraints, the way the machine learning model is deployed also determines the role of the processor, and this can impose specific requirements when analyzing its performance. For instance, benchmarks for a chip performing object detection in an IoT-enabled factory will differ from those for a chip performing speech recognition. Estimating edge AI performance therefore requires benchmarking parameters that reflect real-world use cases.
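As a rough illustration, a use-case-specific benchmark can be as simple as timing the deployed model on representative inputs and reporting latency percentiles and throughput. The Python sketch below does that; the run_inference function is a hypothetical stand-in for the real model, and energy per inference would have to be logged separately where the hardware exposes it.

```python
# Minimal benchmark harness: latency percentiles and throughput for whatever
# inference function the deployment actually uses.
import time
import statistics

def run_inference(sample):
    # Placeholder workload standing in for the deployed model.
    return sum(x * x for x in sample)

def benchmark(samples, warmup=10):
    for s in samples[:warmup]:          # warm-up runs are not timed
        run_inference(s)
    latencies = []
    for s in samples:
        start = time.perf_counter()
        run_inference(s)
        latencies.append(time.perf_counter() - start)
    latencies.sort()
    return {
        "median_ms": statistics.median(latencies) * 1e3,
        "p99_ms": latencies[int(0.99 * (len(latencies) - 1))] * 1e3,
        "throughput_per_s": len(latencies) / sum(latencies),
    }

# Representative (here, synthetic) inputs for the target use case
samples = [[float(i % 7)] * 1024 for i in range(500)]
print(benchmark(samples))
```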

For instance, in a typical modern automotive application, sensors such as cameras and LiDAR generate the data that the AI model must process. In a single consumer vehicle fitted with an autonomous driving system, this can easily amount to two to three terabytes of data per week. The AI model must process this huge amount of data in real time and provide outputs such as street sign detection, pedestrian detection, and vehicle detection. The volume of data the sensors produce depends on the complexity of the autonomous driving system and, in turn, determines the size and processing power of the AI core. The power consumption of the onboard AI system depends on the quality of the model and the manner in which it pre-processes the data.
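To put that figure in perspective, the short calculation below converts two to three terabytes per week into an average sustained data rate. Actual sensor output is far burstier, so this is only an order-of-magnitude illustration.

```python
# Average sustained data rate implied by 2-3 TB of sensor data per week.
SECONDS_PER_WEEK = 7 * 24 * 3600

for terabytes in (2, 3):
    avg_mb_per_s = terabytes * 1e12 / SECONDS_PER_WEEK / 1e6
    print(f"{terabytes} TB/week ~ {avg_mb_per_s:.1f} MB/s sustained on average")
```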

The Law, Big Data, and Artificial Intelligence

We use a lot of electronic gadgets in our lives, revel in artificial intelligence, and welcome the presence of robots. This trend is likely to grow in the future, as we continue to allow these systems to make many decisions about our lives.

It has long been common practice to use computer algorithms for tasks such as insurance assessment and credit scoring, among other things. Often, the people using these algorithms do not understand the principles involved and accept the computer's decision with no questions asked.

As machine learning and predictive modeling become more sophisticated in the near future, complex algorithm-based decision-making is likely to reach into every field. Individuals will then have an even poorer understanding of the complex web of decision-making they are subjected to when applying for employment, healthcare, or finance. However, resistance is also building, mainly in the EU, as two Oxford researchers have found from their reading of a law expected to come into force in 2018.

With a growing number of corporations misusing data, the EU has drawn up the General Data Protection Regulation (GDPR), which imposes severe fines on such corporations. The GDPR also contains a clause entitling citizens to have any machine-driven decision process explained to them.

The GDPR also codifies the 'right to be forgotten' while regulating the overseas transfer of an EU citizen's private data. Although these provisions have been much talked about, not many are aware of two other clauses within the regulation.

The researchers feel these two clauses may heavily affect the rollout of AI and machine learning technology. According to a report by Seth Flaxman of the Department of Statistics at the University of Oxford and Bryce Goodman of the Oxford Internet Institute, the two clauses may even make illegal much of what already happens with personal data.

For instance, Article 22 gives individuals the right not to be subject to a decision based solely on automated processing where that decision produces legal effects concerning them or similarly significantly affects them.

Organizations carrying out this type of activity rely on several escape clauses. For instance, one clause permits automated profiling, in theory covering any type of algorithmic or AI-driven profiling, provided the organization has the explicit consent of the individual. However, this raises the question of whether insurance companies, banks, and other financial institutions will process an individual's application for credit or insurance only if that consent is given. A refusal by such an institution can clearly have a significant effect on the individual.

According to Article 13, the individual has the right to a meaningful explanation of the logic involved. However, organizations often treat the inner workings of their AI and machine learning systems as closely guarded secrets, even when those systems are specifically designed to work with an individual's private data. After May 2018, this may change for organizations intending to apply such algorithms to the data of EU citizens.

This means proponents of the machine learning and AI revolution will need to address certain issues in the near future.