Improving Computer Vision with Two AI Processors

Computer vision is becoming a necessity in IoT and automotive applications. Engineers are now pushing computer vision to the next level with two new AI processors, which promise to make it both more efficient and more functional.

One of the fastest-growing applications of artificial intelligence, computer vision competes for attention with prestigious fields like robotics and autonomous vehicles. Compared with other AI applications, however, computer vision depends more heavily on the underlying hardware: the imaging systems and processing units matter as much as, or more than, the software.

Engineers are therefore focusing on cutting-edge, state-of-the-art hardware for vision systems. Two companies, Inuitive and Syntiant, are now making headlines in this push.

The Israeli company Inuitive recently announced that its NU4000 edge AI processor will be used by Fukushin Electronics in its new electric cart, POLCAR, giving the cart an integrated obstacle-detection unit.

Fukushin chose the NU4000 because running a sophisticated obstacle-detection unit in a battery-powered vehicle like an electric cart demands both top performance and power efficiency. Inuitive's edge AI processor is a multicore system-on-chip (SoC) that supports several applications on a single chip, including computer vision, simultaneous localization and mapping (SLAM), and 3D depth sensing.

The NU4000 achieves this by integrating three vector cores that together deliver 500 GOPS, alongside three CPUs, a dedicated CNN processor rated at 2 TOPS, a dedicated SLAM engine, and a dedicated depth-processing engine. Inuitive builds the chip on a 12 nm process; it supports up to two displays and six cameras and includes an LPDDR4 memory interface.
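To put the 2 TOPS figure of the dedicated CNN engine in perspective, a back-of-envelope calculation shows roughly how many inferences per second it could sustain. The per-frame workload and utilization below are illustrative assumptions (in the ballpark of a MobileNet-class detector), not Inuitive specifications:

```python
# Rough throughput estimate for a 2-TOPS CNN engine like the NU4000's.
# OPS_PER_FRAME and UTILIZATION are hypothetical example values.

CNN_TOPS = 2.0          # dedicated CNN engine, tera-operations per second
OPS_PER_FRAME = 1.2e9   # assumed ops per inference (MobileNet-class, hypothetical)
UTILIZATION = 0.5       # assume 50% effective hardware utilization

def max_fps(tops: float, ops_per_frame: float, utilization: float) -> float:
    """Upper-bound frames per second the engine could sustain."""
    return tops * 1e12 * utilization / ops_per_frame

if __name__ == "__main__":
    print(f"~{max_fps(CNN_TOPS, OPS_PER_FRAME, UTILIZATION):.0f} frames/s upper bound")
```

Even with conservative utilization, such an engine leaves ample headroom for real-time obstacle detection at typical camera frame rates.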

With its small form factor and low power consumption, the NU4000 is a powerful processor whose feature set could make the obstacle-detection unit a standout application in Fukushin's POLCAR.

California-based Syntiant made news with its new Neural Decision Processor, the NDP200, designed for ultra-low-power deep-learning applications. Its architecture pairs Syntiant's proprietary neural core with an embedded Arm Cortex-M0 processor, and the combination runs at clock speeds of up to 100 MHz.

Syntiant has optimized the NDP200 for power efficiency when running deep neural networks such as RNNs and CNNs, the workhorses of computer vision applications.

Syntiant claims the NDP200 performs vision processing at high inference accuracy while keeping power consumption below 1 mW. The chip can reach an inference throughput of more than 6.4 GOPS and supports networks with more than 7 million parameters, which makes it suitable for running larger models at the edge.
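The quoted numbers translate directly into energy efficiency: 6.4 GOPS at under 1 mW. The arithmetic below works only from those published figures; the 4 Wh battery capacity is a hypothetical example (roughly a small lithium cell), not a Syntiant specification:

```python
# Energy-efficiency arithmetic from the NDP200 figures quoted above.
# BATTERY_WH is an illustrative assumption, not a vendor spec.

GOPS = 6.4        # claimed inference throughput, giga-ops per second
POWER_W = 1e-3    # claimed power ceiling: 1 mW

def ops_per_joule(gops: float, power_w: float) -> float:
    """Operations delivered per joule of energy consumed."""
    return gops * 1e9 / power_w

def runtime_hours(power_w: float, battery_wh: float = 4.0) -> float:
    """Continuous runtime on a small battery, ignoring other system loads."""
    return battery_wh / power_w

print(f"{ops_per_joule(GOPS, POWER_W):.1e} ops/J")   # 6.4e12 ops per joule
print(f"{runtime_hours(POWER_W):.0f} h runtime")     # 4000 h on a 4 Wh cell
```

At roughly 6.4 trillion operations per joule, months of always-on inference from a single small battery become plausible, which is what makes the chip interesting for cameras and doorbells.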

Syntiant expects the chip to suit battery-powered vision applications such as security cameras and video doorbells. Indeed, the combination of deep-neural-network capability and power efficiency could make it the next evolutionary step toward better processors for computer vision.