
What are Depth Sensors?

Ocean-going ships typically use depth-sensing techniques mainly for locating underwater objects to avoid running into them. This includes gauging the depth of the sea floor. The principle involves measuring the time a burst of sound directed into the water takes to return after reflecting off an object. This time of flight gives a measure of the object's distance from the source of the sound, as the speed of sound in water is fairly constant, varying only with the water's density and temperature.
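The relation is simple: the echo travels down and back, so the one-way distance is half the speed of sound multiplied by the round-trip time. A minimal sketch, using a nominal 1500 m/s for seawater (real systems correct this for density and temperature, as noted above):

```python
# Minimal sketch of the time-of-flight relation used by an echo sounder.
# 1500 m/s is a nominal speed of sound in seawater, not a calibrated value.

SPEED_OF_SOUND_WATER = 1500.0  # metres per second, nominal


def depth_from_echo(round_trip_time_s: float) -> float:
    """Return the one-way distance to the reflecting object.

    The burst travels down and back, so the one-way distance is half
    the speed multiplied by the measured round-trip time.
    """
    return SPEED_OF_SOUND_WATER * round_trip_time_s / 2.0


# Example: an echo returning after 0.4 s implies a sea floor about 300 m down.
print(depth_from_echo(0.4))  # 300.0
```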

With the advent of piezoelectric devices, it became possible to use ultrasonic frequencies to measure distance, using the same principle of measuring the time of flight. As electronic components improved, engineers applied the same technique to light waves in place of sound, which yielded greater measuring accuracy as well as the ability to measure smaller distances.

Smartphone manufacturers are using depth-sensing techniques to enable facial detection, recognition, and authentication in their devices. However, this technology has far more potential, as Qualcomm is demonstrating. In collaboration with Himax Technologies, Qualcomm is promoting its Spectra image signal processor technology along with a 3-D depth-sensing camera module for Android systems. Very soon, we will be witnessing the emergence of a depth-sensor ecosystem, complete with firmware and apps.

Himax has expertise in module integration, drivers, sensing, and wafer optics. Qualcomm has combined its Spectra imaging technology with the technology from Himax to create the SLiM depth sensor suitable for mobiles. It has ample applications in surveillance, automobiles, virtual reality, and augmented reality. Developing the 3-D sensing solution took more than four years.

The camera module from Qualcomm senses depth in real time and simultaneously generates a 3-D point cloud of data in both indoor and outdoor situations. Qualcomm expects smartphone manufacturers to begin incorporating the computer vision camera module in their products in the first quarter of 2018.

The camera module uses infrared light and the well-known time-of-flight technique, based on the speed of light, to resolve the distance to an object. The camera projects dots of infrared light onto the object, creating a cloud of points, which the sensor reads for time of flight, thereby gathering depth information.
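To see how per-pixel depth readings become the point cloud mentioned above, here is an illustrative sketch using the standard pinhole camera model. The focal lengths and principal point are placeholder values, not Qualcomm or Himax figures:

```python
# Illustrative sketch: converting a per-pixel depth map (as a ToF sensor
# reports) into a 3-D point cloud with the pinhole camera model.
import numpy as np

fx, fy = 500.0, 500.0      # assumed focal lengths in pixels
cx, cy = 320.0, 240.0      # assumed principal point (image centre)


def depth_to_point_cloud(depth: np.ndarray) -> np.ndarray:
    """Convert an HxW depth map in metres to an Nx3 array of XYZ points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]   # drop pixels with no depth reading


# Example with a synthetic 480x640 depth map of a flat wall 2 m away.
cloud = depth_to_point_cloud(np.full((480, 640), 2.0))
print(cloud.shape)  # (307200, 3)
```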

Depth-sensing approaches are gradually moving into mobile handsets and head-mounted displays. Although mobile platforms may not be able to supply adequate power for room-scale 3-D sensing, they are certainly capable of managing the power required by the sensor and the image signal processor to run the complex software needed to translate the point cloud into interactive and useful input.

The sensor packages use active laser illumination in the sub-half-watt range to provide high-quality point clouds at short distances, with structured-light solutions for applications such as facial and gesture recognition. However, for longer distances, such as room-scale sensing over a range of 2-10 meters, the sensor packages will have to use higher-power lasers in the 5 W range.

As the power requirements for longer ranges are beyond those available from average mobile phones, designers are forced to adopt purely camera-based approaches for applications involving longer-distance image recognition.

Let Raspberry Pi Automate those Snake Eyes

If you are looking for something to bring your cosplay masks, props, robots, animatronics, or other spooky sculptures to life for Halloween parties, you can use the Snake Eyes Bonnet as a pair of animated eyes. This is an accessory for driving two 128×128 pixel TFT LCD or OLED displays from a single board computer such as the Raspberry Pi (RBPi). It also has four analog sensor inputs.

The project started life as Electronic Animated Eyes, based on the Teensy 3.2 microcontroller. However, the author found the RBPi to be a better alternative, as it offers some potential benefits, such as hardware-accelerated graphics with antialiasing. With a faster CPU, dual SPI buses, and ample RAM, the RBPi offers faster frame rates. The RBPi also does not require a preprocessing step to decode standard graphics formats such as SVG, PNG, and JPEG. The author has written the eye-rendering code in Python, a high-level language, which makes it easier to customize.
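As a small illustration of why no preprocessing step is needed, a standard Python imaging library can decode a PNG or JPEG eye texture at run time. This is only a sketch, not the author's rendering code; the file name "eye.png" is a placeholder and Pillow is assumed to be installed:

```python
# Decode a standard image format directly on the RBPi (no offline conversion).
# Requires Pillow: pip install pillow. "eye.png" is a hypothetical texture.
from PIL import Image

iris = Image.open("eye.png").convert("RGB")   # decode the PNG at run time
iris = iris.resize((128, 128))                # match the 128x128 displays
pixels = iris.tobytes()                       # raw RGB bytes for a display driver
print(len(pixels))                            # 128 * 128 * 3 = 49152
```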

However, using the RBPi for this project has some downsides as well. The RBPi usually takes a while to boot an operating system from an SD card, and it needs an explicit shutdown procedure. As the RBPi is larger and uses more power than a microcontroller board, it is not very suitable for wearable applications. Moreover, the use of an SD card makes it less rugged.

The author recommends an RBPi model 2 or 3. Although the code runs fine on an RBPi Zero or another single-core RBPi board, the performance will lag greatly. Make sure the RBPi board used for the project has a 40-pin GPIO header.

It is not necessary to connect both displays for the project, as a single eye can also produce a very creative effect. The author recommends OLED displays, as they have a very wide viewing angle along with excellent contrast and color saturation. However, OLEDs are more expensive than TFTs. TFTs are also acceptable as displays, although they may look somewhat washed out for this project. Users may need additional components if they plan on controlling the eyes with a joystick and buttons, or having them react to light, rather than letting them run autonomously.

The author uses bonnet boards to wire up the breakout pins on each display board. The user must decide whether the installation will be temporary or permanent. The space available for wiring may depend on the housing chosen for the installation, and this may influence the choice of connectors and wiring. Wiring has to be done carefully, following the instructions, to avoid disappointment.

Preferably, solder a header at each end and plug all the wires through; this is easier and less error-prone. Keeping the wiring from the bonnet to the display short and tidy ensures the display gets a clean signal, as electrical interference may lead to glitches in the animation.

Start the project by downloading the latest version of the Raspbian Lite operating system and transferring it to an SD card of 2 GB or larger. Follow the instructions here.

What are Stepper Motors Good For?

Stepper motors rotate in discrete steps. These are DC motors with multiple coils arranged in groups or phases. Energizing each phase sequentially enables the shaft of the motor to rotate in single steps. It is possible to achieve very fine positioning and speed control with a computer controlling the stepping. This allows use of stepper motors for several industrial applications involving precision motion control. As stepper motors come in various sizes, styles, and electrical characteristics, it is important to know the parameters that allow selecting the right motor for the job.
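To make the idea of energizing phases in sequence concrete, here is a minimal sketch for a small unipolar stepper driven from an RBPi through a driver board such as a ULN2003. The BCM pin numbers are assumptions; adapt them to the actual wiring:

```python
# Minimal sketch of sequential phase energizing for a small unipolar stepper
# behind a driver board (e.g. ULN2003). Pin numbers below are placeholders.
import time
import RPi.GPIO as GPIO

COIL_PINS = [17, 18, 27, 22]          # assumed BCM pins, one per phase
FULL_STEP_SEQUENCE = [                # energize one phase at a time
    (1, 0, 0, 0),
    (0, 1, 0, 0),
    (0, 0, 1, 0),
    (0, 0, 0, 1),
]

GPIO.setmode(GPIO.BCM)
for pin in COIL_PINS:
    GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)


def step(count: int, delay: float = 0.01) -> None:
    """Advance the motor by `count` single steps."""
    for i in range(count):
        for pin, level in zip(COIL_PINS, FULL_STEP_SEQUENCE[i % 4]):
            GPIO.output(pin, level)
        time.sleep(delay)             # the delay sets the stepping rate

step(200)                             # one revolution for a 200-step motor
GPIO.cleanup()
```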

Stepper motors are good for three things—positioning, speed control, and generating low-speed torque. As they move in repeatable and precise steps, stepper motors are appropriate for applications requiring meticulous positioning, such as 3-D printers, XY plotters, CNC machines, and camera platforms. With their precise incremental movement, stepper motors allow excellent control of their rotational speed, making them suitable for robotics and process automation. Where regular DC motors generate very little torque at low speeds, stepper motors are the opposite, generating their maximum torque at low speeds. This makes them the right choice for applications requiring high precision at low speeds.

It is also necessary to know the limitations of stepper motors—low efficiency, limited high-speed torque, and no feedback. Stepper motors are notoriously low-efficiency devices, as their current consumption is independent of the load they are driving. Moreover, a stepper motor draws maximum current when it is stationary and not doing work. The low efficiency of these motors manifests itself in the high amount of heat they generate. Contrary to other motors, stepper motors exhibit lower torque at high speeds than they do at low speeds. Even steppers optimized for better high-speed operation must be paired with appropriate drivers to achieve it. Servomotors achieve their positions aided by integral feedback, but steppers have no such provision and run open loop. Limit switches or home detectors are therefore necessary for safety and for establishing a reference position.

Selecting a stepper motor for a specific task requires considering three major characteristics—motor size, step count, and gearing. The general rule is that larger motors deliver higher power. Manufacturers specify motor power as torque ratings, and use NEMA numbers to specify frame sizes. To decide whether the motor has the strength to meet your requirement, look at its torque ratings. While NEMA 57 is a monster size, 3-D printers and CNC mills usually use a NEMA 17 size motor. The NEMA numbers also specify standardized faceplate dimensions for mounting the motor.

The step count defines the positioning resolution. A motor has a specific number of steps per revolution, usually ranging from 4 to 400; step counts commonly available are 24, 48, and 200. The resolution of a stepper motor is specified in degrees per step. For instance, a motor rated at 1.8 degrees per step takes 200 steps per revolution. A higher resolution usually comes at the cost of speed and torque: motors with high step counts have lower top speeds and lower torque than similar-sized low-step-count motors running at comparable speeds.
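The arithmetic behind these figures is straightforward; a quick sketch of the conversion between degrees per step and step count:

```python
# Resolution arithmetic for stepper motors: a 1.8-degree motor takes
# 200 steps for one full revolution.
def steps_per_revolution(degrees_per_step: float) -> float:
    return 360.0 / degrees_per_step


def steps_for_angle(angle_deg: float, degrees_per_step: float) -> int:
    """Number of steps needed to rotate by angle_deg."""
    return round(angle_deg / degrees_per_step)


print(steps_per_revolution(1.8))   # 200.0
print(steps_for_angle(90, 1.8))    # 50 steps for a quarter turn
```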

Working with Gas Sensors and the Raspberry Pi

Many devices predicted by earlier science fiction stories and movies have come true. Among them are gas detectors, as envisaged by the TV series Star Trek. If you have a single board computer such as the Raspberry Pi (RBPi), you can use it to detect the type of gas and the air quality around you. Of course, you will need to couple the RBPi with a gas sensor, and among the popular gas sensors available are the BME680 from Bosch and the CCS811 from AMS.

Gas sensors are helpful in sniffing out volatile organic compounds, many of which are not only poisonous but also flammable. Volatile organic compounds may be natural or manmade, including those from paints and coatings, which require solvents to spread into a protective or decorative film. Where the paint and coating industry earlier used toxic solvents, it is now shifting towards aqueous ones. Natural volatile organic compounds may come from the direct use of fossil fuels such as gasoline, or as indirect byproducts such as automobile exhaust gas.

Some volatile organic compounds may also be carcinogenic to humans. Among them are chemicals such as benzene, methylene chloride, perchloroethylene, MTBE, formaldehyde, and more.

BME680

Bosch developed the tiny BME680 sensor specifically for mobile and wearable applications that require low power consumption. This one sensor has high linearity and measures temperature, humidity, pressure, and gas with high accuracy. The 8-pin LGA package measures only 3 x 3 x 0.95 mm, and Bosch has optimized its power consumption based on the specific operating mode.

With high EMC robustness and long-term stability, the BME680 measures indoor air quality, while detecting a broad range of gases and volatile organic compounds. For instance, the BME680 can detect formaldehyde from paints, and other volatile organic compounds from paint strippers, lacquers, furnishings, cleaning supplies, glues, office equipment, alcohol, and adhesives.

Apart from indoor air quality measurement, the BME680 is also useful for applications such as personalized weather stations, measuring skin moisture, detecting changes in rooms, monitoring fitness, warning of dryness or high temperatures, measuring volume and air flow, altitude tracking, and more.
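Reading the BME680 from an RBPi over I2C might look like the sketch below. This assumes the Pimoroni bme680 Python library (pip install bme680); the method names follow that library's examples and may differ between versions, so treat them as assumptions:

```python
# Sketch of reading the BME680 over I2C, assuming the Pimoroni "bme680"
# Python library. Method names follow that library's examples.
import bme680

sensor = bme680.BME680(bme680.I2C_ADDR_PRIMARY)   # 0x76; use SECONDARY for 0x77

# Oversampling and heater settings roughly as in the library's examples.
sensor.set_humidity_oversample(bme680.OS_2X)
sensor.set_temperature_oversample(bme680.OS_8X)
sensor.set_pressure_oversample(bme680.OS_4X)
sensor.set_gas_status(bme680.ENABLE_GAS_MEAS)
sensor.set_gas_heater_temperature(320)            # degrees C
sensor.set_gas_heater_duration(150)               # milliseconds

if sensor.get_sensor_data():
    d = sensor.data
    print(f"{d.temperature:.1f} C, {d.humidity:.1f} %RH, {d.pressure:.1f} hPa")
    if d.heat_stable:
        print(f"gas resistance: {d.gas_resistance:.0f} Ohms")
```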

CCS811

Compared to the BME680, the CCS811 is only a digital gas sensor. It is meant for monitoring indoor air quality using a metal oxide gas sensor, which can detect a wide range of volatile organic compounds. The CCS811 includes a microcontroller unit, an analog-to-digital converter, and an I2C interface.

With optimized low-power modes, AMS has designed the CCS811 for high volume and reliability. It has a tiny form-factor that saves more than 60% in PCB footprint, while producing stable and predictable behavior regardless of air quality at power up.

Similar to the BME680, the CCS811 also measures total volatile organic compounds and equivalent calculated carbon dioxide. However, as the CCS811 consumes about 60 mW, it may be necessary to power it from an external 3.3 V supply.
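A minimal sketch of reading those two values, assuming the Adafruit CircuitPython CCS811 library running on an RBPi with Blinka (pip install adafruit-circuitpython-ccs811); the attribute names follow that library's documentation and should be treated as assumptions:

```python
# Read eCO2 and total VOC from the CCS811 over I2C, assuming the Adafruit
# CircuitPython CCS811 library and Blinka on the RBPi.
import time
import board
import busio
import adafruit_ccs811

i2c = busio.I2C(board.SCL, board.SDA)
ccs = adafruit_ccs811.CCS811(i2c)

while not ccs.data_ready:          # wait for the first valid reading
    time.sleep(0.1)

print(f"eCO2: {ccs.eco2} ppm, TVOC: {ccs.tvoc} ppb")
```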

Both sensors need a working I2C bus on the RBPi to interface and function. Software libraries for the two sensors are available here for the BME680 and here for the CCS811.

Facial and Object Recognition with a Raspberry Pi

If you are using the single board computer Raspberry Pi (RBPi) for vision-related tasks such as facial and object recognition, the NCS or Movidius Neural Compute Stick from Intel could help to boost the rate at which the RBPi carries out its tasks—you actually do not need to employ a server farm for the job.

The RBPi is fully capable of running software for facial image recognition, and hobbyists have long been using the SBC for everything from recognizing faces in videos to identifying obstacles in the path of a robot. However, the rate at which the RBPi carries out such tasks leaves much to be desired, and the NCS helps to improve this rate.

The Movidius NCS from Intel plugs into the RBPi via the USB port. Inside the stick is a Myriad 2 Vision Processing Unit (VPU) with 12 specialized cores that accelerate vision recognition tasks for the RBPi. Although it consumes only a single watt of power, the low-power VPU delivers about 100 gigaflops. When the stick needs higher processing power, it can consume up to 2.5 W.

Users can watch the video Movidius has released for guidance on how to use the NCS. There is also a text guide to help users figure out the nuances of object recognition using the RBPi and the NCS. The video demonstrates the system recognizing a pair of sunglasses and a computer mouse on the table.

To get the demo running, the user needs to download and install a few software libraries. On the hardware side, apart from the RBPi, you also need a Pi camera.
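As a rough idea of how the stick is driven from Python, the sketch below follows the original NCSDK v1 "mvnc" API. Later SDK versions changed these calls, and the graph file and the zeroed input frame here are placeholders, so treat the whole example as an assumption rather than the demo's actual code:

```python
# Rough sketch of running a pre-compiled network on the NCS, based on the
# NCSDK v1 "mvnc" Python API. The graph file and input frame are placeholders.
import numpy as np
from mvnc import mvncapi as mvnc

devices = mvnc.EnumerateDevices()            # find the stick on the USB bus
device = mvnc.Device(devices[0])
device.OpenDevice()

with open("graph", "rb") as f:               # network compiled with the NCSDK tools
    graph = device.AllocateGraph(f.read())

image = np.zeros((224, 224, 3), dtype=np.float16)   # placeholder preprocessed frame
graph.LoadTensor(image, "frame-0")           # send the frame to the stick
output, _ = graph.GetResult()                # blocking call, returns class scores
print(output.argmax())

graph.DeallocateGraph()
device.CloseDevice()
```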

Movidius initially announced an early version of the NCS the previous April. They then released a prototype device, which they named Fathom, before Intel purchased Movidius. According to Dr. Yann LeCun, founding father of convolutional neural networks and director of AI research at Facebook, Fathom was a significant step forward.

Intel then released the NCS, which has broadly the same specifications as the Fathom, except that the NCS has 4 GB of memory, four times that of the Fathom, which helps it support denser neural networks. With the NCS, any robot, big or small, can possess state-of-the-art vision capabilities.

According to Intel, the NCS can lower the barriers for those starting with deep learning application development. It actually offers a simple way for users to add a visual recognition system to their prototype devices such as robots, surveillance cameras, and drones.

As the NCS already has 4 GB of internal memory and handles all the data in a locally stored neural network, it does not have to rely on an Internet connection to a server. In practice, transferring data to and from a remote server would introduce considerable latency, and a processor powerful enough to overcome that latency would consume a large amount of power. The NCS overcomes both of these shortcomings.

The processor on the NCS is more powerful than the RBPi's, although it does not accelerate the training of a neural network, which is a computationally intensive process in vision recognition.

The OpenBCI Cyton Board

OpenBCI stands for Open Brain-Computer Interface. According to OpenBCI, they prefer advancements in science to be made only through open forums, with concerted efforts and the sharing of knowledge by people of different backgrounds. OpenBCI claims to work towards harnessing the power of the open source movement to accelerate ethical innovation in human-computer interface technologies.

OpenBCI offers high-quality but low-cost bio-sensing hardware for interfacing between the human brain and a computer. Their bio-sensing boards are Arduino compatible and provide high-resolution imaging and recording of EEG, ECG, and EMG signals. OpenBCI claims that hobbyists, makers, and researchers in more than 60 countries use their BCI devices to interface brain and computer. Applications of BCI devices include powering machines and mapping brain activity. Anyone interested in brain-computer interfacing, neurofeedback, and bio-sensing can purchase equipment such as electrodes, sensors, boards, and headsets from OpenBCI. The equipment is affordable and of high quality.

Even if you are only curious about brain-computer interfacing, or a new entrant to the field, you need a bio-sensing board from OpenBCI to start with. Select from three types of boards on offer—the Cyton, the Cyton + Daisy, and the Ganglion. The difference between these boards lies in the number of electrodes they can handle—additional channels allow greater spatial resolution for diversity in research.

The Ganglion board offers four channels, each sampling at 200 Hz. The Cyton board has eight channels with a sampling rate of 250 Hz each, while the Cyton + Daisy allows 16 channels at sample rates of 125 Hz. As each channel accepts only one electrode, the more channels a board has, the more electrodes you can use. A Bluetooth dongle compatible with the Ganglion board allows easy connection to a Windows or Linux computer. The Cyton board is directly compatible with Mac computers, and a Bluetooth dongle is not necessary.

As the sample rate of a board connected via the Bluetooth dongle depends on the bandwidth of the dongle, for increased sample rates OpenBCI recommends their WiFi Shield, which transfers data over Wi-Fi and is hence faster than Bluetooth. Users can control the WiFi Shield through HTTP requests, allowing it to send JSON objects with data in nanovolts.
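For reference, streaming samples from a Cyton over its dongle link might look like the sketch below. This assumes the pyOpenBCI Python package (pip install pyOpenBCI); the serial port, class, and attribute names come from that package's examples and should be treated as assumptions, and the scale factor is the commonly quoted ADS1299 count-to-microvolt conversion:

```python
# Sketch of streaming Cyton samples over the serial dongle, assuming the
# pyOpenBCI package. Port name and attribute names are assumptions.
from pyOpenBCI import OpenBCICyton

SCALE_UV = 4.5 / 24 / (2 ** 23 - 1) * 1e6    # ADS1299 counts -> microvolts


def handle_sample(sample):
    # sample.channels_data holds one raw ADC value per connected channel
    microvolts = [raw * SCALE_UV for raw in sample.channels_data]
    print(microvolts)


board = OpenBCICyton(port="/dev/ttyUSB0", daisy=False)
board.start_stream(handle_sample)            # runs until interrupted
```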

Once you have the board, it is necessary to get a set of electrodes or a headset. As the boards come with male header connectors, electrodes with compatible female headers are necessary. For instance, for EMG or ECG, OpenBCI offers EMG/ECG Snap Electrode Cables with matching Solid Gel Foam Electrodes.

The user can plug these electrodes directly into the bio-sensing board, and they are ready to use. Another set of electrodes from OpenBCI, the Gold Cup Electrodes, handles EEG signals in addition to EMG and ECG; however, they need Ten20 conductive paste to operate. Attachment to the body is very simple, requiring affixing the electrodes with medical tape. Users can connect their own electrodes as well.

For attaching electrodes to the scalp easily and without using any paste, OpenBCI offers their Mark IV headset, which is a frame with dry electrodes. The headset allows easy monitoring of EEG signals from the brain.