Monthly Archives: August 2014

Meet Bob – the Security Guard Robot

Although security guards are deployed in many places that people visit regularly, few of us could recall where we saw a specific guard on a particular day, because we pay little attention to the guards on duty. Bob is different: you cannot help but look at him, remember him and tell your friends about him later.

That is because Bob is a goofy-looking security guard and a robot. He, or rather it, is an autonomous robot based on the MetraLabs "Scitos A5" platform, programmed by the University of Birmingham, and it runs on Linux.

Bob is on a three-week trial run at the Gloucestershire headquarters of the UK-based security firm G4S Technology. The School of Computer Science at the University of Birmingham designed the robot and named it Bob, and G4S is evaluating Bob's performance as a trainee security officer. The University of Birmingham hosts the STRANDS project, which aims to use robots in more versatile ways in the workplace; Bob is part of this $12.2 million project.

Bob is built on the lines of the Germany-based MetraLabs Scitos A5 robot. If you have seen the SoftBank Pepper robot made by Aldebaran, Bob looks much like an armless, stripped-down version; even the built-in tablet display is present. The difference between the two is in their programming: Pepper can read and respond to human emotions, while Bob is trained to notice changes in a given environment.

With built-in scanners and 3D cameras, Bob can build a map of its patrol area. Being a mobile robot, Bob can identify objects and autonomously maneuver around them, and when its batteries run low it returns to its docking station to recharge. According to G4S, the security robot is programmed with activity recognition algorithms, so it can detect the movement of people and draw conclusions about changes occurring in the environment over time. For example, Bob can identify when and where objects disappear or reappear, detect whether fire doors are open or closed and learn where people can go.
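STRANDS has not published Bob's internals here, but the gist of the change detection described above can be sketched in a few lines of Python. The function and object names below are purely illustrative:

```python
# Hypothetical sketch: compare two snapshots of objects observed at the
# same patrol waypoint and report what has appeared or disappeared.

def detect_changes(earlier, later):
    """Compare two sets of observed object labels and report changes."""
    appeared = later - earlier       # present now, absent before
    disappeared = earlier - later    # present before, absent now
    return appeared, disappeared

# Two patrols past the same spot on consecutive days:
monday = {"fire-extinguisher", "chair", "laptop"}
tuesday = {"fire-extinguisher", "chair", "umbrella"}

appeared, disappeared = detect_changes(monday, tuesday)
print(appeared)      # {'umbrella'}
print(disappeared)   # {'laptop'}
```

A real system would of course work on sensor data rather than hand-written labels, and would weigh how unusual each change is before flagging it.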

Bob is unarmed, so it cannot apprehend a thief in the act. However, Bob can speak and contact human guards for assistance. Human security officers typically carry out a very wide range of tasks and may have to react to fast-changing, unpredictable events that require on-the-spot decisions. Although the STRANDS robot security guard will not replace a human, it can support the security team as an additional patrolling resource, carrying out frequent routine checks and highlighting abnormal situations that require the security team to respond.

The Scitos A5 from MetraLabs sells primarily as a mobile service robot for exhibition-booth and point-of-sale applications. The Scitos robots typically run Fedora Linux with SELinux extensions, whereas Bob runs Ubuntu Linux. The interface consists of a 15-inch, 1024×768 touchscreen, dual loudspeakers, a microphone and 32 LEDs that provide feedback signals.

Use your mobile device’s headset port for data acquisition

Today, almost all of us use mobile phones every day, and we depend on them for many features that personal computers offered earlier. The major advantages of mobile phones are their mobility, compact form factor, constant network connectivity and freedom from a power cord (except when charging). Moreover, mobile phones are now platforms that support continuous sensing applications.

Although mobile phones nowadays house many sensors such as imagers, gyroscopes and accelerometers, other sensors such as soil moisture, air quality and EKG have not been integrated yet. Many people want support for such sensors and would prefer a small set of direct-connect interfaces that can power external peripherals and transfer data to and from them. This has resulted in a search for a universal peripheral interface port.

Every mobile phone has a headset port, which is almost standardized: users can physically and electrically connect a vast range of hands-free and headphone audio devices. The mobile phone's headset port is therefore a suitable candidate for such a peripheral interface. Recently introduced peripherals show that designers and manufacturers have a growing interest in using the headset port for more than just headsets.

Transferring power and data to peripheral devices via the headset port looks like an attractive proposition considering the cost, simplicity and ubiquity involved. However, different mobile phones vary considerably in the power delivery capability, microphone bias voltage and passband characteristics of their headset ports.

Therefore, contrary to recent claims, one is forced to conclude that the headset port is not as universal as it is made out to be. For example, peripherals designed to work with iPhones may fail on Windows or Android phones, and vice versa. Moreover, designs for smartphones may not suit less capable feature phones. Mobile phone peripherals may therefore have a hard time working with the headset ports of different phones.

A new platform, called the AudioDAQ, makes it easier to acquire data continually via the headset port of a mobile phone. Unlike existing phone peripheral interfaces such as HiJack, AudioDAQ draws all the necessary power from the bias voltage of the microphone. It encodes all data as analog audio while taking advantage of the voice-memo application built into the phone for continuously collecting data.
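As a rough illustration of the encoding idea (not the actual AudioDAQ firmware, and with an assumed ratio between sensor and audio sample rates), slow sensor readings can be carried in an audio stream by holding each value for a window of samples and averaging each window back on the phone side:

```python
# Sketch only: carry slow sensor data inside an audio stream by
# oversampling, then recover it by averaging each window.

OVERSAMPLE = 8  # audio samples per sensor reading (assumed ratio)

def encode(readings):
    """Repeat each reading so it survives at the audio sample rate."""
    samples = []
    for r in readings:
        samples.extend([r] * OVERSAMPLE)
    return samples

def decode(samples):
    """Average each window back into one sensor reading."""
    return [sum(samples[i:i + OVERSAMPLE]) / OVERSAMPLE
            for i in range(0, len(samples), OVERSAMPLE)]

ekg = [10, 40, -20]                  # toy sensor values
assert decode(encode(ekg)) == ekg    # round trip recovers the data
```

Averaging over each window also suppresses some of the noise picked up along the audio path, which is one reason an analog scheme like this can stay power efficient.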

AudioDAQ is therefore not limited to iOS devices; it works smoothly on smartphones and feature phones alike, with no hardware modification required on the phone. Compared to HiJack, AudioDAQ supports extended sampling periods, a result of its power-efficient analog design, making it suitable for a large class of sensing applications.

The efficient AudioDAQ design draws all its power from the microphone bias voltage, which is present on every phone, whether smartphone or feature phone, Android or iOS. Moreover, the voice-memo application is present on almost all mobile phones. That makes AudioDAQ nearly universal in its application. Its designers have demonstrated the viability of the architecture with an end-to-end system that captures EKG signals continuously for several hours and sends the collected data to the cloud for storage, further processing and visualization.

Raspberry Pi Lights up an RGB LED Matrix Panel

Colorful LED screens are a joy to watch. Bright LEDs making up a 16×32 display are not only easy to use but also low cost; you may have seen such displays in Times Square. Controlling such a display is simple if you use the low-cost, versatile, credit-card-sized single-board computer, the Raspberry Pi or RBPi. Although the wiring is simple, the display draws considerable power when active.

The items you need for this project are a 16×32 RGB LED matrix panel, female-to-female jumper wires, male-to-male jumper wires, a 2.1mm-to-screw jack adapter, an RBPi board and a 5V 2A power supply. Use the female-to-female jumper wires to connect the display to the GPIO connector pins of the RBPi. Although the exact connections are display specific, they generally follow this pattern:

GND on display to GND on the RBPi (blue or black)
OE on display to GPIO 2 on the RBPi (brown)
CLK on display to GPIO 3 on the RBPi (orange)
LAT on display to GPIO 4 on the RBPi (yellow)
A on display to GPIO 7 on the RBPi (yellow or white)
B on display to GPIO 8 on the RBPi (yellow or white)
C on display to GPIO 9 on the RBPi (yellow or white)
R1 on display to GPIO 17 on the RBPi (red)
G1 on display to GPIO 18 on the RBPi (green)
B1 on display to GPIO 22 on the RBPi (blue)
R2 on display to GPIO 23 on the RBPi (red)
G2 on display to GPIO 24 on the RBPi (green)
B2 on display to GPIO 25 on the RBPi (blue)
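If you end up writing your own driver in Python, the wiring above can be captured as a single mapping. The dictionary below simply restates the list and assumes BCM pin numbering:

```python
# Display-pin to RBPi GPIO mapping, restating the wiring list above
# (BCM numbering assumed).

PIN_MAP = {
    "OE": 2, "CLK": 3, "LAT": 4,     # control lines
    "A": 7, "B": 8, "C": 9,          # row-address lines
    "R1": 17, "G1": 18, "B1": 22,    # top-half color data
    "R2": 23, "G2": 24, "B2": 25,    # bottom-half color data
}

# On a real RBPi you would then configure the pins roughly like this:
# import RPi.GPIO as GPIO
# GPIO.setmode(GPIO.BCM)
# for pin in PIN_MAP.values():
#     GPIO.setup(pin, GPIO.OUT)
```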

When connecting the wires, ensure that both the display and the RBPi are powered off, as the display can pull some power from the GPIO pins. Once all the data pins are connected as above, it is time to connect the power supply. The panel has a power supply header and a cable with two red wires for the positive supply and two black wires for the negative. When connecting these wires to the screw jack adapter, take care to maintain the proper polarity. Additionally, double-check that the power supply is rated at 5V, as any higher voltage is likely to fry the display. When powering up, switch on the display first and the RBPi last.

To display an image or a message, you must first convert it to PPM (portable pixmap) format. Image editors can do this for you, and the free, open-source GIMP works well. Once the image is in the required format and placed in the expected directory, the display program picks it up and it appears on the display. Shift registers on the back of the display module clock the pixel data in, while the RBPi does a lot of work bit-banging the pixels onto the screen.
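To see what such a file actually contains, here is a minimal parser for the plain-text P3 flavor of PPM (the display program's own loader may differ): a magic number, width, height, the maximum channel value, then one RGB triple per pixel.

```python
# Minimal parser for plain-text PPM ("P3") files, the format GIMP can
# export for the display program.

def parse_p3(text):
    """Return (width, height, pixels) from a plain PPM string."""
    # Tokenize, discarding '#' comments anywhere in the file.
    tokens = [t for line in text.splitlines()
              for t in line.split("#")[0].split()]
    assert tokens[0] == "P3", "not a plain PPM file"
    width, height, maxval = int(tokens[1]), int(tokens[2]), int(tokens[3])
    values = [int(t) for t in tokens[4:]]
    pixels = [tuple(values[i:i + 3]) for i in range(0, len(values), 3)]
    return width, height, pixels

# A 2x1 image: one red pixel, one blue pixel.
ppm = "P3\n2 1\n255\n255 0 0  0 0 255\n"
print(parse_p3(ppm))  # (2, 1, [(255, 0, 0), (0, 0, 255)])
```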

You may use the code as it is in C, or you may prefer to port it to Python. Currently, the program displays only eight colors.
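The eight-color limit follows from the panel's one-bit-per-channel data lines: without PWM, each of R, G and B is simply on or off, giving 2³ = 8 combinations. A hypothetical quantizer from full-color pixels down to the panel's palette might look like this:

```python
# Reduce an 8-bit-per-channel pixel to the panel's one bit per channel.
# The threshold value is an arbitrary choice for this sketch.

def to_3bit(r, g, b, threshold=128):
    """Quantize an RGB pixel to (R, G, B) bits: 8 possible colors."""
    return (int(r >= threshold), int(g >= threshold), int(b >= threshold))

print(to_3bit(255, 200, 10))  # (1, 1, 0) -> yellow
print(to_3bit(30, 30, 30))    # (0, 0, 0) -> black
```

Richer color depth would require rapidly re-scanning the panel with PWM, at the cost of even more work for the RBPi.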

What is a 3D Tablet?

We hear so much about 3D today that we are no longer surprised by 3D printing, 3D movies, 3D gaming consoles, 3D TV sets and so on. Therefore, 3D tablets ought not to come as a surprise either. Since we live in a 3D environment, it is no wonder that we try to capture it in 3D. Very soon we will have 3D mobile devices that not only display 3D movies and games but also record videos and pictures in full 3D.

In the past few weeks, Google set the news world abuzz by announcing Project Tango, and the company is even working with NASA. The goal of Project Tango is to let a mobile device sense space and movement in a way similar to how humans do. Google released its first prototype in February 2014: an Android smartphone with a five-inch screen. The smartphone uses special software to track the device's movements fully in 3D and, by making over 250,000 3D measurements every second, builds a virtual model of the user's environment.
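As a toy illustration (certainly not Tango's actual pipeline), the simplest way such a stream of small per-frame motion measurements becomes a position estimate is dead reckoning: summing displacement vectors over time.

```python
# Toy dead-reckoning sketch: accumulate per-frame (dx, dy, dz)
# displacements into a path of positions.

def integrate(displacements):
    """Sum displacement vectors into a sequence of 3D positions."""
    x = y = z = 0.0
    path = [(x, y, z)]
    for dx, dy, dz in displacements:
        x, y, z = x + dx, y + dy, z + dz
        path.append((x, y, z))
    return path

steps = [(0.1, 0.0, 0.0), (0.1, 0.0, 0.0), (0.0, 0.2, 0.0)]
print(integrate(steps)[-1])  # final position: (0.2, 0.2, 0.0)
```

Real systems combine such motion integration with depth sensing and map corrections, since raw dead reckoning drifts over time.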

The Wall Street Journal reports that Google is on its way to building a first-generation 3D tablet as part of Project Tango. According to the report, apart from the usual sensors present on current tablets, the 3D tablet will carry additional, advanced vision sensors such as sophisticated 3D cameras and infrared depth sensors, along with dedicated software.

The report suggests that Google may produce about 4,000 of these seven-inch tablets to present at its annual developer conference. The tablet could have the ability to create accurate virtual worlds from real-world environments, similar to mock-up sets, which would be of great assistance to movie producers and game developers, as it could cut down on digitizing time.

The report also conjectures that the Movidius vision processor, known as Myriad 1, will power the new tablet; it can map space and motion in real time with detailed accuracy and precision. Myriad 1 is designed specifically to handle these tasks, and Movidius offers a set of tools for developers planning to implement 3D solutions quickly.

Very soon, users will experience a new way of using a mobile device to perceive the world, and Google is setting the direction that smart mobile vision systems are expected to take.

Although 3D technology is nothing new, commercialization by a company such as Google, at an affordable price, is the turning point in its adoption. Phones and consumer tablets running the Android operating system are already highly popular. By developing this compelling technology, Google is helping fields such as medicine, real estate, engineering and automotive, which make heavy use of video and imaging.

As many of these fields already make extensive use of imaging technology, the next stage will open up huge vistas for them. Imagine doctors and researchers able to see and understand human health in an entirely new way. Other uses could be as diverse as a property inspection by a prospective buyer or the examination of a road project.