Category Archives: Robotics

How Sensors Help Seniors Live Independently

With the benefits of medical science and increased awareness, people now live longer than their ancestors did. Along with living longer, they also want to live as independently as possible in their senior years. However, independent lifestyles carry certain risks, including inadequate care that leads to deteriorating health, and debilitating falls. Researchers are addressing these issues by developing smart homes, using sensors and other devices and technologies to enhance the safety of residents while monitoring their health conditions.

In-home sensors permit unobtrusive monitoring of individuals. This offers enormous potential for timely interventions and for improving health trajectories, because health problems can be detected early, before they become more serious. Individuals are thus assured of continued high functional ability and independence, with better health outcomes.

The University of Missouri has an ongoing project in HAS, or Health Alert Systems, using sensor technology. The team is testing HAS in senior housing in Cedar Falls, Iowa and in Columbia, Mo. They presently use motion sensors to monitor activity, acoustic and vision sensors for fall detection, Kinect depth images for gait analysis, and webcams for silhouette images. They have a new hydraulic bed sensor to capture quantitative restlessness, respiration, and pulse. HAS also uses pattern recognition algorithms to detect pattern changes in the data collected by the sensors. Based on this, HAS can generate health alerts and forward them to clinicians, who review them further to determine appropriate interventions.
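
The project’s actual algorithms are more sophisticated, but as a rough illustration of the pattern-change idea, a hypothetical Python routine might flag a day whose activity count drifts far outside a resident’s recent baseline; the threshold and data here are made up:

```python
# Hypothetical sketch of sensor-based change detection, not the actual
# University of Missouri HAS code: flag a health alert when today's
# activity level drifts well outside a resident's recent baseline.
from statistics import mean, stdev

def check_for_alert(daily_counts, threshold=2.0):
    """daily_counts: motion-sensor event counts per day, oldest first."""
    baseline, today = daily_counts[:-1], daily_counts[-1]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return False
    z = (today - mu) / sigma
    return abs(z) > threshold  # large deviation -> forward alert to clinician

# Example: a sudden drop in activity triggers an alert for review.
history = [410, 395, 420, 405, 398, 415, 160]
print(check_for_alert(history))  # True
```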

Researchers at the university are evaluating the usability and effectiveness of HAS for managing chronic health conditions. They are presently testing the HAS at remote sites, away from healthcare providers. Researchers expect this approach will provide important information on ways to scale the system up into other settings. According to the researchers, the next big step will be to move the system into independent housing, where most seniors prefer to be. This also offers significant potential healthcare cost savings, enabling seniors to live independently.

This research will improve health care and quality of life for older adults. Researchers are focusing on newer approaches to help health care providers identify potential health problems early. This will offer a model for eldercare technology that keeps seniors independent while reducing healthcare expenses. The project also has a training plan: it will prepare the next generation of researchers to handle real cyber-physical systems, mentoring students through an interdisciplinary team while integrating research outcomes into classroom teaching.

Similar efforts are under way elsewhere. For example, researchers at Intel Labs and Carnegie Mellon University in Pittsburgh are working on ways to take the drudgery out of housework. They are presently designing HERB, or Home Exploring Robotic Butler, a smart and resourceful robot. According to the researchers, HERB will be able to enter a room, assess its layout, and move about by itself.

Researchers at Intel Labs believe disabled and senior citizens will be early adopters of robot butlers, as they most need help around the house.

Farming With Drones & Robots

According to Heidi Johnson, crops and soil agent for Dane County, Wisconsin, “Farmers are the ultimate ‘innovative tinkerers.’” Farming, through the ages, has undergone vast changes. Although in the developing world you will still find the stereotypical farmer planting his seeds and praying for rain and good weather while waiting for his crops to grow, farm technology has progressed. We now have twenty-four-hour farming, and driverless combines and autonomous tractors have moved out of agro-science fiction. Farmers are good at developing things that are close to what they need.

For example, the Farm Tech Days Show has farmers discussing technology ranging from the latest sensors to cloud processing for optimizing yield, and robotics that can take over manual tasks. Most farmers are already conversant with data analytics, cloud services, molecular science, robotics, drones, and climate change, among other technological topics. The latest buzz in the agricultural sector is about managing farms that are not a single field but are distributed over multiple small units. This requires advanced mapping and GPS to tailor daily activities, such as the amount of water and fertilizer each plant needs.

That naturally leads to observing, measuring, and responding in real time, because such precision farming depends on technological backup, with data as the crux of responding to what is actually happening in the field. A farmer always wants to know when his plants are suffering, and why.

For example, farmers want sensors that can tell them about nutrient levels in the soil at a more granular level: potassium, phosphorus, nitrogen, and so on. They also want to know how fast the plant is taking up those nutrients: the flow rate. This information must arrive in real time from sensors, and there must be diagnostic tools to make sense of the data.
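
To make the flow-rate idea concrete, here is a trivial hypothetical calculation in Python; the readings, units, and sampling interval are assumptions, not any particular sensor’s output:

```python
# Illustrative only: estimate a nutrient uptake (flow) rate from two
# timestamped soil readings. Sensor names and units are assumptions.
def uptake_rate(reading_t0, reading_t1, hours):
    """Readings in ppm; returns the depletion rate in ppm per hour."""
    return (reading_t0 - reading_t1) / hours

nitrogen_rate = uptake_rate(52.0, 49.6, hours=8)  # soil N dropped 2.4 ppm
print(f"Nitrogen uptake: {nitrogen_rate:.2f} ppm/hour")  # 0.30 ppm/hour
```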

Although NIFA, the National Institute of Food and Agriculture, has been talking about the Internet of Ag Things, the concept is not new to farmers. In fact, farmers are already collecting information from both air and ground. They do this by flying drones, inserting moisture sensors into the ground, and placing crop sensors on machines when spraying and applying fertilizers.

Presently, what farmers lack is a cost-effective, adequate broadband connection. Although Internet connectivity exists even in remote areas thanks to satellite links, these are not cost effective for the farmer, who has to deal with increasing amounts of data flow.

The current method is to collect data from the field on an SD card or thumb drive and plug it into a home computer. Farmers then transfer this data for analysis to services where crop consultants or co-operative experts are available. The entire turnaround takes a few days.

What farmers need is end-node farming equipment with the necessary computing power. This could process and edit the raw data, sending only the relevant part directly to a cloud service. That requires an automated, real-time operation. With farms getting bigger, farmers need to cover much more acreage while dealing with labor shortages and boosting yields.
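
As a sketch of what such an end node might do, the following filters raw samples locally and forwards only out-of-range readings; the endpoint URL, field names, and thresholds are placeholders, not any real farm service:

```python
# Sketch of the end-node idea under stated assumptions: process raw
# sensor samples locally and forward only out-of-range readings to a
# cloud endpoint. The URL and field names are placeholders.
import json
import urllib.request

NORMAL_MOISTURE = (20.0, 45.0)  # assumed acceptable range, percent

def relevant(samples):
    lo, hi = NORMAL_MOISTURE
    return [s for s in samples if not lo <= s["moisture"] <= hi]

def upload(samples, url="https://example.com/field-data"):  # placeholder
    body = json.dumps(samples).encode()
    req = urllib.request.Request(url, data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

raw = [{"plot": 1, "moisture": 31.2}, {"plot": 2, "moisture": 12.4}]
anomalies = relevant(raw)     # only plot 2 is outside the normal range
if anomalies:
    upload(anomalies)         # only a fraction of the raw data leaves the farm
```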

Build a Humanoid Robot from Raspberry Pi

Raspberry Pi, or RBPi, is the ubiquitous low-cost, credit-card-sized single board computer with huge potential, from teaching youngsters computer programming to driving robots on Mars. When Tyler Spadgenske tried his hand at the RBPi, he used the SBC to create Andy, a completely open-source humanoid robot.

Tyler has tried to make Andy a connected robot. Andy connects with humans through speech, using language as humans do to answer questions. With access to the Internet, he (Tyler assures us Andy is male) can also talk to client programs over the Web. With the ability to connect via Bluetooth, Andy communicates with other robots such as the Mindstorms NXT.

Using a bipedal mechanism for mobility, Andy can do additional tasks such as moving stuff. Of course, Andy has his limitations, but he can collaborate with other robots to get done the things he cannot. Tyler has given Andy only speech as the user interface, since he feels a humanoid should have no other. However, that does not prevent Andy from interfacing with other computers over the Internet, because he is basically a computer himself.

Initially, Tyler used a Robosapien for Andy’s bipedal movement, but that did not work out satisfactorily. He is now building a new bipedal system designed in SolidWorks. Later, Tyler plans to add a torso, a head, and arms for Andy, again using SolidWorks and 3D printing.

Starting up Andy is simple: flip the switch on his back to the on position. Andy has LiPo batteries rated at 11.1 V, 1.3 A, and 1300 mAh. These power his motors through the L298 motor drivers, which the RBPi controls. As soon as the RBPi receives power, regulated by a UBEC, it starts executing Andy’s software. This begins with some configuration checks, such as starting the server and setting run modes. Andy then settles down and prepares to listen to his microphones.

Now Andy is up and running as a state machine. He listens for commands from either his microphones or his server, first converting any command received to text and then executing it.

After converting the command to text, Andy interprets it by comparing it against the command set in his repertoire. That gives him the function he must execute for a specific command. For a shutdown command, for example, Andy initiates a complete sequential software and hardware shutdown, ultimately switching himself off. For any other command, Andy executes it and then goes back to waiting for commands from his microphone or server.
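
Tyler’s actual code is open source; the sketch below is not it, but it shows the listen/interpret/execute loop the description implies, with speech-to-text stubbed out:

```python
# Not Tyler's actual code: a minimal sketch of Andy's listen/interpret/
# execute loop as described above. Speech-to-text is stubbed out.
import sys

def shutdown():
    print("Beginning sequential shutdown...")
    sys.exit(0)

COMMANDS = {            # Andy's repertoire maps command text to functions
    "shutdown": shutdown,
    "hello": lambda: print("Hello, human."),
}

def run(get_next_command_text):
    while True:                         # state machine: wait, interpret, act
        text = get_next_command_text()  # from microphone or server
        action = COMMANDS.get(text.strip().lower())
        if action:
            action()                    # execute, then return to listening
        else:
            print(f"Unknown command: {text!r}")

# run(input)  # e.g. type commands at a prompt to test, instead of speaking
```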

Andy’s brain, the RBPi, controls almost everything, from speech recognition and motor control to the rest of Andy’s software. Andy has three L298 motor drivers, each capable of controlling and driving two motors, so Andy can drive a total of six motors. As the RBPi has only a limited number of GPIO pins, Tyler expanded them using an MCP23017 chip.
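
For illustration, driving one L298 input pair through an MCP23017 might look like the following; the register addresses come from the MCP23017 datasheet, while the I2C address and pin wiring are assumptions, not Andy’s actual schematic:

```python
# Sketch of driving one L298 input pair through an MCP23017 I2C port
# expander, as Andy does in principle. Pin assignments are assumptions.
import smbus  # python-smbus, commonly available on Raspbian

I2C_ADDR = 0x20   # MCP23017 default address with A0-A2 grounded
IODIRA   = 0x00   # port A direction register (MCP23017 datasheet)
OLATA    = 0x14   # port A output latch

bus = smbus.SMBus(1)                         # I2C bus 1 on later RBPi models
bus.write_byte_data(I2C_ADDR, IODIRA, 0x00)  # set all of port A as outputs

def motor_forward():
    bus.write_byte_data(I2C_ADDR, OLATA, 0b00000001)  # IN1=1, IN2=0

def motor_reverse():
    bus.write_byte_data(I2C_ADDR, OLATA, 0b00000010)  # IN1=0, IN2=1

def motor_stop():
    bus.write_byte_data(I2C_ADDR, OLATA, 0b00000000)  # both inputs low
```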

Tyler plans to give Andy 10 degrees of freedom with the new SolidWorks hardware. New features will include battery voltage monitoring, a power-on LED, a five-segment LED output, and ten servos: six for the arms and four for the legs.

The GoPiGo Robot Kit for the Raspberry Pi

Making a robot work with the tiny Raspberry Pi, or RBPi, computer has never been so easy. If you use the GoPiGo robot kit for the RBPi, all you will need is a small Phillips-head screwdriver. The GoPiGo kit comes in a box containing a battery box for six or eight AA batteries, two bags of hardware, two bags of acrylic parts, two motors, the GoPiGo board, and a pair of wheels. To assemble all this into a working robot, follow these step-by-step instructions.

Start with the biggest acrylic part in the kit: the body plate, or chassis, of the GoPiGo. Lay the plate on the GoPiGo circuit board and align its two holes with those on the circuit board. Place two short hex spacers in the holes below the body plate so you can tell which side is up.

Next, attach the motors to the chassis, using the four acrylic Ts in the kit. Do not overtighten the bolts while attaching the motors, as this may crack the acrylic.

With the motors in place, it is time to attach the two encoders, one for each motor. The encoders fit on the inside of the motors and poke through the acrylic chassis. Encoders are an important part, providing feedback on the speed and direction of rotation of each motor. If the encoders will not stay on, use blue sticky tack to hold them in place.

Now it is time to attach the GoPiGo board to the chassis. Place the GoPiGo board on the spacers and line up its holes with the holes in the chassis before securing them together with screws. Two hex supports on the back of the GoPiGo board let you attach the castor wheel.

That brings us to attaching the wheels to the GoPiGo. Do this gently, backing the wheels off so they do not touch or rub against the screws. The battery box comes next, placed as far back on the chassis as possible. This gives extra clearance and prevents the box from hitting the SD card on the RBPi.

This completes the mechanical assembly of the GoPiGo robot, and only the RBPi remains to be attached. Locate the black plastic female connector on the GoPiGo and slide the GPIO pins of the RBPi into it. The RBPi is protected by a protective plate, or canopy, which screws onto the chassis.

Make the electrical connections according to the instructions. Be careful while flashing the GoPiGo firmware, and leave the motors disconnected during flashing. After connecting the GoPiGo for the first time, if you find a motor running backwards, simply reverse its connector.

GoPiGo comes with an ATmega328 microcontroller operating on 7-12 V DC. SN754410 ICs handle motor control, with two optical encoders using 18 pulse counts per rotation and a wheel diameter of 65 mm. External interfaces include single ports of I2C, serial, analog, and digital/PWM. The idling current is about 300-500 mA, and the full-load current is 800 mA to 2 A with both motors, the servo, and the camera running with the RBPi Model B+.
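
From those figures, each encoder pulse corresponds to roughly 11.3 mm of travel; a short worked example:

```python
# Worked example from the specs above: 18 encoder pulses per wheel
# rotation and a 65 mm wheel diameter.
import math

PULSES_PER_REV = 18
WHEEL_DIAMETER_MM = 65

circumference = math.pi * WHEEL_DIAMETER_MM    # ~204.2 mm per rotation
mm_per_pulse = circumference / PULSES_PER_REV  # ~11.3 mm per pulse

def distance_travelled_mm(pulse_count):
    return pulse_count * mm_per_pulse

print(distance_travelled_mm(90))  # 5 full revolutions -> ~1021 mm
```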

Solar Powered Drone Beams Internet

Certain regions of the Earth are presently beyond the reach of the Internet. More than 4 billion people lack Internet access, and nearly 10% of the world’s population lives so far from fiber optic cables or cell towers that they cannot reach the Internet at all. Facebook is set to end this isolation by having a drone fly overhead while beaming Internet down to such areas.

At their Connectivity Lab, a division of Facebook’s Internet.org, researchers confirm the completion of such a drone. This is the first step before Facebook builds a larger fleet. They have not yet flown the full-size craft, but Facebook has already been testing the concept over the UK with versions one-tenth the size. They intend to conduct flight tests of the full-size drone before the end of this year.

Facebook will use the solar-powered, V-shaped carbon fiber craft, named Aquila (Latin for eagle), to beam wireless Internet connectivity down and expand Internet access. About a year ago, Facebook launched Internet.org. Although the intention was to provide Internet access to those in the world without a reliable connection, the project has drawn considerable criticism for not adhering to net neutrality, especially in India.

Facebook designed and built Aquila in 14 months. The drone is meant to stay in the air for 90 days without touchdown. To launch it initially, technicians will attach helium balloons to the plane.

With a wingspan of 42 meters (about 46 yards), Aquila has to move constantly to stay aloft, so it will circle within a three-km (two-mile) radius. During the day, when the craft can generate energy from the sun, it will float up to 90,000 feet (about 27 km). At night, the craft drifts down to 60,000 feet (about 18 km) to conserve energy. While not planning to sell the drones at present, Facebook intends to use them to expand Internet access.

The research team has been able to increase the data capacity of the lasers involved in the project. This is one of the biggest breakthroughs: the new system can communicate at 10 gigabits per second, using a ground-based laser to talk to a dome on the underbelly of the plane. This is about ten times faster than current capabilities allow.

Facebook is not alone in its endeavor to bring wireless Internet to rural regions. Rival Google has a similar program up its sleeve, Project Loon, which plans to float high-altitude helium balloons with transmitters attached. Although Google has not launched the project yet, it claims Loon is at a more advanced stage than Aquila is at present.

Very soon, then, you may see a huge 900 lb. drone, nearly the size of a Boeing 737, slowly circling 11 miles up in the sky. Currently, Facebook’s mission is mired in controversy. All over the world, critics are questioning several practices of Facebook’s Internet.org on security, fairness, and privacy grounds. There is a danger that countries may use it to spy on and repress their citizens. In addition, first-time users of the Internet might be limited to the news and information Facebook provides them.

Playing 4-Bot with the Raspberry Pi

At some time or other, we have all played Connect-4, or Four-in-a-Row, against either a human or a computer opponent. It is a simple game in which you and your opponent each try to get four same-color pieces in a row while trying to prevent the other from doing so. The first to line up four adjacent pieces of the same color wins the game.

Conventionally, the game board has 42 squares, in six rows and seven columns. Each player starts with a set of discs of one color, and to be successful, each player has to constantly plan and revise strategy. An SBC, or single board computer, such as the Raspberry Pi, or RBPi, is therefore a suitable candidate for playing Connect-4. Besides enjoying the game, you hone your skills as a DIY enthusiast by building it. Of course, this project requires some skill in mechanical assembly, and in coding as well.

You could use a horizontal board and an X-Y arm mechanism to let the RBPi deliver its pieces to the required square. However, a vertical board makes the mechanism simpler, as the arm then has to travel along only one axis, gravity taking care of the other. The vertical board is made of two faces with a gap in between, and separators mark the columns, allowing discs to be dropped into one of the columnar spaces between the two faces. Both faces have 42 matching circular cutouts, so it is easy to see where each disc lands. A claw on the arm mechanism picks up a disc from a stack, positions itself above the required column, and releases the disc, letting it fall between the board faces.

The software uses the Python Imaging Library to process an image of the game board. To enhance readability, the image is down-sampled to 16 colors and then divided into a grid. The task is only to identify each of the 42 spaces on the board as red, yellow, or empty. This is easily done by reading the RGB value of each space in the grid and saving the data as an array. This array forms the board state after every move, and it is passed on to the AI, or Artificial Intelligence, on the RBPi for calculating the next move.
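
A minimal sketch of this extraction step, assuming Pillow (PIL) and a cropped, aligned photo of the board; the color thresholds are assumptions you would tune for your camera and lighting:

```python
# A sketch of the approach described above, assuming Pillow (PIL) and a
# cropped, aligned photo of the board; thresholds are assumptions.
from PIL import Image

ROWS, COLS = 6, 7

def board_state(path):
    img = Image.open(path).convert("RGB")
    # down-sample to 16 colors to make classification easier
    img = img.convert("P", palette=Image.ADAPTIVE, colors=16).convert("RGB")
    w, h = img.size
    cw, ch = w // COLS, h // ROWS
    board = []
    for r in range(ROWS):
        row = []
        for c in range(COLS):
            # sample the centre pixel of each grid cell
            red, green, blue = img.getpixel((c * cw + cw // 2,
                                             r * ch + ch // 2))
            if red > 150 and green < 100:
                row.append("red")
            elif red > 150 and green > 150 and blue < 100:
                row.append("yellow")
            else:
                row.append("empty")
        board.append(row)
    return board  # the 6x7 array passed to the AI for the next move
```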

The AI is the well-known Minimax algorithm, applicable to games of this nature, and there is a Python library for it. Using tree-searching methods, the algorithm looks several moves ahead to calculate the next best move. Getting the RBPi to play effectively can be quite a challenge, as even a small 6×7 Connect-4 board has 4,531,985,219,092 possible game positions. Therefore, the program trades off absolute perfect play against a reasonable time per move. Strike the right balance and the RBPi plays quite intelligently while still completing each move in about 25 seconds, acceptable for a flowing game.
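
For reference, here is a compressed sketch of depth-limited Minimax; the board methods (legal_moves, apply, undo, score, game_over) are placeholders for a real Connect-4 implementation, not a specific library’s API:

```python
# A compressed sketch of depth-limited Minimax, the idea named above;
# the board methods are placeholders for a real Connect-4 implementation.
def minimax(board, depth, maximizing):
    if depth == 0 or board.game_over():
        return board.score(), None
    best_move = None
    if maximizing:
        best = float("-inf")
        for move in board.legal_moves():
            board.apply(move)
            value, _ = minimax(board, depth - 1, False)
            board.undo(move)
            if value > best:
                best, best_move = value, move
    else:
        best = float("inf")
        for move in board.legal_moves():
            board.apply(move)
            value, _ = minimax(board, depth - 1, True)
            board.undo(move)
            if value < best:
                best, best_move = value, move
    return best, best_move

# Deeper search plays better but takes longer; capping the depth is what
# keeps each move in the tens of seconds on an RBPi, as described above.
```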

CORATAM with the Raspberry Pi

The ubiquitous single board computer, the Raspberry Pi, or RBPi, is a perfectly suitable candidate for CORATAM, or Control of Aquatic Drones for Maritime Tasks. Sitting within each drone, an RBPi becomes part of a swarm of robotic systems. Portugal is using this novel method to explore and exploit its maritime opportunities, as the sea is one of the country’s main resources. However, although swarms of land-based and air-based robots have been studied extensively, swarms in aquatic environments, such as those needed to study the proposed expansion of Portugal’s continental shelf, are a different breed altogether.

Tasks in the aquatic environment are usually expensive to conduct because of the special operational requirements of manned vehicles and support crews. Portugal has therefore taken an alternative approach, using collectives of relatively simple and inexpensive aquatic robots. As each robot is easily replaceable, these swarms have high potential applicability for essential tasks such as prospecting sites, sea border patrolling, bridge inspection, sea life localization, environmental monitoring, aquaculture, and so on.

The collectives of robots use decentralized control based on the principles of self-organization. This gives them the capability to perform efficiently on tasks that require robustness to faults, scalability, and distributed sensing.

With the development of CORATAM, Portugal hopes to achieve three main objectives. The first is to explore this novel approach to control synthesis on a set of real-world maritime tasks. The second is to develop a swarm of aquatic robots with a fault-tolerant ad-hoc network architecture, heterogeneous in nature and scalable. The third is to release all the hardware and software components developed under an open-source license, enabling others to build their own aquatic robots.

Each robot is about 60 cm long and inexpensive, as the designers used widely available, off-the-shelf hardware. Each robot is a differential-drive mono-hull boat that can travel at a maximum speed of 1.7 m/s in a straight line, with a maximum angular speed of 90°/s.

An RBPi 2 SBC handles the onboard control of each robot. The robots communicate via a wireless protocol (802.11g Wi-Fi), each broadcasting UDP datagrams. The neighboring robots and the monitoring station receive these broadcasts, forming a distributed network without any central coordination or single point of failure. All the robots carry compass sensors and GPS, and each broadcasts its position to its neighbors every second.
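
A minimal sketch of such a one-second position broadcast in Python; the port number and message format here are assumptions, not the CORATAM protocol:

```python
# A minimal sketch of the decentralized position broadcast described
# above; the port number and message format are assumptions.
import json
import socket
import time

PORT = 5005  # placeholder

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)

def broadcast_position(robot_id, lat, lon):
    msg = json.dumps({"id": robot_id, "lat": lat, "lon": lon}).encode()
    sock.sendto(msg, ("255.255.255.255", PORT))  # no central coordinator

while True:               # every robot repeats this once per second
    broadcast_position("robot-3", 38.7436, -9.1952)  # example GPS fix
    time.sleep(1)
```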

All the robots use prototype hardware, making them inexpensive compared with the majority of commercially available unmanned surface vehicles. The robots therefore serve as a platform suitable for research and development, and one that is easy to maintain. Additionally, the open-source nature of the platform accommodates different manufacturing processes, sensory payloads, design choices, and actuators.

An artificial neural network-based controller runs each robot. The normalized sensor readings form the inputs of the neural network, while the outputs of the network drive the robot’s actuators. Each sensor reading and actuation value is updated every 100 ms.
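
Structurally, the controller amounts to something like the following sketch; the layer sizes and the random weights are stand-ins for whatever the project actually evolves or trains:

```python
# Sketch of the controller structure described above: a small feedforward
# network mapping normalized sensor readings to actuator commands.
# The weights here are random stand-ins, not a trained controller.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 10))   # 10 sensor inputs -> 8 hidden units
W2 = rng.normal(size=(2, 8))    # 8 hidden -> 2 outputs (left/right motor)

def control_step(sensor_readings):
    x = np.asarray(sensor_readings)   # already normalized to [0, 1]
    hidden = np.tanh(W1 @ x)
    return np.tanh(W2 @ hidden)       # motor commands in [-1, 1]

# Called every 100 ms with fresh sensor values, per the update rate above.
left, right = control_step(np.full(10, 0.5))
```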

Raspberry Pi and a Simple Robot

Using a pair of DC motors and connecting them to two wheels can be the basis of a simple robot. Once you add a single board computer to this basic structure, you can do almost whatever you like with your robot. However, making a robot do more than simply run around requires many mechanical appendages that may prove difficult to get unless you have access to a workshop or are proficient with 3D printing.

To simplify things for beginners, the robot chassis from Adafruit is a versatile kit. With this simple robot kit and a single board computer such as the Raspberry Pi or RBPi, you can start your first lessons in robotics.

As the kit is for beginners building their first robot, there are no sensors. A Motor HAT (Hardware Attached on Top) controls two motors connected to two wheels on a chassis. The front of the chassis has a swivel castor for stability. The RBPi mounts on the chassis, and a battery supplies the necessary power for the SBC and the motors.

Once you are familiar with generating a set of instructions in Python to make the robot move the way you want it to, you can start adding sensors to the kit. For example, simply adding a camera will allow the robot to see where it is going. Adding an ultrasonic range finder will allow the robot to avoid bumping into obstacles in its path.

The Mini Rover Robot Chassis Kit from Adafruit includes almost everything one needs to build a functional robot. It has an anodized aluminum chassis, two mini DC motors, two motor wheels, a front castor wheel, and a top plate with standoffs for mounting the electronics.

It is convenient to use the latest RBPi models, such as the Model 2, B+, or A+, as these have suitable mounting holes that allow easy attachment to the robot chassis. Although it is also possible to use the RBPi Zero, its small size makes it difficult to mount the Motor HAT securely.

The Motor HAT can drive DC and stepper motors from the RBPi and is suitable for small robot projects. The brass standoffs help to hold the Motor HAT securely to the RBPi. Power comes from two sources. One 4x AA battery pack supplies the motors. Another small USB battery pack powers the RBPi. The RBPi also requires a Wi-Fi dongle to keep it connected to the computer and to control the RBPi robot.

Your RBPi must run the latest version of the operating system, Raspbian Jessie. If you do not have it, let the RBPi access the Internet and download the necessary software.

The examples included with the Motor HAT library provide adequate software to start this project. For example, you can use the example scripts to make the robot move forward, move backward, or turn in different directions. Preferably, place the robot on level ground where there are no obstacles. As the robot has no sensors, it can hit something or easily fall off the edge of a table.
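
Using Adafruit’s legacy Motor HAT Python library, a minimal drive-forward script looks roughly like this; the motor ports and speed value are assumptions for your particular build:

```python
# A short sketch in the spirit of the bundled Motor HAT examples;
# motor ports and speed are assumptions for your particular wiring.
import time
from Adafruit_MotorHAT import Adafruit_MotorHAT

mh = Adafruit_MotorHAT(addr=0x60)     # default Motor HAT I2C address
left = mh.getMotor(1)
right = mh.getMotor(2)

for m in (left, right):
    m.setSpeed(150)                   # speed range is 0-255
    m.run(Adafruit_MotorHAT.FORWARD)  # drive straight ahead

time.sleep(1.0)                       # roll forward for one second

for m in (left, right):
    m.run(Adafruit_MotorHAT.RELEASE)  # stop and let the motors free-wheel
```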

What Can the Raspberry Pi Do After Dark?

A lot more goes on in the museums of the world at night, after everyone has vacated the premises and the guards have locked up the place, than one can imagine. The situation may not be as dramatic as what Ben Stiller shows us in the movie “Night at the Museum,” but it still warrants serious investigation. This is what Tate Britain has done with its After Dark project, with help from the inaugural IK Prize.

Tate Britain holds one of the largest art collections in the world. In August 2014, it ran the After Dark project, in which visitors could experience the thrill of a prohibited voyage without once stepping into the museum. For 25 hours, more than 100,000 viewers across the globe watched live video streamed over the Internet from four robots let loose in the darkness of the museum. Additionally, 500 people could take control of the robots for approximately 12 minutes each, guiding them as they liked and seeing what the robots were witnessing.

RAL Space engineered the robots, which are based on the tiny single board computer, the Raspberry Pi or RBPi. Working alongside the UK Space Agency, or UKSA, RAL Space is one of the world’s leading centers for the research and development of space exploration technologies.

RAL Space worked in close collaboration with Tate Britain, and the team behind After Dark combined the latest software with bespoke RBPi hardware. They designed and engineered the robots, creating a world-first, one-of-a-kind experience that attracted audiences from all over. The Workers, a digital product design studio, designed the Web interface for After Dark.

For the late-night explorations within the museum, people from all over the world get to guide the four robots by taking control of any one of them. RAL Space designed the robots to select new operators every few minutes. As long as the event is live, people can request control of a robot from the project website. The robots know you are waiting and, as soon as a slot frees up, will take you on a ride. Even while you wait, you can watch video of the event streamed live on the project website and on Livestream.com.

You can use the on-screen buttons in the web-based control interface or the arrow keys on your keyboard to control the robot. You can make the robot move forward or turn, and even make it look up or down. The robot senses obstacles around it and feeds this information back to you. Therefore, even though it is nearly dark, you, the navigator, can operate the robot easily.

If you take the robot too close to an object, it will stop moving and inform you through the web-based control interface. You still have control over the robot: you can turn it on the spot and move it forward again, continuing the journey, provided the path ahead is clear.

A Portable Raspberry Pi Powered Display

If you have a motor to control, the RasPiRobot Board is a very good fit. Apart from controlling motors, you can use its switch-mode voltage supply to power your RBPi, or Raspberry Pi, from a large range of battery types. Therefore, with a pack of AA batteries and the RasPiRobot shield, you can make a very convenient, portable, RBPi-powered display.

To make an RBPi display that shows the current time as scrolling text, you need to collect a few parts: the Adafruit Bicolor Square Pixel LED Matrix along with its I2C backpack, a RasPiRobot Board version 2, a battery holder with an on/off switch holding four AA batteries, and an RBPi Model B+ with 512 MB RAM.

Not much wiring is involved in putting the parts together. The only soldering needed is for the LED matrix display, which comes in kit form. This is not too difficult, as all the instructions are included in the kit. Once the soldering is done, fit the LED matrix display into the I2C socket of the RasPiRobot Board.

If you are using the latest version 2 of the RasPiRobot Board, be careful that its extended header pins do not reach up to the bare connections on the underside of the LED matrix module. If they do, insulate the module by covering the header pins with a layer or two of electrical insulating tape.

Next, plug the RasPiRobot Board onto the RBPi, making sure it fits over all the GPIO pins on the right-hand side of the RBPi. The RasPiRobot Board has two screw terminals marked GND and Vin. Attach the flying leads from the battery box to these screw terminals, taking care of correct polarity.

Fit four fully charged rechargeable AA batteries into the battery holder with the correct polarity. When you turn on the switch on the battery holder, you should see the power LED on the RBPi light up, as well as the two LEDs on the RasPiRobot Board.

To operate the LED matrix from the RBPi, install the Adafruit I2C and Python Imaging libraries, following Adafruit’s instructions. The guide also has a few examples to check that your I2C interface, and hence the LED matrix display, is working. For example, you can have the display slowly scroll text showing the current time.

The LED Backpack library has a number of sub-libraries that handle the low-level interface to the matrix display. The Python Imaging Library handles writing text onto the display as an image. This example uses the TrueType font FreeSansBold at size 9, although other fonts can look good as well. You may need to experiment with fonts, as most are not designed for the 8×8 pixels of the matrix. You can also select the display color.
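
Putting the pieces together, here is a hedged sketch of the scrolling-time display, assuming the Adafruit_Python_LED_Backpack library and Pillow; the font path and timing are assumptions for your setup:

```python
# A sketch of the scrolling-time display, assuming the older
# Adafruit_Python_LED_Backpack library and Pillow are installed.
import time
from PIL import Image, ImageDraw, ImageFont
from Adafruit_LED_Backpack import BicolorMatrix8x8

display = BicolorMatrix8x8.BicolorMatrix8x8()
display.begin()

# Font path is an assumption; adjust for your Raspbian installation.
font = ImageFont.truetype(
    "/usr/share/fonts/truetype/freefont/FreeSansBold.ttf", 9)
text = time.strftime("%H:%M")

# Render the text once into a wide strip, then slide an 8x8 window
# across it, pushing each frame to the matrix. Red fill lights the
# red LEDs; green or yellow work the same way via the RGB channels.
width = 8 * len(text)
strip = Image.new("RGB", (width, 8))
ImageDraw.Draw(strip).text((0, -1), text, font=font, fill=(255, 0, 0))

for x in range(width - 8):
    display.set_image(strip.crop((x, 0, x + 8, 8)))
    display.write_display()
    time.sleep(0.1)  # scroll speed: one pixel every 100 ms
```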