Category Archives: Robotics

Developments in Autonomous Robots

The COVID pandemic put a lid on air travel. That lid is now slowly lifting, and more people are venturing out. Airports are responding with new robots offering food delivery services.

The Cincinnati/Northern Kentucky International Airport is currently using these Ottobots, four-wheeled autonomous robots made by the robotics company Ottonomy.

At the airport, in the Concourse 8 area, travelers can use a dedicated app to purchase food, beverages, or travel products from select stores, which may be located anywhere in the airport. Once travelers have placed their orders, staff at the store place the items within the cargo compartment of the Ottobot and send it on its way.

While making its way through the airport, the Ottobot robot uses sensors and a LIDAR module to avoid people and obstacles. Ottonomy has designed a contextual mobility navigation system for the robots to allow them to keep track of their whereabouts. Apart from the contextual mobility navigation system, the robots also use other indoor navigational systems like Bluetooth beacons, readable QR codes, and Wi-Fi signals.

Customers can see the Ottobot on their mobiles, thanks to the app, which alerts them once it reaches their location. The app also has a QR code specific to their order. Once the customer holds their QR code for the robot to scan, it unlocks and opens its cargo compartment lid to allow them to retrieve their purchase. User feedback from a pilot project in the airport helped design the current robotic delivery system.

Airports are not the only venue: several urban delivery robots also use four wheels to move along city sidewalks. The wheels are special, as they can pivot and are mounted on articulated legs.

Delivery robots usually have smart lockable cargo boxes and two sets of powered wheels on their bottom. This arrangement works fine while autonomously moving along a smooth pathway. However, it struggles when mounting curbs, climbing stairs, or traversing other common obstacles.

Piezo Sonic, a Japanese robotics company, has developed Mighty, a special delivery robot. They based their design on a concept for robots exploring the moon, which has no smooth sidewalks.

Mighty has four independently powered wheels. They can point straight ahead for normal movement, or pivot sideways to let the robot move laterally in either direction. The four wheels can also pivot partway outward or inward to form a circle, letting Mighty spin on the spot.

Additionally, each wheel has its own hinged leg. Therefore, when the robot moves over an uneven surface, each leg can bend independently to compensate for the difference in height. This helps to keep the main body of the bot level. Mighty can use this feature to climb shallow sets of stairs.

Like other delivery robots, Mighty uses GPS to navigate cities. It also has cameras and LIDAR sensors for dodging hazards and pedestrians. It can easily carry a 20-kg cargo, climb 15-degree slopes, and step over obstacles up to 15 cm tall, all while reaching a top speed of 10 km per hour.

A Google Assistant with the Raspberry Pi

This is the age of smart home assistants, though not the human kind. Over the last couple of years, excitement over these smart home assistants has built to a fever pitch, and every manufacturer is now offering its own version. Apple offers Siri, Amazon presents Echo and Alexa, Microsoft wants us to use Cortana, Google tempts us with the Google Home Assistant, and there are several more in the race. In this melee, Raspberry Pi (RBPi) enthusiasts can make their own smart speaker using the SBC.

Although you can buy Google Home, the problem is it is not available worldwide. However, it is a simple matter to have the Google Assistant in your living room, provided you have an RBPi3 or an RBPiZ. Just as with any other smart home assistant, your RBPi3 home assistant will let you control any device connected to it, simply with your voice.

The first things you need to communicate with your assistant are a microphone and a speaker. The May issue of The MagPi, the official RBPi magazine, carried a nice speaker set sponsored by Google. However, if you missed the issue, you can use any speaker and USB microphone combination available. The MagPi offer is an AIY Voice Kit for making your own home assistant. AIY is an acronym coined from AI, or Artificial Intelligence, and DIY, or Do It Yourself.

The MagPi kit is a very simple arrangement, and the magazine offers a detailed instruction set anyone can follow. If you do not have the magazine, the instructions are available on the AIY Projects website. The kit includes a Voice HAT PCB for controlling the microphone and switch, a long PCB with two microphones, a switch, a speaker, an LED light, a switch mechanism, a cardboard box for assembling the kit, and cables for connecting everything.

Apart from the kit, you will also require additional hardware such as an RBPi3, a micro SD card for installing the operating system, a screwdriver, and some scotch tape.

After collecting all the parts, start the assembly by connecting the Voice HAT PCB. It controls the microphones and the switch, and you attach it to the RBPi3 or RBPiZ using the two small standoffs. Take care to align the GPIO connector on the HAT with the one on the RBPi, and push them together to connect.

The combination of the HAT board and RBPi goes into the first box. Fold the box, taking care to keep the printed words on the outside. Place the speaker inside the box first, aligning it with the side with the holes. Now connect the cables to the Voice HAT, and place the combination inside the box.

Next, assemble the switch and LED, inserting the combination into the box. Take care to connect the cables in proper order according to the instructions. As the last step, use the PCB with the two microphones, and use scotch tape to attach it to the box.

Now flash the SD card with the Voice Kit SD image from the website, and insert it into the RBPi. Initially, you may need to monitor the RBPi with an HDMI cable, a keyboard, and mouse.

Meca500 – The Tiny Six-Axis Robot

Although there are plenty of robots available in the market for a myriad of jobs, one of the most compact and accurate is the Meca500. Launched by Mecademic, based in Montreal, Quebec, it is claimed by its manufacturer to be the smallest and most precise six-axis industrial robot arm in the market.

According to Mecademic, users can fit the Meca500 easily within existing equipment and treat it as an automation component, much more simply than with most other industrial robots. According to Mecademic cofounder Ilian Bonev, the Meca500 is very easy to use and interfaces with equipment through Ethernet. With a fully integrated control system within its base, users will find the Meca500 more compact than similar offerings in the market.

Mecademic designs, develops, and manufactures some of the most compact and accurate six-axis industrial robot arms on the market. The Meca500 is one of their latest products, the first of a new category of small industrial robots, smaller and more ultra-compact than most others.

The first product from Mecademic was DexTar, an affordable, dual-arm academic robot, popular in universities in the USA, France, and Canada. Although Mecademic still produces and supports DexTar on special request, they now focus exclusively on industrial robots, delivering high-precision, small robot arms. With their academic origins, Mecademic has retained a passion for creativity and innovation, and for sharing their knowledge.

With the production of the Meca500, a multipurpose industrial robot, Mecademic has stepped into Industry 4.0 and earned itself a place in the highly automated and non-standard automation industry. With the Meca500, Mecademic offers a robotic system that expands the horizons of automation, as users can control the robot from their phone or tablet.

This exciting new robotic system from Mecademic, the Meca500, features an extremely small size, only half the size of the smallest industrial robot previously available in the market. The Meca500 is very compact, as the controller is integrated within its base and there is no teach pendant. The robot's precision and path accuracy are within 5 microns, and it is capable of performing the most complex tasks with ease.

Applications for Meca500 can only be limited by the users’ imagination. For instance, present applications for the tiny robot include a wide range, such as animal microsurgery, pick and place, testing and inspection, and precision assembly.

Several industry sectors are currently using the Meca500, including entertainment, aeronautics, cosmetics, automotive, pharmaceuticals and health, watchmaking, and electronics. Users can integrate the compact robot within any environment, such as their existing production line, or even as a stand-alone system in their laboratories.

The new category of robots from Mecademic is already smaller, more compact, and more precise than other robots in the market. Mecademic's plans for the future include offering industrial robots that save more space, are more accurate, and are easier to integrate. They envisage this enabling new applications, new discoveries, new products, new medical treatments, and more. Their plan now is to build a greater range of compact precision robots while becoming a leading manufacturer of industrial robots.

Robotics and Motion Control

Automation is a growing trend on factory floors throughout the world, as it is essential to improving efficiency and production rates. When creating an automated factory, engineers may introduce a robotic system or implement a motion control system. Although both can essentially accomplish the same task, each has its own unique setup, motion flexibility, programming options, and economic benefits.

The Basics

A straightforward concept, motion control initiates and controls the movement of a load, thereby performing work. A motion control system is capable of precise control of torque, position, and speed. Motion control systems are typically useful in applications involving rapid start and stop of motion, synchronization of separate elements, or positioning of a product.

Motion control systems comprise the prime mover or motor, the drive, and the controller. The controller plans the trajectory and sends low-voltage command signals to the drive, which in turn applies the necessary voltage and current to the motor, resulting in the desired motion.
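The planning half of that loop can be sketched in code. Below is a minimal trapezoidal velocity profile, a common shape for the trajectory a controller streams to a drive as position setpoints: accelerate, cruise, decelerate. The function name, parameters, and units are illustrative assumptions, not taken from any particular product.

```python
def trapezoidal_profile(distance, v_max, accel, dt=0.01):
    """Return a list of position setpoints covering `distance`
    (units are arbitrary but must be consistent)."""
    t_acc = v_max / accel                 # time to reach cruise speed
    d_acc = 0.5 * accel * t_acc ** 2      # distance covered while accelerating
    if 2 * d_acc > distance:              # short move: triangular profile,
        t_acc = (distance / accel) ** 0.5 # cruise speed is never reached
        d_acc = distance / 2
        v_max = accel * t_acc
    d_cruise = distance - 2 * d_acc
    t_cruise = d_cruise / v_max
    total = 2 * t_acc + t_cruise

    points, t = [], 0.0
    while t <= total:
        if t < t_acc:                     # accelerating
            p = 0.5 * accel * t ** 2
        elif t < t_acc + t_cruise:        # cruising at v_max
            p = d_acc + v_max * (t - t_acc)
        else:                             # decelerating
            td = t - t_acc - t_cruise
            p = d_acc + d_cruise + v_max * td - 0.5 * accel * td ** 2
        points.append(p)
        t += dt
    points.append(distance)               # land exactly on the target
    return points

setpoints = trapezoidal_profile(distance=100.0, v_max=50.0, accel=100.0)
```

In a real system, each setpoint would be sent to the drive once per control cycle; here the list simply grows monotonically from 0 to the target distance.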

An example of a motion controller is the programmable logic controller (PLC), which is both noise-immune and inexpensive. PLCs use the staple form of ladder-logic programming, but newer models also have human-machine interface (HMI) panels, which offer visual representations for programming the machine. With PLCs, industry is able to control logic on machinery along with multiple motion-control setups.
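The staple ladder-logic construct is the "seal-in" rung, which can be sketched in Python for readers unfamiliar with it: a motor starts on a momentary Start button, latches itself on, and drops out on Stop. A PLC evaluates such a rung once per scan cycle.

```python
def scan(start_pressed, stop_pressed, motor_running):
    """One PLC scan of the rung:  (Start OR Motor) AND NOT Stop."""
    return (start_pressed or motor_running) and not stop_pressed

motor = False
motor = scan(True, False, motor)    # operator taps Start: motor latches on
motor = scan(False, False, motor)   # Start released: motor keeps running
motor = scan(False, True, motor)    # Stop pressed: motor drops out
print(motor)                        # False
```

The self-latching term (`or motor_running`) is what lets a momentary push button control a continuously running load.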

Robots are reprogrammable, multifunctional manipulators that can move material, tools, parts, or specialized objects. They can be programmed for variable motion to perform a variety of tasks.

Most components making up a motion control system are also a permanent part of robots. For instance, a robot's makeup includes mechanical links, actuators, and motor speed control. The robot also has a controller, which allows its different parts to operate together under the program code running in it. Most modern robots have HMIs that run operating systems such as Linux. Typical industrial robots take many forms, such as parallel-picker, SCARA, spherical, cylindrical, Cartesian, or a simple articulated robotic arm.

Robot systems also make use of drives or motors to move links into designated positions. Links form the sections between joints, and robots can use pneumatic, electric, or hydraulic drives to achieve the required movement. A robot receives feedback about its environment through sensors, which collect information and transmit it to the controller.

The Differences

While a robot is an expensive, monolithic arrangement, a motion control system has modular components and offers greater control over cost. However, motion-control components require individual programming to operate, and that puts a greater knowledge demand on the user.

Motion control systems, being modular, offer the scope to mix and match old hardware with the new. This facilitates multiple setups, with modular configuration ability, and applicable cost constraints.

With hardware differences between products decreasing rapidly, purchasing decisions are now mostly based on the software of the system. For instance, most modern systems are plug-n-play type, and they rely more on their software for compatibility.

How Good are Cobots at Welding?

The manufacturing industry has been using robots widely for several years as replacements for human labor. A recent advance in this field is the cobot, or collaborative robot. They are called collaborative because their design lets them work alongside an individual as part of a team, rather than replacing the human.

Cobots are good at operations and activities that cannot be fully automated. With conventional robots locked away in cages, workers must ferry parts back and forth between themselves on the assembly line, and process speed does not improve; cobots remove that bottleneck by sharing the workspace.

Manufacturers such as Ford are already on the cobot bandwagon, and the new robots could transform the way the industry works. The Ford factory has collaborative robots installing shock absorbers on vehicles on the production line alongside humans. The cobots work with accuracy and precision, boosting human productivity while saving valuable time and money.

At present, the industry uses four main types of cobots. They are the Safety Monitored Stop, Speed and Separation Monitoring, Hand Guiding, and Power and Force Limiting.

The Safety Monitored Stop is a collaborative feature used when the cobot works on its own, but sometimes needing assistance from an operator. For instance, in an automated assembly process, the worker may need to step in and perform an operation on a part that the cobot is holding. As soon as the cobot senses the presence of the human within its workspace, it will cease all motion until the worker leaves the predetermined safety zone. The cobot resumes its activities only after receiving a signal from its operator.

Speed and Separation Monitoring is similar to the Safety Monitored Stop, with the cobot operating in a predetermined safety zone. However, this cobot's reaction to humans is different. The cobot will not automatically stop because of a human presence, but will slow down until its vision detection system informs it of the location of the person or object. The cobot stops only if the person is within a predetermined area, and waits for the distance to increase before resuming its operations. This cobot is useful in areas where several workers are present, as it requires far fewer human interventions.

Although a Hand Guiding cobot works just as a regular industrial robot does, it has additional pressure sensors on its arm. The operator can therefore teach the cobot to hold an object firmly enough, and to move it quickly enough, without damaging the object while working with it securely. Production lines that handle delicate components find Hand Guiding cobots very useful for careful assembly.

Power and Force Limiting cobots are among the most worker-friendly machines. They can sense unnatural forces in their path, such as humans or similar objects. Their joints are programmed to stop all movement at such encounters, and even reverse the movement.
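The Speed and Separation Monitoring behavior described above can be sketched as a simple speed map: full speed when no one is near, a proportional slowdown inside an outer zone, and a protective stop inside an inner zone. The zone distances and the linear scaling are illustrative assumptions, not values from any safety standard.

```python
STOP_ZONE_M = 0.5     # inside this distance, the cobot halts
SLOW_ZONE_M = 2.0     # inside this distance, it reduces speed
FULL_SPEED = 1.0      # normalized commanded speed

def commanded_speed(person_distance_m):
    """Map the nearest detected person's distance to a speed command."""
    if person_distance_m <= STOP_ZONE_M:
        return 0.0                    # protective stop
    if person_distance_m <= SLOW_ZONE_M:
        # scale speed linearly between the two zone boundaries
        frac = (person_distance_m - STOP_ZONE_M) / (SLOW_ZONE_M - STOP_ZONE_M)
        return FULL_SPEED * frac
    return FULL_SPEED                 # no one nearby: full speed

print(commanded_speed(3.0))    # person far away: full speed
print(commanded_speed(1.25))   # person in the slow zone: reduced speed
print(commanded_speed(0.3))    # person too close: stop
```

A real implementation would feed this from the cobot's vision or proximity sensing at every control cycle, and would resume only after the distance grows past the stop zone again.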

As many skilled welders retire and replacements are rare, the American Welding Society is working with Universal Robots to produce a new welding attachment for the UR+ line of cobots. The robot moves along the start-to-stop path of the desired weld, and welds only the specified stitch areas.

What is an i-Robot?

The level of CO2 in our atmosphere is increasing at an alarming rate, affecting all life on Earth either directly or indirectly. For instance, it is linked to global warming, which reduces the quantity of ice in the polar regions and, as the ice melts, raises sea levels around the world. This has significant consequences for several human activities, such as fishing. It also adversely affects the submarine environment, together with the associated biological sphere. Scientists have long been monitoring the marine environment and studying the status of the seas.

However, the harshness of the marine environment and/or the remoteness of the location preclude many explorations under the sea by vehicles operated from a mother ship. Scientists are of the view that robots could effectively contribute to such challenging explorations. This view has led to the development of Autonomous Underwater Vehicles, or AUVs.

One such AUV is the Semi-Autonomous Underwater Vehicle for Intervention Mission, or SAUVIM, which is expected to address challenging tasks like those above. The specialty of SAUVIM is its capability for autonomous manipulation underwater. As it has no human occupants and no physical link to its controller, SAUVIM can venture into dangerous regions, such as classified areas, or retrieve hazardous objects from deep within the oceans.

This milestone is a technological challenge, as it gives the robotic system the capability to perform intervention tasks involving physical contact with an unstructured environment, but without a human supervisor constantly guiding it.

SAUVIM, being a semi-autonomous vehicle, integrates electronic circuitry capable of withstanding the enormous pressure deep ocean waters generate. In general, it can operate in the harsh environmental conditions—low temperatures of the deep oceans—in a reliable and safe manner. Ensuring the effectiveness of such robots requires a high level of design and accurate choice of components.

As SAUVIM operates semi-autonomously, it needs substantial energy autonomy. For this, Steatite, of Worcestershire, UK, has introduced a new solution in the form of long-life batteries capable of operating in the submarine environment. These Lithium-Sulfur (Li-S) battery packs, the result of the first phase of a 24-month project, improve the endurance and speed of autonomous underwater vehicles when deep diving.

The primary advantage that Li-S batteries offer is enhanced energy storage capability, providing improvements in operational duration despite being constructed from low-cost materials.

The National Oceanography Centre in Southampton, UK, completed the first phase of the Li-S battery project after repeatedly testing the cells at the pressures and temperatures prevailing at undersea depths of 6 km. According to the tests, Li-S cells deliver performance similar to that at ambient conditions, while their effective Neutral Buoyancy Energy Density, or NBED, is almost double that offered by the Li-ion cells used as reference. Life tests performed on a number of Li-S cells demonstrate they can reach over 60 cycles with slow discharge, and 80 cycles with fast discharge.

The energy within an AUV is limited, which also limits its endurance. To conserve the available energy, AUV speeds are usually kept low, at 2-4 knots. To expand this operational envelope, it is necessary to increase the energy available within the vehicle, and the Li-S batteries do just that, increasing the vehicle's range and speed.
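A back-of-envelope estimate shows why energy capacity dominates range. All numbers below are illustrative assumptions (not Steatite's or NOC's figures): a constant "hotel" load for electronics, and propulsion power that grows roughly with the cube of speed.

```python
def range_km(battery_wh, hotel_w, prop_w_at_2kn, speed_knots):
    """Crude AUV range estimate under assumed loads."""
    prop_w = prop_w_at_2kn * (speed_knots / 2.0) ** 3  # cubic drag scaling
    hours = battery_wh / (hotel_w + prop_w)            # mission endurance
    return hours * speed_knots * 1.852                 # 1 knot = 1.852 km/h

# Hypothetical reference pack vs. a pack with double the usable energy
li_ion = range_km(battery_wh=2000, hotel_w=20, prop_w_at_2kn=30, speed_knots=2)
li_s   = range_km(battery_wh=4000, hotel_w=20, prop_w_at_2kn=30, speed_knots=2)
print(round(li_ion), round(li_s))   # doubling usable energy doubles range
```

At a fixed cruise speed, doubling the usable energy (as a doubled NBED roughly allows for the same buoyancy budget) doubles the range; the cubic term also shows why a small speed increase costs so much endurance.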

Computer Vision & Robotics in Farming

Robots are helping several industries ease labor concerns. This is increasingly so in today's industrial environment, where the workforce is aging and output is decreasing for lack of efficiency. This includes agricultural fields in both the US and Europe, where the introduction of robots into fieldwork is helping to reduce current labor concerns. New farming standards, brought on by constraints on natural and chemical resources, are increasing the need for precision work, and robots and new technology are helping to meet it.

According to Dan Harburg, the design of traditional robots allowed them to perform only specific tasks repeatedly. Dan is with Anterra Capital, an agriculture technology venture capital firm from Amsterdam. According to Dan, robots for agricultural applications must be more flexible than those in automotive manufacturing plants are, as the former need to deal with the natural variation in outdoor environment and food products.

Accordingly, Anterra Capital considers investing in three major category areas in agriculture related to seeding and weeding, harvesting, and environmental control. Dan envisages each of these major categories would benefit from the introduction of advanced technology and robotics.

For instance, farmers get a two-fold benefit from spraying and weeding robots. First, these robots eliminate mundane tasks and the labor they require. Second, by precisely targeting crops, the robots bring down the quantity of pesticides that needs to be sprayed. This allows farmers to save on labor costs and produce safer, healthier crops at the same time. For instance, see-and-spray robots from Blue River reduce agrochemical use by about 90%, as they use computer vision to target weeds.

Different technology companies have developed spraying robots. One of them is Blue River Technology, a farm robotics start-up specializing in spraying and weeding robots. According to Blue River, its tractors operate at 6-8 mph, covering 8-12 rows of crops simultaneously. Advanced vision systems on the tractors enable them to differentiate weeds from crops, allowing direct targeting as they pass over them.
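To give a flavor of how a vision system separates vegetation from soil, here is a minimal sketch using the classic "excess green" index (ExG = 2G - R - B). This is only a first step; real crop/weed discrimination, like Blue River's, relies on far richer models (shape, texture, learned classifiers). The image below is synthetic, and the threshold is an assumption.

```python
import numpy as np

def vegetation_mask(rgb, threshold=0.1):
    """rgb: HxWx3 float array in [0, 1]. Returns a boolean mask of
    pixels whose excess-green index exceeds the threshold."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    exg = 2.0 * g - r - b          # excess-green index per pixel
    return exg > threshold

# Synthetic 2x2 image: one green "plant" pixel, three brown "soil" pixels
img = np.array([[[0.2, 0.8, 0.1], [0.5, 0.4, 0.3]],
                [[0.5, 0.4, 0.3], [0.5, 0.4, 0.3]]])
mask = vegetation_mask(img)
print(mask.sum())   # 1 pixel flagged as vegetation
```

A sprayer would then aim its nozzles only at the flagged regions, which is how targeting cuts agrochemical use so sharply.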

Autonomous robots from Naio Technologies use laser and camera guidance systems to navigate between rows of fruits and vegetables autonomously, identifying different types of weeds. Oz, the robot from Naio, runs on four electric motors, working continuously for three hours before needing a battery recharge. Needing no human supervision, Oz follows the rows of crops on the plot autonomously, removing all weeds.

PlantTape, from Spain, offers a plant-transplanting robot that can plant seeds. The robot fully integrates the system of sowing, germination, and nursery care. This brings much higher efficiency as compared to that from conventional transplanting methods. The robot creates trays of tape for holding soil and seeds, with each tray holding nearly 900 plants. While pulling the tape from the tray, the automated robot tractor cuts the tape around each plant as it places the plant accurately in the soil. Farmers use PlantTape robots for planting tomatoes, onions, celery, cauliflowers, broccoli, and lettuces.

Although automation in harvesting crops is common, variation in the size, height, and color of plants compounds the problem. Delicate picking also calls for a light touch and gentle pressure. Vacuum-suction robotic arms help in this area.

What is a Hygrobot?

In the future, tiny robots such as the hygrobot will be able to do without batteries and electricity for power. Moisture will power these tiny robots, which wriggle like a worm or a snake.

Hygrobots actually inch forward by absorbing humidity from their surrounding environment. Created by researchers at the Seoul National University, South Korea, these tiny robots can twist, wriggle forwards and back, and crawl just as snakes or worms do. The researchers envisage these hygrobots being useful for a variety of applications in the future, which could include delivering drugs within the human body.

According to the researchers, they received the inspiration for hygrobots from observing plants, and they have described their findings in the journal Science Robotics. Using hydroexpansion, plants change their shape and size when they absorb water from the air or ground. For instance, pinecones know when to close and when to open, depending on whether the air is wet or dry, and this helps them to disperse seeds more effectively. Even earlier, plants have provided inspiration for robots—researchers created robots in imitation of algae.

Although hygrobots are not made of plant cellulose, they mimic the mechanism the plants use. As moisture is available almost everywhere, using it as a source of power for operating robots makes sense. Unlike batteries, moisture is non-toxic, and does not have the tendency to explode. This is an important consideration, as microbots, for instance the spermbot, are usually required to operate within the human body.

One can visualize the motion of hygrobots by observing the seed bristles of Pelargonium carnosum, a shrub-like plant found in Africa. The hygrobot mimics the motion of the bristles, as it has two layers made of nanofibers: one layer absorbs moisture, while the other does not.

Placing the bot on a wet surface causes the humidity-absorbing layer to swell up, making the bot move up and away from the wet surface. This allows the layer to lose moisture and dry up, and the bot comes back down—the cycle repeating itself—allowing the bot to move. The researchers demonstrated a hygrobot coated with antibodies crawling across a bacteria-filled culture plate. It could sterilize the entire plate without requiring any artificial power source.

This is how the researchers imagine bots of the future delivering drugs within the human body, propelling themselves using only the moisture of the skin. Beyond responding only to water vapor, the researchers say they could equip the bots with sensors that respond to other gases as well.

However, this is not the first instance of scientists working with tiny robots. Last year, researchers created a hydrogel bot for biomedical applications that a magnet could activate. It was able to release localized chemo doses for treating tumors.

Not only medical but also military and industrial applications will benefit from light, agile microbots that do not require additional power inputs to operate. The hygrobot, a biologically inspired bilayer structure harvesting energy from environmental humidity, uses ratchets to move forward. Its hygroscopically responsive film quickly swells and shrinks lengthwise in response to changes in humidity.

Raspberry Pi Helps a Hexapod Robot Walk

Roland Pelayo has used the famous Raspberry Pi (RBPi) single board computer to help a hexapod robot learn to walk. The RBPi allows the robot to run in an autonomous mode, in which it walks without assistance, avoiding obstacles. Alternately, it can operate in a manual mode, whereby a user with a smartphone controls the robot. Most interestingly, the hexapod walker follows the tripod gait, just as most six-legged insects do.

Roland prefers servomotors to control the gait of the hexapod robot. According to Roland, using three servomotors to control the movement of the robot's six legs strikes a balance between performance and price. He added another servomotor for moving the robot's eyes.

The servomotors allow the robot to move in four directions: forward, backward, left turn, and right turn. The robot moves by tilting itself to the left or the right, and then moving the legs lifted by the tilt. Roland has drawn diagrams explaining the movements. The backward and turn-left movements are essentially the reverse of the forward and turn-right movements, respectively.
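The tilt-then-swing gait can be sketched as an ordered list of servo actions, with backward obtained by reversing the leg swings. The servo names, the action labels, and which tilt lifts which leg pair are assumptions for illustration; Roland's diagrams define the actual sequence.

```python
# One forward gait cycle for the three-servo walker (hypothetical labels)
FORWARD = [
    ("tilt",  "lean right"),     # assumed to lift the left legs clear
    ("left",  "swing forward"),
    ("tilt",  "lean left"),      # assumed to lift the right legs
    ("right", "swing forward"),
    ("tilt",  "center"),         # settle; both leg pairs push back together
]

def reverse_gait(gait):
    """Backward gait: same tilts, opposite leg swings."""
    swap = {"swing forward": "swing backward",
            "swing backward": "swing forward"}
    return [(servo, swap.get(action, action)) for servo, action in gait]

BACKWARD = reverse_gait(FORWARD)
print(BACKWARD[1])   # ('left', 'swing backward')
```

Turning works the same way, with only one side's legs swinging per tilt, which is why three servos suffice for four directions.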

Therefore, the front and corresponding back legs of the robot are interconnected to two servomotors, one to the left pair and the other to the right. The third servomotor helps to tilt the robot.

The RBPi allows the hexapod walker to operate in two modes. The first is the autonomous mode, which allows the robot to roam around freely, avoiding obstacles in its path. For instance, if it detects an obstacle in front, the robot walker takes two steps backwards, turns right, and then moves forward again. The second mode allows the user to control the movements of the hexapod robot using a smartphone on the same network as the robot.

Roland designed the program to let the RBPi control four servos simultaneously while reading inputs from a sensor that detects obstacles. The RBPi also connects to a network for remote wireless control. Using an RBPi made the project simpler for Roland, as the RBPi readily supports wireless connectivity.

Roland uses three Tower Pro SG-5010 servomotors, two for moving the legs and the third for tilting the hexapod walker. A fourth micro servo motor, a Tower Pro SG-90, helps to move the head and the eyes. An RBPi2 fitted with a USB Wi-Fi dongle helps to control the four servomotors. While the RBPi runs on a small power bank, the servomotors have their own separate power source. An ultrasonic sensor, HC-SR04, performs the obstacle detection.

As the echo pulse produced by the ultrasonic sensor can exceed 3.3 V, Roland placed a voltage divider between the sensor and the RBPi, as the RBPi's GPIO pins cannot accept voltages above 3.3 V.
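The divider arithmetic is worth a quick check. The HC-SR04's echo pin outputs 5 V logic; a two-resistor divider scales it down, with the output taken across the lower resistor. The resistor values below are common choices, not necessarily the ones Roland used.

```python
def divider_out(v_in, r1, r2):
    """Voltage divider: r1 sits between the echo pin and the GPIO tap,
    r2 between the tap and ground; output is taken across r2."""
    return v_in * r2 / (r1 + r2)

v = divider_out(5.0, r1=1000, r2=2000)   # 1 kOhm on top, 2 kOhm below
print(round(v, 2))   # 3.33 -- safely at the RBPi's 3.3 V logic level
```

Any pair with the same 1:2 ratio gives the same 3.33 V; larger values draw less current from the sensor, at the cost of a slower edge into the GPIO's input capacitance.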

As Python is already installed on the RBPi, Roland used it to write the program for the Hexapod walker. However, he also needed an extra library called pigpio, mainly for controlling the servomotors. He used SSH to access the RBPi remotely and installed the extra library.
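With pigpio installed, driving a hobby servo is a short exercise. The sketch below maps an angle onto the standard 500-2500 microsecond servo pulse and hands it to pigpio's real `set_servo_pulsewidth` call; the GPIO pin number and 0-180 degree range are assumptions, and the hardware section only runs if the pigpio daemon is reachable.

```python
try:
    import pigpio        # needs "pip install pigpio" plus the pigpiod daemon
except ImportError:      # lets the pulse mapping below be tested off-board
    pigpio = None

def angle_to_pulse(angle_deg):
    """Map 0-180 degrees onto the standard 500-2500 microsecond pulse."""
    return 500 + (angle_deg / 180.0) * 2000

def move_servo(pi, gpio, angle_deg):
    """Command one servo via the pigpio daemon."""
    pi.set_servo_pulsewidth(gpio, angle_to_pulse(angle_deg))

if __name__ == "__main__" and pigpio is not None:
    pi = pigpio.pi()               # connect to the local pigpio daemon
    if pi.connected:
        move_servo(pi, 18, 90)     # center a servo wired to GPIO 18
        pi.stop()
```

Passing a pulse width of 0 to `set_servo_pulsewidth` switches the servo pulses off, which is the usual way to release a servo when the walker is idle.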

Bio-Inspired Robot Walks with a Rhythm

Walking robots are not new, as robotic engineers have long been fascinated by the movements of human walking and have tried to incorporate them into their robots. As a result, we have had several walking robots, starting with WABOT-1, the first anthropomorphic robot, demonstrated in 1973 by I. Kato and his team at Waseda University, Japan. Almost everyone remembers ASIMO, a humanoid robot introduced in 2000 as an Advanced Step in Innovative Mobility, designed to be a multi-functional mobile assistant.

Where ASIMO moved as if it were scared of falling, the robotic legs developed by researchers from the University of Arizona are the first to walk in a biologically accurate manner. The legs are based on a bio-inspired combination of a musculoskeletal architecture, a neural architecture, and sensory feedback.

The human-like gait of the robotic legs rests on three elements. First, the musculoskeletal system of the robot is very similar to ours, with artificial tendons and muscles, made from Kevlar straps and servomotors, driving the movements. Second, a variety of sensors on the robot provide continuous feedback on hip position, limb loading, muscle stretch, foot pressure, and ground contact, all necessary to dynamically adjust its gait. Third, a Central Pattern Generator (CPG) controls the movement of the robot at a relatively high level, mimicking the cluster of nerves that serves the same purpose in the human spinal cord.

When we humans walk, we do so almost without thinking about it, because the nerves within our spinal cord handle it for us: they collect sensory feedback and use it to adjust the rhythm of our walking. The CPG works the same way for the robot. Just as a baby learns to walk, the CPG starts by creating the simplest walking pattern, relying on just two neurons firing alternately.

Babies exhibit this simple walking pattern when placed on a treadmill, even before they have learnt to walk on their own. Once the robot masters this initial simplistic gait, feedback from other sensors provides additional inputs, forming a complex network that allows the robot to produce a variety of gaits.
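The alternating two-neuron pattern can be illustrated with a toy half-center oscillator: each "neuron" fires while active, fatigues, and hands over to its mutually inhibiting partner. This is a didactic caricature of a CPG, not the researchers' actual controller; the fatigue limit is an arbitrary assumption.

```python
FATIGUE_LIMIT = 10    # steps a neuron can fire before it must hand over

def cpg(steps):
    """Return which of the two neurons fired at each time step."""
    active = 0            # index of the currently firing neuron (0 or 1)
    fatigue = [0, 0]
    pattern = []
    for _ in range(steps):
        pattern.append(active)
        fatigue[active] += 1                                   # firing tires it
        fatigue[1 - active] = max(0, fatigue[1 - active] - 1)  # resting recovers
        if fatigue[active] >= FATIGUE_LIMIT:                   # exhausted:
            active = 1 - active                                # switch sides
    return pattern

p = cpg(40)
print(p[:12])   # [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
```

Mapping neuron 0 to one leg's flexors and neuron 1 to the other's produces the basic alternating stepping rhythm; sensory feedback then modulates the switch timing to adapt the gait, which is the role the robot's sensor network plays.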

As such, the intention of the research on robotic legs is not to help robots walk better, but rather to understand the neurophysiological process that humans and animals use for walking.

These biped robots have yet to demonstrate truly autonomous, robust walking on uneven and varied terrain, such as humans manage in daily life. However, this class of machines is inspiring the design of simple, efficient biped robotic systems that exhibit natural passive gaits, optimal in some energetic sense and analogous to the comfortable walking gait of humans, the aim being to reduce the consumption of metabolic energy per unit distance to a minimum.

Although researchers have tried to achieve this by adding a minimal set of actuators to a passive system, compensating for the energy lost when the robot is not descending a slope, they have not yet successfully exploited the idea in operational legged robots.