Robots with Eyes and Brain

A huge transformation is taking place in the manufacturing industry: machine vision. It is growing rapidly, and this includes all types of machine vision. The market for 3D machine vision, for instance, is expected to double in size over the coming six years. This technology is already proving to be a vital component of many modern automation solutions.

Several factors contribute to the increasing adoption of this technology in manufacturing. Demand for automation is rising as the industry grapples with labor shortages, while the cost of automation has fallen tremendously: sensors, cameras, robotics, and processing power are now substantially cheaper, enabling wider deployment.

Technological performance has also taken a leap, and machine vision systems can now process substantial amounts of information in a fraction of a second. Finally, machine learning algorithms and advanced artificial intelligence are making the data that machine vision collects far more versatile, allowing manufacturers to realize more of the power of these solutions. Incorporated into automation solutions, machine vision is now producing better outcomes.

A machine vision system is made up of a number of distinct parts. These include the camera, lenses, lighting sources, robotic motion, processing computers, application-specific software, and artificial intelligence algorithms.

The camera forms the eyes of the system, and machine vision can use several types of cameras depending on the application’s needs. An automated solution may combine various cameras in different configurations.

For instance, static cameras are placed in fixed positions. These usually offer a bird’s-eye view of the process, useful in applications where speed is imperative. Dynamic cameras mounted on the end of robotic arms, on the other hand, can come much closer to the process, yielding much higher accuracy and more detailed capture.

Another important aspect of the vision system is its computing power, the brain that helps the eyes (the cameras) do their work. These computation resources, coupled with machine learning algorithms, should not be confused with traditional rule-based machine vision applications. Companies offering machine vision capability also offer software libraries for implementation.

While some manufacturers design their systems specifically for application users, others target them toward software programmers. Ultimately, it is the software that gives a machine vision system the advanced capabilities that have a dramatic impact for manufacturers. Programs are available both to control tasks and to provide feedback from the line with valuable insights.

Machine vision-guided systems are gaining steam as a way to replicate basic human capabilities. For instance, machine vision on assembly lines enables an increasing range of processes and applications.

Typical applications of machine vision include assembly processes for power tools, medical equipment, home appliances, and industrial assembly lines. Most assembly steps in the fabrication of electronic equipment can benefit from the use of machine vision, as it offers a substantial increase in the level of precision achieved.

For instance, machine vision improves the inspection of tiny surface-mount components placed on printed circuit boards before they go for soldering. It improves line throughput while never succumbing to fatigue as a human inspector would.
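As a rough illustration of how such a placement check might look, here is a minimal sketch using OpenCV template matching to verify that one component sits where the pick-and-place machine put it. The file names, expected position, and score threshold are invented for illustration; production automated optical inspection is far more sophisticated.

    # Minimal placement check: match a golden image of the component
    # against the board under test and verify position and match score.
    import cv2

    board = cv2.imread("board_under_test.png", cv2.IMREAD_GRAYSCALE)
    golden = cv2.imread("component_template.png", cv2.IMREAD_GRAYSCALE)

    scores = cv2.matchTemplate(board, golden, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, best_xy = cv2.minMaxLoc(scores)

    # Flag the placement if the best match is weak or in the wrong spot.
    expected_xy, tolerance_px, threshold = (412, 230), 5, 0.8
    dx = abs(best_xy[0] - expected_xy[0])
    dy = abs(best_xy[1] - expected_xy[1])
    placement_ok = best_score >= threshold and dx <= tolerance_px and dy <= tolerance_px
    print("PASS" if placement_ok else "FAIL", best_score, best_xy)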

Magnetic Position Sensing in Robots

Robots often operate both autonomously and alongside humans, greatly benefiting the industrial and manufacturing sectors with their accuracy, efficiency, and convenience. Monitoring motor positions at all times makes it possible not only to maintain system control but also to prevent unintentional motion, which can cause system damage or bodily harm.

Such motor-position monitoring can be implemented with contactless angle encoding, which requires a magnet mounted on the motor shaft to provide the input for a magnetic encoder. Because dirt and grime do not influence the magnetic field, integrating such an arrangement onto the motor yields a compact solution. The encoder tracking the rotating magnet produces two sinusoidal components 90 degrees out of phase, and the relationship between them allows quick calculation of the angular position.
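As a minimal sketch of that calculation (variable names are mine, and a real encoder channel needs offset and amplitude correction first), the angle follows from the two-argument arctangent of the quadrature pair:

    import math

    def shaft_angle_deg(sin_v, cos_v):
        """Magnet angle in degrees (0-360) from the two sinusoidal,
        90-degree out-of-phase encoder outputs."""
        return math.degrees(math.atan2(sin_v, cos_v)) % 360.0

    print(shaft_angle_deg(0.0, 1.0))   # 0 degrees
    print(shaft_angle_deg(1.0, 0.0))   # 90 degrees
    print(shaft_angle_deg(0.0, -1.0))  # 180 degrees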

Many magnetic encoding technologies can deliver this end effect as the magnet rotates on the motor shaft. For instance, Hall-effect and magnetoresistance sensors can detect the changing magnetic field. 3D linear Hall-effect sensors can calculate angular positions while also compensating for temperature drift, device sensitivity, offset, and unbalanced input magnitudes.

Apart from signal-chain errors, mechanical tolerances affect the rotation of the magnet and hence the quality of the magnetic field detection. A final calibration process, either harmonic approximation or multipoint linearization, is necessary to achieve optimal performance. Calibrated against these mechanical error sources, magnetic encoding can achieve high accuracy.
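A multipoint linearization can be pictured as a lookup table of corrections interpolated between calibration points. The sketch below assumes hypothetical calibration data captured against a trusted angular reference, such as a precision rotary table:

    import numpy as np

    # Hypothetical calibration data: angles the encoder reported at
    # known reference positions (values invented for illustration).
    measured_deg  = np.array([0.0, 44.2, 91.1, 134.6, 180.9, 225.4, 269.8, 315.3, 360.0])
    reference_deg = np.array([0.0, 45.0, 90.0, 135.0, 180.0, 225.0, 270.0, 315.0, 360.0])

    def linearize(raw_deg):
        """Correct a raw encoder reading by piecewise-linear interpolation
        between the calibration points (multipoint linearization)."""
        return float(np.interp(raw_deg, measured_deg, reference_deg))

    print(linearize(91.1))  # 90.0, the calibrated angle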

The driving motor may connect directly to the load, through a gearbox that increases the applied torque, through a rack and pinion, or through a belt or screw drive that transfers energy elsewhere. As the motor shaft spins, it transfers kinetic energy that changes a mechanical position somewhere in the system. In each case, the angle of the motor shaft correlates directly with the position of the system’s moving parts. When the turns ratio differs from one, it is also necessary to track the number of motor rotations.
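A sketch of that bookkeeping (function and parameter names are illustrative) when the motor drives the load through a reduction:

    def load_position_deg(turn_count, shaft_angle_deg, gear_ratio):
        """Load position in degrees, given the number of whole motor
        revolutions tracked since the reference position, the current
        encoder angle, and the motor-turns-per-load-turn gear ratio."""
        motor_total_deg = turn_count * 360.0 + shaft_angle_deg
        return motor_total_deg / gear_ratio

    # A 10:1 gearbox: 5 full motor turns plus 90 degrees on the shaft
    # move the load through 189 degrees.
    print(load_position_deg(5, 90.0, 10.0))  # 189.0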

Sensorless motor controls and stepper motors do not offer absolute position feedback. Rather, they estimate the position from the relative change since the starting position. After a loss of power, the actual motor position must be determined by other means.
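The difference can be made concrete with a toy dead-reckoning counter (the step resolution is an assumption): the estimate is only ever relative to wherever the motor happened to be at power-up, so it evaporates when power is lost.

    STEPS_PER_REV = 200   # a common full-step resolution, assumed here

    position_steps = 0    # always restarts at 0 after a power cycle

    def step(direction):
        """Advance the dead-reckoned position by one step (+1 or -1)
        and return the estimated shaft angle in degrees."""
        global position_steps
        position_steps += direction
        return (position_steps % STEPS_PER_REV) * 360.0 / STEPS_PER_REV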

Although optical encoders offer the highest positional accuracy, they often require bulky enclosures to protect the aperture and sensor from contaminants like dirt and dust. Their mechanical elements must also be coupled to the motor shaft, and if the rotational speed exceeds the encoder’s mechanical rating, the result can be irreparable damage.

No mechanical coupling is necessary in the case of magnetically sensed technologies like magnetoresistive and Hall-effect sensors, as they use a magnet mounted on the motor shaft. The permanent magnet has a magnetic field that permeates the surrounding area, allowing a wide range of freedom for placing the sensor.

How Good are Cobots at Welding?

The manufacturing industry has been using robots widely for several years as a replacement for human labor. A recent advance in this field is the cobot, or collaborative robot, so called because its design lets it work alongside an individual as part of a team rather than replacing the humans.

Cobots are good at operations and activities that cannot be fully automated. With conventional robots locked away in cages, workers must ferry parts backwards and forwards between themselves on the assembly line, and the process speed does not improve.

Manufacturers such as Ford are already on the cobot bandwagon, and the new robots could transform the way the industry works. The Ford factory has collaborative robots installing shock absorbers on vehicles on the production line alongside humans. The cobots work with accuracy and precision, boosting human productivity while saving valuable time and money.

At present, the industry uses four main types of cobots. They are the Safety Monitored Stop, Speed and Separation Monitoring, Hand Guiding, and Power and Force Limiting.

The Safety Monitored Stop is a collaborative feature used when the cobot works mostly on its own but sometimes needs assistance from an operator. For instance, in an automated assembly process, the worker may need to step in and perform an operation on a part the cobot is holding. As soon as the cobot senses a human within its workspace, it ceases all motion until the worker leaves the predetermined safety zone, and it resumes its activities only after receiving a signal from its operator.

Speed and Separation Monitoring is similar to the Safety Monitored Stop, with the cobot operating in a predetermined safety zone, but its reaction to humans is different. The cobot does not automatically stop at a human presence; instead, it slows down until its vision detection system informs it of the location of the person or object. It stops only if the person is within a predetermined area, and waits for the separation to increase before resuming its operations. This type of cobot is useful in areas where several workers are present, as it requires far fewer human interventions.
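Stripped to its core idea, the behavior is a speed-scaling function of the measured separation distance. The zone radii below are invented for illustration; real systems derive them from safety standards such as ISO/TS 15066.

    def speed_scale(separation_m, stop_zone_m=0.5, slow_zone_m=2.0):
        """Speed-and-separation monitoring in miniature: full speed when
        nobody is near, proportionally slower inside the slow zone, and
        a full stop inside the stop zone."""
        if separation_m <= stop_zone_m:
            return 0.0    # stop and wait for the separation to grow
        if separation_m >= slow_zone_m:
            return 1.0    # full programmed speed
        # Linear ramp between the two zone boundaries.
        return (separation_m - stop_zone_m) / (slow_zone_m - stop_zone_m)

    print(speed_scale(3.0))   # 1.0, nobody nearby
    print(speed_scale(1.25))  # 0.5, worker in the slow zone
    print(speed_scale(0.3))   # 0.0, worker too close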

Although a Hand Guiding cobot works just as a regular industrial robot does, it has additional pressure sensors on its arm. The operator can therefore teach the cobot to grip an object just firmly enough, and move it just fast enough, to work with it securely without causing damage. Production lines that handle delicate components find Hand Guiding cobots very useful for careful assembly.

Power and Force Limiting cobots are among the most worker-friendly machines. They can sense abnormal forces in their path, such as contact with a person or an unexpected object. Their joints are programmed to stop all movement at such an encounter, and even to reverse it.

As many skilled welders retire and replacements are rare, the American Welding Society is working with Universal Robots to produce a new attachment with welding capabilities for their UR+ line of cobots. The robot moves along the start-to-stop path of the desired weld and welds only the specified stitch areas.

Farming With Drones & Robots

According to Heidi Johnson, crops and soil agent for Dane County, Wisconsin, “Farmers are the ultimate ‘innovative tinkerers’.” Farming, through the ages, has undergone vast changes. Although in the developing world you will still find the stereotypical farmer planting his seeds and praying for rain and good weather while waiting for his crops to grow, farm technology has progressed. We now have twenty-four-hour farming, and driverless combines and autonomous tractors have moved out of agro-science fiction. Farmers are good at developing things close to what they need.

For example, the Farm Tech Days Show has farmers discussing technology ranging from the latest sensors to cloud processing for optimizing yield, and robotics that can take over manual tasks. Most farmers are already familiar with data analytics, cloud services, molecular science, robotics, drones, and climate change, among other technological jargon. The latest buzz in the agricultural sector is about managing farms that are not a single field but are distributed over multiple small units. This requires advanced mapping and GPS for tailoring daily activities, such as the amount of water and fertilizer each plant needs.

That naturally leads to observing, measuring, and responding in real time, because such precision farming depends on a technological backbone, with data at the crux of responding to what is actually happening in the field. A farmer always wants to know when his plants are suffering, and why.

For example, farmers want sensors that can report soil nutrient levels at a granular level: potassium, phosphorus, nitrogen, and so on. They also want to know how fast the plants are taking up those nutrients, the flow rate. This information must come in real time from sensors, and diagnostic tools must make sense of the data.
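As a toy sketch of the diagnostic side (the sensor interface and units are hypothetical), an uptake or depletion rate is just the slope of the concentration readings over time:

    def uptake_rate(samples):
        """samples: list of (time_s, concentration_ppm) tuples, oldest
        first. Returns the average depletion rate in ppm per hour."""
        (t0, c0), (t1, c1) = samples[0], samples[-1]
        return (c0 - c1) / ((t1 - t0) / 3600.0)

    # One hour of hypothetical soil-nitrate readings, sampled every 30 min.
    readings = [(0, 42.0), (1800, 41.4), (3600, 40.9)]
    print(uptake_rate(readings))  # about 1.1 ppm per hour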

Although NIFA, the National Institute of Food and Agriculture, has been talking about the Internet of Ag Things, the concept is not new to farmers. In fact, farmers are already collecting information from both air and ground: flying drones, inserting moisture sensors into the ground, and mounting crop sensors on machines that spray and apply fertilizers.

Presently, what farmers lack is a cost-effective, adequate broadband connection. Although Internet connectivity exists even in remote areas, thanks to satellite links, it is not cost-effective for farmers, who have to deal with increasing amounts of data flow.

The current method is to collect data from the field on an SD card or thumb drive and plug it into a home computer, then transfer the data for analysis to services where crop consultants or cooperative experts are available. The entire turnaround takes a few days.

What farmers need is end-node farming equipment with the necessary computing power onboard. This could process and edit the raw data and send only the relevant part directly to a cloud service, as an automated, real-time operation. With farms getting bigger, farmers need to cover much more acreage while dealing with labor shortages and boosting yields.
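A sketch of what that edge-side processing might look like (the field names, threshold, and endpoint URL are all placeholders): read the raw data locally and forward only the readings that need attention.

    import json
    import urllib.request

    MOISTURE_LOW = 18.0   # percent volumetric water content (assumed)

    def filter_and_upload(readings, endpoint="https://example.com/ingest"):
        """readings: list of dicts like {"sensor": "A3", "moisture": 16.2}.
        Uploads only the readings that fall below the moisture threshold,
        instead of streaming every raw sample to the cloud."""
        flagged = [r for r in readings if r["moisture"] < MOISTURE_LOW]
        if flagged:
            body = json.dumps(flagged).encode()
            req = urllib.request.Request(
                endpoint, data=body,
                headers={"Content-Type": "application/json"})
            urllib.request.urlopen(req)
        return flagged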

The GoPiGo Robot Kit for the Raspberry Pi

Making a robot work with the tiny Raspberry Pi (RBPi) computer has never been so easy. If you use the GoPiGo robot kit for the RBPi, all you need is a small Phillips-head screwdriver. The GoPiGo kit comes in a box containing a battery box for eight or six AA batteries, two bags of hardware, two bags of acrylic parts, two motors, the GoPiGo board, and a pair of wheels. To assemble all this into a working robot, follow these step-by-step instructions.

Start with the biggest acrylic part in the kit, the body plate or chassis of the GoPiGo. Lay the plate on the GoPiGo circuit board and align its two holes with those on the circuit board. Place two short hex spacers in the holes below the body plate to mark which side faces up.

Next, attach the motors to the chassis using the four acrylic Ts in the kit. Do not overtighten the bolts while attaching the motors, as this may crack the acrylic.

With the motors in place, it is time to attach the two encoders, one for each motor. The encoders fit on the inside of the motors and poke through the acrylic chassis of the GoPiGo. Encoders are an important part, providing feedback on the speed and direction of rotation of each motor. If the encoders will not stay on, use blue sticky tack (poster putty) to hold them in place.

Now attach the GoPiGo board to the chassis. Place the GoPiGo board on the spacers and align its holes with those in the chassis before fastening them together with screws. Two hex supports on the back of the GoPiGo board let you attach the castor wheel.

That brings us to attaching the wheels to the GoPiGo. Do this gently, backing the wheels off slightly so they do not touch or rub against the screws. The battery box comes next, placed as far back on the chassis as possible. This gives it extra space and prevents the box from hitting the SD card on the RBPi.

This completes the mechanical assembly of the GoPiGo robot; only the RBPi remains to be attached. Locate the black plastic female connector on the GoPiGo and slide the GPIO pins of the RBPi into it. The RBPi is protected by a protective plate, or canopy, that screws onto the chassis.

Make the electrical connections according to the instructions. Be careful while flashing the GoPiGo firmware, and leave the motors disconnected during flashing. After connecting the GoPiGo for the first time, if you find a motor running backwards, simply reverse its connector.

The GoPiGo comes with an ATmega328 microcontroller operating on 7-12 V DC. SN754410 ICs handle motor control, and two optical encoders provide 18 pulse counts per rotation on wheels 65 mm in diameter. External interfaces include one port each of I2C, serial, analog, and digital/PWM. Idle current consumption is about 3-500 mA, and full-load current is 800 mA to 2 A with both motors, the servo, and the camera running alongside the RBPi Model B+.
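From those last two figures, a short worked example shows how the encoders translate pulse counts into distance traveled:

    import math

    WHEEL_DIAMETER_MM = 65.0    # wheel diameter from the spec above
    PULSES_PER_ROTATION = 18    # encoder pulse counts per rotation

    # Each pulse corresponds to 1/18 of the wheel circumference.
    MM_PER_PULSE = math.pi * WHEEL_DIAMETER_MM / PULSES_PER_ROTATION

    def distance_mm(pulse_count):
        """Distance traveled in millimeters for a given pulse count."""
        return pulse_count * MM_PER_PULSE

    print(round(MM_PER_PULSE, 2))   # about 11.34 mm per pulse
    print(round(distance_mm(180)))  # 10 wheel rotations, about 2042 mm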