‘Robotics’ and ‘machine vision’ are both time-honored research fields. Robotics courses are generally offered by departments such as mechanical engineering, automation, and systems control engineering, while machine vision courses are offered by information engineering and electrical engineering departments. Through the cooperation of experts from these two fields, robots are given the ability to ‘see’, that is, visual perception. This is why a robot vision system is a technology that relies heavily on integrated engineering. Robot vision is designed to detect people and objects in the environment by calculating their positions in the camera coordinate system, converting those positions into the robotic arm's coordinate system, and then driving the arm's motors and joints to perform a task. This seemingly simple process rests on complicated computer calculations. In this article, we will address the difficulties of integrating robotic arms with machine vision.
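The camera-to-robot conversion mentioned above is, at its core, a homogeneous coordinate transform. The sketch below shows the idea in pure Python; the transform values are invented for illustration and would in practice come from hand-eye calibration:

```python
def transform_point(T, p):
    """Apply a 4x4 homogeneous transform T (row-major) to a 3D point p."""
    x, y, z = p
    v = (x, y, z, 1.0)
    return [sum(T[r][c] * v[c] for c in range(4)) for r in range(3)]

# Hypothetical hand-eye calibration result mapping camera coordinates to
# robot base coordinates: a camera looking straight down (rotated 180 deg
# about X) and mounted 0.4 m forward, 0.6 m above the base (metres).
T_base_cam = [
    [1.0,  0.0,  0.0, 0.4],
    [0.0, -1.0,  0.0, 0.0],
    [0.0,  0.0, -1.0, 0.6],
    [0.0,  0.0,  0.0, 1.0],
]

# A work piece detected at (0.05, -0.02, 0.30) m in the camera frame...
p_cam = (0.05, -0.02, 0.30)
# ...becomes a target in the robot base frame, ready for motion planning.
p_base = transform_point(T_base_cam, p_cam)  # approx. [0.45, 0.02, 0.30]
```

Everything after this conversion (driving the motors and joints to the target) is handled by the arm's motion controller.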
3 Types Of Hand-Eye Relationship
Traditional robotic arm programming makes the arm repeat the same action by moving through a series of taught points. Since these points are fixed teaching points, a large number of jigs is required to fix the work pieces or peripheral processing machinery, resulting in low flexibility. All points must be re-taught if the relationship between the arm and the work area changes due to external force. If machine vision is integrated with robotic arms, the arm's target positions can be corrected on the fly through visual recognition and compensation. This effectively reduces the jigs required and increases the flexibility of handling diverse and multi-posture work pieces.
The spatial relationship between a robotic arm and a camera is also known as the hand-eye relationship, categorized into Eye-in-Hand, Eye-to-Hand, and Upward-Looking. Eye-in-Hand means that the camera is mounted on the end axis of the arm. After the camera takes pictures and performs visual recognition, the arm can be driven to grip the work piece. Eye-to-Hand means that the camera and arm are fixed separately. The advantage of this approach is that the robotic arm can keep moving while the camera is capturing images, resulting in a better cycle time. The disadvantage is that the relative position between the arm base and the camera must remain fixed; if it changes, re-calibration is required. As for the Upward-Looking relationship, also known as secondary positioning, after a work piece gripped by the arm comes into the camera's field of view, the difference between its current posture and the standard posture is computed for further adjustment. Upward-Looking is more accurate in terms of positioning than Eye-in-Hand and Eye-to-Hand.
| Hand-Eye Relationship | Advantages | Disadvantages | Applications |
|---|---|---|---|
| Eye-in-Hand | The camera is fixed on the arm without additional fixtures; the camera moves with the arm, increasing the flexibility of photo points | The arm has to stop while taking pictures; cycle time is longer | AGV equipped with an arm to perform tasks such as handling and inspection |
| Eye-to-Hand | The camera and arm can move separately; cycle time is shorter | The relative position between the robot arm base and the camera must remain fixed; if it changes, re-calibration is required | Pallet blanking; assembly line tracking |
| Upward-Looking | Mostly used for secondary positioning after picking; highly precise | The arm has to stop while taking pictures; cycle time is longer | High-precision assembly, such as panels and mobile phones; barcode reading at the bottom or sides of an object |
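The practical difference between the first two relationships can be sketched with hypothetical transforms. For brevity the example uses translation-only poses; all values are illustrative, in metres:

```python
def mat_mul(A, B):
    """Multiply two 4x4 row-major matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply(T, p):
    """Apply a 4x4 homogeneous transform to a 3D point."""
    v = (p[0], p[1], p[2], 1.0)
    return [sum(T[r][c] * v[c] for c in range(4)) for r in range(3)]

def translation(x, y, z):
    """Identity rotation, translation only (keeps the example readable)."""
    return [[1.0, 0.0, 0.0, x],
            [0.0, 1.0, 0.0, y],
            [0.0, 0.0, 1.0, z],
            [0.0, 0.0, 0.0, 1.0]]

# Eye-in-Hand: the camera rides on the end of the arm, so its pose in
# the base frame changes every time the arm moves.
T_base_flange = translation(0.3, 0.1, 0.5)   # current flange pose, from the controller
T_flange_cam  = translation(0.0, 0.0, 0.08)  # fixed camera offset, from calibration
T_base_cam_in_hand = mat_mul(T_base_flange, T_flange_cam)

# Eye-to-Hand: the camera is bolted to the cell, so one calibrated
# transform is reused for every shot, until the mounting changes and
# re-calibration becomes necessary.
T_base_cam_to_hand = translation(0.9, 0.0, 1.2)

p_cam = (0.02, 0.0, 0.25)
p_in_hand = apply(T_base_cam_in_hand, p_cam)  # depends on the arm's current pose
p_to_hand = apply(T_base_cam_to_hand, p_cam)  # independent of the arm
```

The Upward-Looking case works the same way as Eye-to-Hand, except that the detected offset is applied as a correction to the pose of the gripped work piece rather than to a pick target.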
Difficulties Of Robot Vision Camera Integration
Integrating robot arms with machine vision is not an easy task given the current state of industrial development. If the end customer does not have a certain degree of engineering capability, assistance from a professional system integrator will be necessary. When assessing the feasibility of an automation need, priority is given to precision and cycle time. Sufficient precision ensures the accuracy of each process, while the expected production cycle can be used to evaluate production capacity and calculate the return on investment (ROI). In terms of accuracy, if a target is positioned via vision, many factors affect the overall accuracy, including camera resolution, the positioning algorithm, calibration errors of the hand-eye relationship, calibration errors of the camera lens, the arm's repeatability, its absolute accuracy, and so on. These may only be effectively evaluated by experienced robot vision technicians.
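To make the accuracy discussion concrete: when the error sources are independent, they are often combined into a root-sum-square (RSS) budget. The figures below are invented for illustration, not measured values:

```python
import math

# Hypothetical 1-sigma contributions of each error source, in millimetres.
error_budget_mm = {
    "camera resolution (pixel quantization)": 0.05,
    "positioning algorithm": 0.08,
    "hand-eye calibration residual": 0.10,
    "lens calibration residual": 0.04,
    "arm repeatability": 0.02,
}

# Assuming the sources are independent, the combined positioning error is
# the root-sum-square of the individual terms.
total_mm = math.sqrt(sum(e ** 2 for e in error_budget_mm.values()))
print(f"estimated overall accuracy: {total_mm:.3f} mm")
```

A budget like this makes it clear which source dominates (here the hand-eye calibration residual) and therefore where improvement effort pays off most.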
When choosing a suitable robotic arm, your system integrator will first consider the arm's reach and payload. The reach must cover the required working range, while the payload must accommodate the end effector and the work piece. As for vision solutions, there is a wide range of options. A ‘vision controller’ is often selected to satisfy the demand for multiple cameras and relieve the heavy computing burden. In terms of hardware it is essentially an industrial computer, designed to support two to four industrial cameras, with built-in image identification software that lets users program solutions tailored to the visual identification problems to be solved. Another type of solution is the ‘smart camera’, an embedded computing platform containing a CCD/CMOS sensor, which allows users to select the appropriate lens for their work. This platform also runs vision processing software, with computing performance inferior to vision controllers, and is commonly applied to barcode reading or positioning. Lastly, some system integrators also integrate commercial or free vision function libraries to develop custom software, for better cost-effectiveness or flexibility.
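For a flavor of the positioning primitives such vision libraries provide, here is a minimal pure-Python sketch of template matching via sum of squared differences. Real libraries implement this far more efficiently (OpenCV's `cv2.matchTemplate`, for example), and the image and template values here are made up:

```python
def find_template(image, template):
    """Return the (x, y) offset where template best matches image,
    scored by sum of squared differences (lower is better)."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best_score, best_pos = None, None
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            score = sum((image[y + j][x + i] - template[j][i]) ** 2
                        for j in range(th) for i in range(tw))
            if best_score is None or score < best_score:
                best_score, best_pos = score, (x, y)
    return best_pos

# Toy 5x5 grayscale image with a bright 2x2 work piece at column 2, row 1.
image = [
    [0, 0, 0, 0, 0],
    [0, 0, 9, 9, 0],
    [0, 0, 9, 9, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
]
template = [[9, 9],
            [9, 9]]

print(find_template(image, template))  # -> (2, 1)
```

The pixel offset found this way is what then gets converted, through the calibrated hand-eye transform, into a robot-frame target.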
New Trend: Cobot With Integrated Vision
Traditional arms must be gated by fences, light barriers, and other protective devices to ensure the safety of personnel, which takes up a large working space and results in higher construction costs. In addition to being safe, collaborative robots are also easier to hand-teach and program, lowering the learning barrier for users. Also, some collaborative robot brands, such as Techman Robot, offer cobots with vision already integrated as a standard product. Users only need a single piece of software to program both the arm's movement and the vision process. This greatly cuts the cost originally incurred for integrating the robot with vision and effectively reduces the time needed for system adjustment.
Looking forward, with the improvement of visual sensing technology and the rapid development of artificial intelligence (AI), the image information captured by cameras can be upgraded from 2D to 3D and even RGB-D, containing richer color and geometric information. Improved recognition ability via AI enables cameras to better handle variations in the posture, distance, and shape of objects. In the future, there will be robot vision AI that is more capable of sensing the environment and understanding users. The future is promising and worth the wait.
*Original article appeared on Techman Robot.*