
Machine Vision Fundamentals: How to Make Robots ‘See’

Machine vision combines a range of technologies to produce useful outputs from the acquisition and analysis of images. Used primarily for inspection and robot guidance, the process must be reliable enough for industrial automation. This article provides an introduction to how today’s machine vision technology guides basic robotic functions.

Figure 1. Gray scale image.
Let’s go through a simple example of what happens during robot guidance. Take, for example, a stationary mounted camera, a planar work surface, and a screwdriver that must be grasped by the robot. The screwdriver may be lying flat on that surface and mixed amongst, but not covered by, other items. The key steps executed during each cycle include:

  1. Acquire a suitable image.
  2. “Find” the object of interest (the overall screwdriver, or the piece of it that must be grabbed).
  3. Determine the object’s position and orientation.
  4. Translate this location to the robot’s coordinate system.
  5. Send the information to the robot.
  6. Using that information, the robot can then move to the proper position and orientation to grasp the object in a prescribed way.

While the machine vision portion (steps #1 through #5) may appear lengthy when explained, the entire sequence is usually executed within a few hundredths of a second.
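The coordinate translation at the heart of steps #3 through #5 can be sketched in a few lines. This is a minimal illustration, not a vendor API: the function names, the 2D pose representation, and the fixed scale-and-offset calibration are all assumptions (a real system uses a full calibrated transform).

```python
# Minimal sketch of steps 3-5: convert a found object's image-space
# location into the robot's coordinate frame. All names and numbers
# here are illustrative assumptions, not a specific vendor API.
from dataclasses import dataclass
import math

@dataclass
class Pose2D:
    x: float       # position in robot coordinates (mm)
    y: float
    theta: float   # orientation in radians

def camera_to_robot(px: float, py: float, angle: float) -> Pose2D:
    """Translate an image location into the robot's frame.

    A real calibration yields a full transform; here we assume a
    fixed scale (mm per pixel) and offset purely for illustration.
    """
    scale = 0.5                        # mm per pixel (assumed)
    offset_x, offset_y = 100.0, 250.0  # camera origin in robot frame (assumed)
    return Pose2D(offset_x + px * scale, offset_y + py * scale, angle)

# The vision system found the screwdriver at pixel (200, 80),
# rotated 30 degrees; convert and hand the pose to the robot.
pose = camera_to_robot(200, 80, math.radians(30))
print(round(pose.x, 1), round(pose.y, 1))
```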

#1 Acquire a suitable image: Several machine vision tools are described below. Each of these software components operates on an image and requires differentiation to “see” an object. This differentiation may be light vs. dark, color contrast, height (in 3D imaging), or transitions at edges. Note: It’s important to confirm or design the geometry of camera, lighting, and part so that the lighting creates reliable differentiation.
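The simplest form of light-vs.-dark differentiation is thresholding: pixels brighter than a cutoff belong to the object, the rest to the background. The tiny image and threshold value below are made up for illustration; reliable lighting is what makes a fixed threshold like this workable in practice.

```python
# Sketch of light-vs-dark differentiation: threshold a tiny grayscale
# image so bright object pixels separate from the dark background.
# Pixel values and the threshold are synthetic, for illustration only.
image = [
    [10,  12,  11, 10],
    [10, 200, 210, 11],
    [12, 205, 198, 10],
    [11,  10,  12, 11],
]

THRESHOLD = 128  # reliable lighting keeps object pixels well above this

binary = [[1 if px > THRESHOLD else 0 for px in row] for row in image]
object_pixels = sum(map(sum, binary))
print(object_pixels)  # count of pixels classified as "object"
```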

The choices of imaging method vary fundamentally. The most common are gray scale and color versions of area scan imaging, which simply means a conventional picture taken and processed all at once. Less common options are line scan imaging, where the image is built during motion, one line at a time, and 3D profiling, where the third dimension of the image (“Z”) is coded into the value of each pixel.
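Line scan acquisition can be sketched as a loop that appends one sensor row per increment of motion. The `read_sensor_line` function here is a hypothetical stand-in returning synthetic data; a real system reads a one-line sensor triggered by a motion encoder.

```python
# Sketch of line-scan imaging: the image is assembled one row at a
# time as the part moves past a single-line sensor. The "sensor" is a
# stand-in function returning synthetic values for illustration.
def read_sensor_line(position: int, width: int = 4):
    # Hypothetical one-line capture at the current conveyor position.
    return [position * 10 + col for col in range(width)]

image = []
for position in range(3):        # three encoder ticks of motion
    image.append(read_sensor_line(position))

print(len(image), len(image[0]))  # rows grow with motion; width is fixed
```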

Figure 2. Enhanced image, with found templates marked with yellow rectangles.
Points on a plane of interest vary in their distance from the camera, changing their apparent size; this issue is accentuated when the camera aim is not perpendicular to the surface. Optics may introduce barrel or pincushion distortion. Barrel distortion bulges lines outward in the center, like the lines or staves on a wooden barrel; pincushion does the opposite. A distortion correction tool is often used to remove these flaws. During a “teaching” stage, a known accurate array (such as a rectangular grid of dots) is placed at the plane of interest. The tool views the (distorted) image, and determines the image transformation required to correct it. During the “run” phase, this transformation is executed on each image.
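A common simplification of barrel and pincushion distortion is a single-coefficient radial model. The sketch below applies that model to point coordinates and inverts it by fixed-point iteration; the coefficient value is an assumption, and a real tool estimates it during the “teaching” stage from the dot-grid image.

```python
# Sketch of radial distortion correction on point coordinates, using
# the common one-coefficient model x_d = x_u * (1 + k * r_u^2).
# K is an assumed value; a real tool estimates it from a dot grid.
K = -0.1  # negative k models barrel distortion (assumed magnitude)

def distort(xu, yu):
    """Forward model: where an undistorted point lands in the image."""
    factor = 1 + K * (xu * xu + yu * yu)
    return xu * factor, yu * factor

def undistort(xd, yd, iterations=20):
    """Invert the model by fixed-point iteration, refining the
    undistorted estimate from the distorted starting point."""
    xu, yu = xd, yd
    for _ in range(iterations):
        factor = 1 + K * (xu * xu + yu * yu)
        xu, yu = xd / factor, yd / factor
    return xu, yu

xd, yd = distort(0.5, 0.5)   # a grid dot, pulled inward by barrel distortion
xu, yu = undistort(xd, yd)   # correction recovers the true location
print(round(xu, 4), round(yu, 4))
```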

#2 Find the object of interest: “Finding” the object requires creating a distinction between the object of interest and everything else that is in the field of view, including the background (such as a conveyor) or other objects. Here are some common methods:
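One widely used finding method, template matching (the yellow rectangles in Figure 2 mark found templates), can be sketched as a search that slides a small reference patch over the image and scores each position. The sum-of-absolute-differences score and the synthetic data below are illustrative assumptions; production tools use more robust, rotation-tolerant variants.

```python
# Sketch of template matching: slide a template over the image and
# score each position by the sum of absolute differences (SAD);
# lower is better. Image and template values are synthetic.
image = [
    [0, 0, 0, 0, 0],
    [0, 9, 9, 0, 0],
    [0, 9, 9, 0, 0],
    [0, 0, 0, 0, 0],
]
template = [
    [9, 9],
    [9, 9],
]

def sad(image, template, top, left):
    th, tw = len(template), len(template[0])
    return sum(
        abs(image[top + r][left + c] - template[r][c])
        for r in range(th) for c in range(tw)
    )

h, w = len(image), len(image[0])
th, tw = len(template), len(template[0])
best = min(
    ((top, left) for top in range(h - th + 1) for left in range(w - tw + 1)),
    key=lambda pos: sad(image, template, *pos),
)
print(best)  # (row, col) of the best-matching position
```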


