Robotics & Machine Intelligence
James Painter, Computer Engineering, Elizabethtown College '08
Dr. Joseph Wunderlich - Project Advisor
Dr. Troy McBride - Project Consultant

Objective

Fully develop the vision system for the Wunderbot IV autonomous robot and adapt it specifically for the June 2008 Intelligent Ground Vehicle Competition (IGVC). This entails the construction of a sturdy camera mount with an appropriate viewing angle, image processing with line parsing, and intelligent motor control.

Camera Mount

The Wunderbot camera mount is part of the new utility pole designed to hold the GPS, digital compass, and wireless router antenna. Atop the pole, set back about 16 inches, is an angled metal bracket onto which the camera is bolted and secured with wing nuts. The wing nuts allow easy fine-tune adjustments to the mounting angle.

The position of the camera on the utility pole was an important consideration because it determines the viewable region in front of the robot. As the downward angle of the camera is increased, the depth of the view decreases, and vice versa. However, as the camera is tilted up, the area directly in front of the robot (a crucial region for avoiding immediate obstacles) drops out of view. Raising the camera higher and moving it farther back (Figs. 1 & 2) has similar consequences for the viewable region. A camera-angle analysis was performed to find the best configuration.

Figures 1 & 2: Viewable region directly in front of the robot using (1) a vertical utility pole and (2) a utility pole angled back at ~75 degrees; meter sticks were arranged on the floor directly in front of the wheels for depth measurement.

As can be seen in Figs. 1 and 2, moving the camera a distance behind the robot's rear bumper establishes a range of view about 85 cm deeper than mounting the camera directly above the bumper. Note, however, that about 25 cm of view directly in front of the vehicle is lost when the depth of view is extended in this way.

Another consideration is the ability to crop edge regions out of the image for faster acquisition and subsequent image processing. A larger field of view allows more unnecessary area to be removed from the image. The table below shows the elapsed-time savings for processing images of different sizes.

Top Edge Cropped    Processing Time Speedup
15% (153 lines)     16% (90 ms)
24% (246 lines)     25% (140 ms)

By these results, it was decided that moving the camera back would yield significant improvements in line detection, both in processing time and in the robot's ability to detect lines a greater distance ahead on the path. The figure below shows the change in the camera's viewable region when it is mounted above the rear bumper (blue triangle) versus behind the bumper (dashed lines).

Image Processing

Image processing is performed in the camera's proprietary software, DVT Intellect. First, a dilate filter is applied using a 3x3 kernel. Next, a Hough-transform line-detection algorithm is used. Multiple thresholds are applied to filter out noise and extraneous small objects such as leaves, dirt patches, and chipmunks. Among these is a line-thickness sensor, which first scans the image for a 75% intensity contrast between neighboring pixels. The first three chains of 50 pixels that satisfy this requirement are accepted as lines. (Alternatively, all chains can be found and those with the highest contrast accepted, but that process takes about three times as long.) Once lines have been established, they are considered valid only if they are separated by less than 300 pixels. They must also be straight, deviating by no more than 75 pixels from the averaged center line. These qualifiers help eliminate large detected regions such as shadows. A rough sketch of this pipeline appears below.
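The DVT Intellect internals are proprietary, so the following is only a minimal OpenCV sketch of the stages just described, not the camera's actual implementation. The Hough stage is approximated here by a contour-based chain scan so that the quoted chain-length, straightness, and separation thresholds can be applied directly; all numeric values are taken from the text above rather than from the camera's real configuration.

```python
import cv2
import numpy as np

# Thresholds quoted in the poster text; treated as assumptions here.
MIN_CHAIN_PX = 50        # minimum pixel-chain length accepted as a line
MAX_DEVIATION_PX = 75    # straightness: max deviation from the fitted center line
MAX_SEPARATION_PX = 300  # valid line pairs must be closer than this

def detect_course_lines(gray):
    """Return centroids of up to three mutually valid course lines."""
    # 1) Dilate with a 3x3 kernel to thicken faint line pixels.
    dilated = cv2.dilate(gray, np.ones((3, 3), np.uint8))

    # 2) High-threshold binarization stands in for the 75% contrast scan.
    _, binary = cv2.threshold(dilated, int(0.75 * 255), 255, cv2.THRESH_BINARY)

    # 3) Treat each connected pixel chain as a line candidate
    #    (OpenCV 4.x return signature).
    chains, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                 cv2.CHAIN_APPROX_NONE)
    lines = []
    for chain in chains:
        if len(chain) < MIN_CHAIN_PX:
            continue  # too short: leaf, dirt patch, chipmunk...
        # Fit an averaged center line, then measure straightness as the
        # maximum perpendicular deviation of the chain's pixels from it.
        vx, vy, x0, y0 = cv2.fitLine(chain, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
        pts = chain.reshape(-1, 2).astype(np.float64)
        deviation = np.abs((pts[:, 0] - x0) * vy - (pts[:, 1] - y0) * vx)
        if deviation.max() <= MAX_DEVIATION_PX:
            lines.append(pts.mean(axis=0))  # keep the chain's centroid
        if len(lines) == 3:
            break  # accept the first three qualifying chains

    # 4) Lines are valid only if separated by less than 300 pixels.
    return [line for line in lines
            if all(np.linalg.norm(line - other) < MAX_SEPARATION_PX
                   for other in lines)]
```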
Motor Control

Once line-position data has been found, the camera sends it to the on-board PC over a wired TCP/IP network connection. LabVIEW is used to record the lines on a global map and to control the robot's motion accordingly. The location of the lines, in terms of both depth (distance ahead) and lateral position, is used to calculate the sharpness of each turn and, when necessary, to back up. The image above shows a screenshot taken during an IGVC simulation: image processing successfully detected the dashed white lines, and the auto-generated fine yellow lines were then used to guide the robot toward the left. Controls were designed to adjust numerous parameters, such as overall target speed, backup speed, the proximity threshold for backing up, turning aggressiveness (weighting depth and lateral position separately), and minimum line width. Minimal sketches of the camera-to-PC link and of this steering rule follow.
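The transcript specifies only that line data travels over wired TCP/IP; the listener below is a hedged Python sketch in which the port number and the comma-separated "depth,lateral" message format are invented for illustration and are not DVT Intellect's actual protocol.

```python
import socket

CAMERA_PORT = 5025  # hypothetical port; the real value is not in the transcript

def receive_lines():
    """Yield (depth_px, lateral_px) pairs as the camera reports them."""
    with socket.create_server(("", CAMERA_PORT)) as server:  # Python 3.8+
        conn, _ = server.accept()
        with conn, conn.makefile("r") as stream:
            for message in stream:               # assumed format: "142,-37\n"
                depth, lateral = message.strip().split(",")
                yield int(depth), int(lateral)
```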
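The LabVIEW steering logic itself is not reproduced in the transcript, so the following is a minimal sketch of the rule it describes, assuming a simple proportional law; every parameter name (target_speed, backup_proximity, and so on) is a hypothetical stand-in for the tunable controls listed above.

```python
from dataclasses import dataclass

@dataclass
class SteeringParams:
    # Hypothetical stand-ins for the tunable controls described above.
    target_speed: float = 1.5      # m/s, overall forward speed
    backup_speed: float = 0.5      # m/s, reverse speed when a line is too close
    backup_proximity: float = 0.4  # m, line depth that triggers a backup
    depth_gain: float = 1.0        # turning aggressiveness vs. line depth
    lateral_gain: float = 2.0      # turning aggressiveness vs. lateral offset

def steer(depth_m, lateral_m, p=SteeringParams()):
    """Return (forward_speed, turn_command) for the nearest detected line.

    depth_m   -- distance ahead to the line, in meters
    lateral_m -- signed lateral offset of the line (negative = left of center)
    """
    if depth_m < p.backup_proximity:
        # The line is dangerously close: back up while turning away from it.
        return -p.backup_speed, (-1.0 if lateral_m > 0 else 1.0)

    # Turn away from the line; nearer lines (small depth) and more centered
    # lines (small |lateral|) both demand a sharper turn, with depth and
    # lateral position weighted by separate gains as the text describes.
    sharpness = (p.depth_gain / max(depth_m, 0.1)
                 + p.lateral_gain / max(abs(lateral_m), 0.1))
    turn = -sharpness if lateral_m > 0 else sharpness
    return p.target_speed, turn
```

For example, a line detected 0.8 m ahead and 0.3 m to the right would produce a firm left turn at the target speed, while any line inside 0.4 m would trigger a reversal at the backup speed.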

