Project - Palcom

2007 OTDK Documentation

Goals

The topic of this presentation is the design and implementation of a robot capable of autonomous navigation within a homogeneous, weakly textured environment. Three autonomous navigation types have been defined: obstacle-avoiding “free fall” navigation, line following, and object following. First, I will talk about the overall design of the system, then I will summarize the individual components in a bottom-up approach: the hardware used, followed by the controlling software.

 

The Construction

This figure shows the architectural design of the system; I will discuss the separate modules in more detail later. The implementation is based on a client-server architecture: the mobile robot is the client, and the controlling computer is the server. The robot is equipped with a wireless camera, which broadcasts its signal at 1.2 GHz to the receiver. The camera's output is relayed through a TV tuner to the main program, which controls the robot automatically using the images gained from the camera as its only input.

 

RC Controlling

The base of the first robot was an RC car, where the connection between the PC and the remote controller was made through an optocoupler. Using this architecture, we can send nine different control values through the parallel port. The disadvantage of this approach was the discrete control: both the speed and the direction settings were very coarse. In order to achieve more precise navigation, the car was replaced with a precision-controlled one.
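
As a rough illustration, the nine control values could be driven onto the parallel port data pins as in the sketch below. The pin encoding (3 direction states x 3 speed states), the 0x378 port address, and the use of the inpout32.dll driver are assumptions for illustration, not the project's actual wiring.

using System;
using System.Runtime.InteropServices;

static class ParallelPortControl
{
    // inpout32.dll is a common user-mode driver for raw parallel-port
    // access on Windows; its use here is an assumption.
    [DllImport("inpout32.dll")]
    private static extern void Out32(short portAddress, short data);

    private const short LptBase = 0x378; // typical LPT1 data register (assumed)

    // Hypothetical encoding of the nine values: direction (0..2) on data
    // bits 0-1, speed (0..2) on data bits 2-3.
    public static void Send(int direction, int speed)
    {
        if (direction < 0 || direction > 2 || speed < 0 || speed > 2)
            throw new ArgumentOutOfRangeException();
        short data = (short)((direction & 0x03) | ((speed & 0x03) << 2));
        Out32(LptBase, data);
    }
}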

Model RC Controlling

For the second approach, a Model RC car is used as the base of the robot, which has the advantage of a variable communication frequency, so more than one Model RC can be controlled at the same time. Another important advantage of using a Model RC is precise control: both the direction and the speed can be set within a 0-40 value range. The PC is connected to the remote controller through additional electronics. A PIC microcontroller seemed the best solution for this, due to its capabilities and small size.

The circuitry of the additional electronics is shown in the figure on the right. The remote is controlled by three pulses: the first one is generated every 15 ms; this is the period. The direction of the car is set by the time elapsed between the first and the second pulses; the speed is controlled by the time elapsed between the second and the third.
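
The sketch below illustrates how the 0-40 direction and speed values could map onto the two inter-pulse gaps. Apart from the 15 ms period stated above, all timing constants are assumptions for illustration.

using System;

static class PulseTiming
{
    const double PeriodMs = 15.0;  // the first pulse repeats every 15 ms (from the text)
    const double BaseGapMs = 1.0;  // assumed minimum gap between pulses
    const double StepMs = 0.025;   // assumed increment per control step

    // Maps the 0-40 direction and speed values to the two inter-pulse gaps:
    // first-to-second sets direction, second-to-third sets speed.
    public static (double firstToSecondMs, double secondToThirdMs) Gaps(int direction, int speed)
    {
        if (direction < 0 || direction > 40 || speed < 0 || speed > 40)
            throw new ArgumentOutOfRangeException();
        return (BaseGapMs + direction * StepMs, BaseGapMs + speed * StepMs);
    }
}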

Mobile Robots

The robots are equipped with an optical sensor, which gathers a sufficient amount of information for navigation. As the figures show, while the height of the camera on the RC car is fixed, on the Model RC it can be adjusted, which gives it a larger field of view. The camera is equipped with a PAL optic, which extends the angle of view to 360 degrees.

 

The PAL Optic

Using a PAL optic instead of a perspective one has the advantage that no moving components are needed, and there is no need to refocus to gather information from differing distances. It also reduces the necessary number of sensors and thus the energy use. Another advantage is that the direction and angle of an object can be computed directly from the image this optic provides. The PAL optic projects the 360 degrees of the environment onto 2 dimensions, creating a circular image. The cylindrical projection of the image can be described using a polar coordinate system. Due to the structure of the PAL optic, there is a blind spot in the middle of the image, which must be taken into account while processing it. The main properties of the optic are: the center of the PAL, the inner radius, the outer radius, and the angle of the field of view.
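
A minimal sketch of this polar unwrapping, assuming a grayscale byte[,] image and nearest-neighbour sampling. It uses exactly the optic parameters listed above (center, inner radius, outer radius) and skips the central blind spot by starting at the inner radius.

using System;

static class PalUnwrapper
{
    // Unwraps the circular PAL image into a cylindrical (panoramic) image
    // described by the polar coordinate system: rows index the radius,
    // columns index the angle.
    public static byte[,] Unwrap(byte[,] pal, int cx, int cy,
                                 int innerRadius, int outerRadius, int angleSteps)
    {
        int height = outerRadius - innerRadius;      // one row per radius step
        var panorama = new byte[height, angleSteps];
        for (int a = 0; a < angleSteps; a++)
        {
            double theta = 2.0 * Math.PI * a / angleSteps;
            for (int r = 0; r < height; r++)
            {
                // Sampling starts at innerRadius, so the blind spot is
                // excluded by construction.
                int x = cx + (int)Math.Round((innerRadius + r) * Math.Cos(theta));
                int y = cy + (int)Math.Round((innerRadius + r) * Math.Sin(theta));
                if (x >= 0 && y >= 0 && y < pal.GetLength(0) && x < pal.GetLength(1))
                    panorama[r, a] = pal[y, x];
            }
        }
        return panorama;
    }
}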

 

Controlling Software

The controlling software was created in C#, which has the advantage of rapid prototyping and fast application development. The system is highly modular; the processing of the images is done through a pipeline. The image stream arrives through the Input module, which uses DirectX. The input can be either a test video file or a camera; in both cases, the video is split into separate frames. After separation, the module forwards the frames both to the mapper and to the decision-maker module. The decision maker analyzes each image and sends a percentage value to the Navigation module, which in turn sends the direction and speed values to the PIC using the serial port.
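
A minimal sketch of the final pipeline step, sending the direction and speed values to the PIC over the serial port. The COM port name, baud rate, and two-byte frame format are assumptions, as the actual wire protocol is not specified here.

using System;
using System.IO.Ports;

static class PicLink
{
    // Sends one direction and one speed value (each 0-40, per the text)
    // to the PIC microcontroller over the serial port.
    public static void SendCommand(int direction, int speed)
    {
        using (var port = new SerialPort("COM1", 9600, Parity.None, 8, StopBits.One))
        {
            port.Open();
            // Assumed frame: one direction byte followed by one speed byte.
            port.Write(new byte[] { (byte)direction, (byte)speed }, 0, 2);
        }
    }
}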

Preprocessing

In order to make a valid control decision, several preprocessing methods are used. In the first image, you can see the raw image gained from the camera. Using the RGB filter, the three color channels are checked against minimum and maximum values; if the color of a pixel is within these bounds, it remains intact in the resulting image; otherwise the filter makes it black. This filter can be used in high-contrast environments for very fast image segmentation. The disadvantage of this filter is the difficulty of segmenting several shades, as presented in the second image. Using the HSL filter, the program segments the image by its hue, saturation, and luminance components, which enables segmentation in low-contrast environments. As demonstrated in figure 3, the red objects can be exclusively selected. After a combination of these filters, the resulting image is binarized using the threshold filter. The resulting image contains only the pixels needed for navigation.
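
A sketch of the RGB range filter described above, assuming a byte[h, w, 3] image layout; pixels with any channel outside the per-channel [min, max] range are painted black, as in the text.

static class RgbRangeFilter
{
    // Keeps pixels whose R, G, B channels all fall within [min, max];
    // all other pixels become black. min and max each hold three bounds.
    public static void Apply(byte[,,] image, byte[] min, byte[] max)
    {
        for (int y = 0; y < image.GetLength(0); y++)
            for (int x = 0; x < image.GetLength(1); x++)
            {
                bool inRange = true;
                for (int c = 0; c < 3; c++)
                    if (image[y, x, c] < min[c] || image[y, x, c] > max[c])
                        inRange = false;
                if (!inRange)
                    for (int c = 0; c < 3; c++)
                        image[y, x, c] = 0; // outside the range: paint black
            }
    }
}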

On this slide, we demonstrate the method used for locating objects. In the first figure, you can see a low-contrast image gained from the PAL optic. The HSL filter by itself leaves a lot of noise, as illustrated in the third figure. Thus, we use the pixelate algorithm, which averages the values within a square of pixels. Although this reduces the size of the object, combined with a threshold it cancels out the noise, as demonstrated in the fifth figure. For edge detection, the Canny algorithm was used, utilizing the SharperCV image processing library.
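
A sketch of the pixelate step, assuming a grayscale byte[,] image: each square block is replaced by its average intensity, which suppresses isolated noise pixels before the threshold is applied.

using System;

static class Pixelate
{
    // Replaces each blockSize x blockSize square with its average intensity.
    public static void Apply(byte[,] image, int blockSize)
    {
        int h = image.GetLength(0), w = image.GetLength(1);
        for (int by = 0; by < h; by += blockSize)
            for (int bx = 0; bx < w; bx += blockSize)
            {
                int yEnd = Math.Min(by + blockSize, h);
                int xEnd = Math.Min(bx + blockSize, w);
                int sum = 0, count = 0;
                for (int y = by; y < yEnd; y++)
                    for (int x = bx; x < xEnd; x++) { sum += image[y, x]; count++; }
                byte avg = (byte)(sum / count);
                for (int y = by; y < yEnd; y++)
                    for (int x = bx; x < xEnd; x++) image[y, x] = avg;
            }
    }
}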

Decision Maker

The robot has been tested using three navigation types: line following, object following, and “free fall” navigation (which refers to moving forward in the most obstacle-free direction). One common algorithm is used for all three navigation types. The algorithm determines a line of sight, which points toward the center of the image and begins at a distance of r1 from the center point. The optimal length of the line of sight can be set using the distance value. The algorithm measures the pixel intensities under the line of sight starting from 90 degrees (straight ahead), scanning at each iteration first left, then right, until it reaches 90° ± the scanning-degree value. For every line of sight, it sums the pixel intensities and checks whether the sum exceeds the threshold minimum; the direction with the maximal intensity sum is used as the navigation direction.
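
A sketch of this scanning loop; the parameter names (r1, distance, scanning degrees, threshold minimum) mirror the text, while the exact sampling geometry and the fallback to 90 degrees when no sum passes the threshold are assumptions.

using System;

static class DecisionMaker
{
    // Scans lines of sight from 90 degrees (straight ahead) alternately
    // left and right, summing pixel intensities along each line; the
    // direction with the maximal sum above thresholdMin wins.
    public static double FindDirection(byte[,] image, int cx, int cy, int r1,
                                       double distance, double scanDegrees,
                                       double stepDegrees, double thresholdMin)
    {
        double bestAngle = 90.0, bestSum = double.MinValue;
        for (double offset = 0; offset <= scanDegrees; offset += stepDegrees)
            foreach (double sign in new[] { 1.0, -1.0 }) // first left, then right
            {
                double angle = 90.0 + sign * offset;
                double sum = SumAlongLine(image, cx, cy, r1, distance, angle);
                if (sum >= thresholdMin && sum > bestSum) { bestSum = sum; bestAngle = angle; }
                if (offset == 0) break; // evaluate 90 degrees itself only once
            }
        return bestAngle; // falls back to straight ahead if nothing passed
    }

    static double SumAlongLine(byte[,] img, int cx, int cy, int r1, double dist, double angleDeg)
    {
        double rad = angleDeg * Math.PI / 180.0, sum = 0;
        for (double r = r1; r <= r1 + dist; r += 1.0) // start past the blind spot
        {
            int x = cx + (int)(r * Math.Cos(rad)), y = cy - (int)(r * Math.Sin(rad));
            if (x >= 0 && y >= 0 && y < img.GetLength(0) && x < img.GetLength(1))
                sum += img[y, x];
        }
        return sum;
    }
}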

Mapper Module

Although the mapper module was still under construction at the time of this presentation, some early results are already available. One of the major milestones in this direction was the ability to convert the image received from the PAL optic into a virtual top-view image, using a pre-defined curve that describes the bending of the optic. These images will be used to search for and track characteristic features, which in turn will be used to determine the location and heading of the robot, utilizing a Kalman filter. After localizing the robot, the module will rotate the image to reflect the initial direction of the robot; the resulting image will be merged into a global map. This global map will represent the virtual top view of the environment the robot has already visited.
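
A sketch of this top-view conversion, where the pre-defined curve is represented as a lookup table mapping each image radius to a ground distance; the table contents, the scale factor, and the grayscale layout are assumptions.

using System;

static class TopViewMapper
{
    // Projects the circular PAL image onto a virtual top view. The curve
    // describing the bending of the optic is given as groundDistByRadius:
    // one ground-distance entry per image radius (assumed calibration data).
    public static byte[,] ToTopView(byte[,] pal, int cx, int cy,
                                    double[] groundDistByRadius, int outSize, double scale)
    {
        var top = new byte[outSize, outSize];
        int oc = outSize / 2; // the robot sits at the centre of the top view
        for (int y = 0; y < pal.GetLength(0); y++)
            for (int x = 0; x < pal.GetLength(1); x++)
            {
                double r = Math.Sqrt((x - cx) * (x - cx) + (y - cy) * (y - cy));
                int ri = (int)r;
                if (ri >= groundDistByRadius.Length) continue; // outside calibrated range
                double g = groundDistByRadius[ri] * scale;      // ground distance in pixels
                double theta = Math.Atan2(y - cy, x - cx);
                int tx = oc + (int)(g * Math.Cos(theta)), ty = oc + (int)(g * Math.Sin(theta));
                if (tx >= 0 && ty >= 0 && tx < outSize && ty < outSize)
                    top[ty, tx] = pal[y, x];
            }
        return top;
    }
}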

Conclusion

In summary, the following conclusions can be drawn: an easily reproducible system was developed, which contains reusable components, thus giving a stable base for further similar research and development in the fields of robotics and computer vision. The assembled system contains only low-cost elements, making it relatively easy to reproduce. To improve the system in the short term, we are planning a dynamic decision-support module capable of storing a map of the environment the robot has already visited.
