Visual servoing

Research activity at IRISA (2001-2006)

My research at IRISA was carried out within the French national projects Predit Mobivip and Robea Bodega. During these years in Brittany, I was a member of the Lagadic and Texmex research teams.

Robotic navigation

This work addresses a topological approach to vision-based navigation. Within this framework, the navigation space is described by an image database acquired offline. The localization of the robotic system is performed using image retrieval methods: since the current position of the robot is characterized by the image acquired by the on-board camera, localization amounts to searching the database for the views whose content is most similar. The database is organized as a connectivity graph, so a path between the two images describing the initial and desired positions can be defined by a shortest-path search. Original visual servoing methods are then proposed to control the robot motion online, by comparing the current view with the image path.
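
A minimal sketch of the planning step, assuming the database is already organized as adjacency lists and that localization has matched the initial and desired views to graph nodes; the graph and image names below are hypothetical.

```python
from collections import deque

def shortest_image_path(graph, start, goal):
    """Breadth-first search returning the list of database views
    linking the start view to the goal view."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None  # the two views are not connected in the database

# Hypothetical database of six views; an edge links two views
# sharing enough visual content.
graph = {
    "img0": ["img1"], "img1": ["img0", "img2", "img4"],
    "img2": ["img1", "img3"], "img3": ["img2"],
    "img4": ["img1", "img5"], "img5": ["img4"],
}
print(shortest_image_path(graph, "img0", "img5"))  # ['img0', 'img1', 'img4', 'img5']
```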

This work has been integrated on the robotic car Cycab that belongs to IRISA. Experiments in real environments have shown the efficiency of the approach, as well as its robustness to illumination changes, local signal disturbances and changes of visual sensor. They also shed light on the critical parts that should be investigated to improve the current version.
Work in collaboration with Sinisa Segvic, Albert Diosi, Patrick Gros and François Chaumette.

Landmark management and tracking during navigation

Developing an autonomous vision-based navigation system requires recognizing visual information that can be extracted from the views acquired by the camera. These landmarks must also be related to the ones detected in the image path. A differential tracker is used to estimate, in each acquired view, the position of the landmarks (Harris points) that were visible in the previous image. We are currently working on improving such trackers to better suit the characteristics of mobile vehicles (some results here).
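
To illustrate the principle, here is a minimal sketch using OpenCV's Harris detector and pyramidal Lucas-Kanade tracker, a common differential tracker; the tracker developed in this work is not OpenCV's, and the image file names are placeholders.

```python
import cv2

# Two hypothetical consecutive grayscale views from the on-board camera.
prev_frame = cv2.imread("view_000.png", cv2.IMREAD_GRAYSCALE)
curr_frame = cv2.imread("view_001.png", cv2.IMREAD_GRAYSCALE)

# Detect Harris corners in the previous view.
points = cv2.goodFeaturesToTrack(prev_frame, maxCorners=200,
                                 qualityLevel=0.01, minDistance=8,
                                 useHarrisDetector=True)

# Estimate their position in the current view with pyramidal Lucas-Kanade.
new_points, status, _ = cv2.calcOpticalFlowPyrLK(prev_frame, curr_frame,
                                                 points, None)
tracked = new_points[status.ravel() == 1]
print(f"{len(tracked)} landmarks still tracked in the current view")
```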

During navigation, the environment observed by the camera changes continuously. Therefore, the automatic update of the visual landmarks must be considered. A method based on image transfer has been developed to detect when landmarks from the image path enter the camera's field of view. Within this framework, no 3D model is needed, since a local reconstruction is sufficient. This approach has been tested and validated during the real experiments performed with the Cycab.
Work in collaboration with François Chaumette and Sinisa Segvic.
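
One simple instance of image transfer, valid for a locally planar scene, is a homography computed from the landmarks already matched between the current view and a path image, then used to predict where the not-yet-tracked landmarks of that path image should appear. This sketch uses OpenCV and hypothetical point coordinates; it only illustrates the transfer idea, not the exact method developed in this work.

```python
import cv2
import numpy as np

# Landmarks matched between a path image and the current view (hypothetical).
matched_path = np.float32([[10, 12], [200, 40], [60, 180],
                           [220, 200], [120, 30]]).reshape(-1, 1, 2)
matched_curr = np.float32([[15, 20], [190, 55], [70, 170],
                           [210, 190], [118, 42]]).reshape(-1, 1, 2)

# Homography mapping the path image onto the current image plane.
H, _ = cv2.findHomography(matched_path, matched_curr)

# Landmarks of the path image not yet tracked in the current view.
new_landmarks = np.float32([[120, 90], [30, 150]]).reshape(-1, 1, 2)
predicted = cv2.perspectiveTransform(new_landmarks, H)
print(predicted.reshape(-1, 2))  # expected positions in the current view
```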

Related videos

These videos illustrate navigation in a 3D environment. These simulations make it possible to study the control behavior while skipping the tracking step.

In the first video, five degrees of freedom of the camera are controlled (rotation around the optical axis is not considered). In the second video, the camera moves on a plane, like a holonomic mobile robot moving in a corridor.

This video illustrates a vision-based navigation task. Here the environment is planar, and the camera is mounted on the end-effector of a Cartesian arm.

An image database is used to describe the environment that the camera should observe during its navigation. The features matched between each consecutive pair of images are used to control the motion of the robotic system (in the video, the displayed indexes correspond to the pair of images from the path in which these features were initially extracted and matched).

All the displayed crosses are tracked in real time by exhaustive correlation. The green features are the ones used to control the robotic arm. A planar homography is used to estimate the position of new features entering the camera's field of view.

At the end of the sequence, classical image-based visual servoing is performed. The blue crosses correspond to the final desired point positions.

These two videos illustrate my first work on extending classical visual servoing to perform large displacements.

An image path (obtained by image retrieval and shortest-path search) describes the visual environment that the camera should observe during navigation. At each instant, image-based visual servoing is used to make the visual features converge toward their positions observed in the next image of the path.
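
A minimal numpy sketch of the classical image-based control law, v = -lambda * L^+ (s - s*), for normalized point features; the feature coordinates, depths and gain below are hypothetical values for illustration.

```python
import numpy as np

def interaction_matrix(points, depths):
    """Stack the 2x6 interaction matrix of each point (x, y) at depth Z."""
    rows = []
    for (x, y), Z in zip(points, depths):
        rows.append([-1 / Z, 0, x / Z, x * y, -(1 + x**2), y])
        rows.append([0, -1 / Z, y / Z, 1 + y**2, -x * y, -x])
    return np.array(rows)

# Current and desired normalized positions of four point features.
s = np.array([[0.10, 0.05], [-0.20, 0.15], [0.05, -0.10], [-0.15, -0.20]])
s_star = np.array([[0.12, 0.02], [-0.18, 0.12], [0.08, -0.12], [-0.12, -0.22]])
depths = [1.0, 1.2, 0.9, 1.1]  # assumed point depths

L = interaction_matrix(s, depths)
error = (s - s_star).ravel()
v = -0.5 * np.linalg.pinv(L) @ error  # camera velocity (vx, vy, vz, wx, wy, wz)
print(v)
```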

Here, we use the fact that the scene is planar. A homography is estimated between the current and desired positions of the features. This homography is used to project onto the current image plane the features matched with the next image of the path. When enough of them are visible, the current servoing is stopped and the next one is started. Thus, the camera does not have to converge toward each image of the path.

In the second video, path planning is performed at the beginning of each visual servoing step. It estimates, for each instant, the best image position of the features. Using these positions as the desired features yields a constant-velocity visual servoing.
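
A minimal sketch of the underlying idea, assuming a simple linear interpolation of the desired feature positions (the actual planner in this work is more elaborate): feeding the control law a small, slowly moving error yields a roughly constant velocity.

```python
import numpy as np

def planned_desired_features(s_init, s_goal, t, duration):
    """Desired feature positions s*(t) moving from s_init to s_goal."""
    alpha = min(t / duration, 1.0)
    return (1 - alpha) * s_init + alpha * s_goal

# Hypothetical initial and final image positions of two features.
s_init = np.array([[0.10, 0.05], [-0.20, 0.15]])
s_goal = np.array([[0.30, -0.10], [0.05, 0.25]])
for t in (0.0, 2.5, 5.0):
    print(t, planned_desired_features(s_init, s_goal, t, duration=5.0))
```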

The red points are the tracked points. The blue points are their desired positions. The green points are the desired positions of points that are not yet visible. In the second video, the blue points are the desired positions estimated by path planning.

Performed on the Afma6 robotic arm of the Lagadic project, IRISA.

With: Patrick Gros, François Chaumette and Youcef Mezouar.

Qualitative visual servoing

An approach derived from classical visual servoing, called qualitative visual servoing, is proposed to control the motion of a robotic system. The main property of this control law is that it does not require the visual features to converge exactly to a desired value, but only to a confidence interval. We are currently studying the theoretical properties of this original scheme and, through this work, the behavior of control laws that use a weighting matrix to activate or inactivate visual features.

As a first step, qualitative visual servoing has been presented and used to control the visibility of points during a positioning task. Ongoing work investigates the other applications that could benefit from this concept.

This video presents an application of qualitative servoing. Qualitative features are used to define a visibility constraint during a positioning task performed by visual servoing.

Qualitative visual servoing relies on a weighting matrix that activates or inactivates features within the control law. The weight is obtained with a function that realizes a continuous Heaviside-like transition between 0 (total inactivation) and 1 (full activation).

In these two experiments, a qualitative feature is associated with each tracked point and used within the control law. This qualitative feature corresponds to the distance between the considered point and a confidence area defined in the image plane. The feature is activated only when the point gets close to the borders of the confidence area. This additional constraint makes the camera modify its motion to keep the point inside its field of view.
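
To make the weighting concrete, here is a minimal sketch assuming a smoothstep-style transition; the exact function, thresholds and distance convention used in this work may differ.

```python
import numpy as np

def activation_weight(d, d_safe=-20.0, d_limit=0.0):
    """Weight of a qualitative feature given the signed distance d (pixels)
    of the point to the confidence-area border: 0 while the point is safely
    inside (d <= d_safe), 1 at the border (d >= d_limit), with a continuous,
    differentiable transition in between."""
    u = np.clip((d - d_safe) / (d_limit - d_safe), 0.0, 1.0)
    return u * u * (3 - 2 * u)

# Hypothetical distances of four tracked points to the area border.
distances = np.array([-50.0, -15.0, -5.0, 0.0])
W = np.diag(activation_weight(distances))  # weighting matrix of the control law
print(W.diagonal())  # [0.  0.156  0.844  1.]
```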

Both videos present the trajectory of the camera with and without the qualitative features. Without the visibility constraint, some features leave the camera's field of view in the first video, and the positioning task fails in the second one (the classical problem of a large rotation around the optical axis). With the visibility constraint enforced through qualitative features, no point is lost in the first experiment, and the second one succeeds.

Work in collaboration with Nicolas Mansard and François Chaumette.

Anthony Remazeilles
Senior researcher

Interested in any application involving human-robot interaction, software architecture or ROS.