I am a researcher and project manager at Tecnalia, Donostia, in the Spanish Basque Country. I work in the Medical Robotics group of the Health Division, as well as in the Advanced Manufacturing group of the Industry and Transport Division. I am involved in the development of technological solutions for physical human-robot interaction, vision-based robotic manipulation, … I am also very interested in software architecture, within (or outside) the ROS framework.
PhD in Computer Science, 2004
Université de Rennes I
Master of Research in Image and Artificial Intelligence, 2001
Université de Rennes I
Engineer Degree in Computer Science, 2001
INSA of Rennes
(2021-2025)
(2018-2021)
(2018-2021)
(2017-2020)
(2015-2018)
(2012-2015)
(2011-2015)
Assistive Robotics (2010-2013)
(2009)
Vision-based wheelchair control (2008-2009)
Work conducted at IRISA (2001-2006)
a robotic butler for injured people (2006-2008)
Deep learning methods have been successfully applied to image processing, mainly using 2D vision sensors. Recently, the rise of depth cameras and other similar 3D sensors has opened the field for new perception techniques. Nevertheless, 3D convolutional neural networks perform slightly worse than other 3D deep learning methods, and even worse than their 2D versions. In this paper, we propose to improve 3D deep learning results by transferring the pretrained weights learned in 2D networks to their corresponding 3D versions. Using an industrial object recognition context, we have analyzed different combinations of 3D convolutional networks (VGG16, ResNet, Inception ResNet, and EfficientNet), comparing their recognition accuracy. The highest accuracy, 0.9217, is obtained with EfficientNetB0 using extrusion, which is comparable to state-of-the-art methods. We also observed that the transfer approach improved the accuracy of the 3D version of Inception ResNet by up to 18% with respect to the 3D approach alone.
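The abstract does not spell out the transfer mechanism, but a common way to reuse 2D pretrained weights in a 3D network is to "inflate" each 2D convolution kernel by replicating it along the new depth axis and rescaling, as in I3D-style inflation. The sketch below is illustrative only; the function name and kernel layout are assumptions, not the paper's implementation.

```python
import numpy as np

def inflate_2d_kernel(w2d: np.ndarray, depth: int) -> np.ndarray:
    """Inflate a 2D conv kernel of shape (kh, kw, c_in, c_out) into a
    3D kernel of shape (depth, kh, kw, c_in, c_out) by replicating it
    along the new depth axis. Dividing by `depth` keeps the layer
    response unchanged when the 3D input is a stack of identical
    2D slices (I3D-style inflation)."""
    w3d = np.repeat(w2d[np.newaxis, ...], depth, axis=0)
    return w3d / depth

# Example: a 3x3 kernel with 3 input and 64 output channels
w2d = np.random.randn(3, 3, 3, 64).astype(np.float32)
w3d = inflate_2d_kernel(w2d, depth=3)
assert w3d.shape == (3, 3, 3, 3, 64)
# Summing the replicated slices recovers the original 2D kernel
assert np.allclose(w3d.sum(axis=0), w2d, atol=1e-6)
```

The rescaling is what makes the 3D network start from a state functionally close to its pretrained 2D counterpart, rather than from random weights.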
This study aims to evaluate different combinations of features and algorithms to be used in the control of a prosthetic hand wherein both the configuration of the fingers and the gripping forces can be controlled. This requires identifying machine learning algorithms and feature sets to detect both intended force variation and hand gestures in EMG signals recorded from upper-limb amputees. However, despite decades of research into pattern recognition techniques, each new problem requires researchers to find a suitable classification algorithm, as there is no such thing as a universal 'best' solution. Considering different techniques and data representations is therefore fundamental to achieving the most effective results. To this end, we employ a publicly available database recorded from amputees to evaluate different combinations of features and classifiers. Analysis of data from 9 different individuals shows that, both for classic features and for time-dependent power spectrum descriptors (TD-PSD), the proposed logarithmically scaled version of the current window plus previous window achieves the highest classification accuracy. Using linear discriminant analysis (LDA) as a classifier and applying a majority-voting strategy to stabilize the individual window classification, we obtain 88% accuracy with classic features and 89% with TD-PSD features.
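The majority-voting step mentioned above can be sketched with plain Python: each per-window prediction is replaced by the most frequent label among the last few windows, which filters out isolated misclassifications. The vote length and labels below are illustrative assumptions, not the paper's parameters.

```python
from collections import Counter

def majority_vote(window_preds, vote_len=5):
    """Stabilize per-window classifier outputs by replacing each
    prediction with the majority label over the last `vote_len`
    windows (ties broken by first occurrence in the vote window)."""
    smoothed = []
    for i in range(len(window_preds)):
        recent = window_preds[max(0, i - vote_len + 1): i + 1]
        smoothed.append(Counter(recent).most_common(1)[0][0])
    return smoothed

preds = ["rest", "grip", "rest", "rest", "grip", "grip", "grip", "rest", "grip"]
print(majority_vote(preds, vote_len=3))
# → ['rest', 'rest', 'rest', 'rest', 'rest', 'grip', 'grip', 'grip', 'grip']
```

The cost of this smoothing is a small decision latency (here, up to two extra windows), which is the usual trade-off when stabilizing EMG-based control.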
In the European project EUROBENCH, we are developing a framework for benchmarking the performance of bipedal systems: from humans to humanoids through wearable robots. Fair benchmarking requires defining and sharing clear and complete protocols so that bipedal systems can be studied and compared under similar and reproducible conditions. Although experimental methods and system comparisons are common scientific tasks, the descriptions of the experimental protocols followed are rarely complete enough to allow replication. We list, in this article, the information required to properly define a protocol (e.g. experiment objectives, testbeds, type of collected and processed data, performance indicators used to score and compare experiments). Agreeing on a common terminology for benchmarking concepts will ease the evaluation of new technologies and promote communication between the different stakeholders involved in the development and use of bipedal systems.
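The protocol fields the article enumerates can be gathered into a small structured record. The sketch below is purely illustrative: the field names and example values are assumptions for exposition, not the EUROBENCH schema.

```python
from dataclasses import dataclass, field

# Hypothetical record grouping the protocol information listed in the
# article; field names are illustrative, not the EUROBENCH schema.
@dataclass
class BenchmarkProtocol:
    name: str
    objective: str                 # what the experiment evaluates
    testbed: str                   # physical setup and equipment
    collected_data: list = field(default_factory=list)   # raw sensor streams
    processed_data: list = field(default_factory=list)   # derived signals
    performance_indicators: list = field(default_factory=list)  # scoring metrics

walking = BenchmarkProtocol(
    name="treadmill_walking",
    objective="assess gait stability of a wearable exoskeleton",
    testbed="instrumented treadmill with motion capture",
    collected_data=["joint angles", "ground reaction forces"],
    processed_data=["step length", "cadence"],
    performance_indicators=["margin of stability", "metabolic cost"],
)
print(walking.name, walking.performance_indicators)
```

Making every field explicit and machine-readable is what allows two labs to check that they ran the same experiment before comparing scores.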
This article deals with the 2D image-based recognition of industrial parts. Methods based on histograms are well known and widely used, but it is hard to find the best combination of histograms (the most distinctive one, for instance) for each situation without a high level of user expertise. We propose a descriptor subset selection technique that automatically selects the most appropriate descriptor combination, and that outperforms approaches based on single descriptors. We have considered both backward and forward selection mechanisms. Furthermore, to recognize the industrial parts, a supervised classification is used with the global descriptors as predictors. Several classification approaches are compared. Given our application, the best results are obtained with the Support Vector Machine using a combination of descriptors, increasing the F1 score by 0.031 with respect to the best single descriptor.
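The forward variant of such a selection can be sketched as a greedy loop: starting from an empty set, repeatedly add the descriptor that most improves a scoring function (e.g. the cross-validated F1 of a classifier trained on the concatenated histograms), stopping when no candidate helps. The function and the toy scorer below are illustrative assumptions, not the paper's implementation.

```python
def forward_select(descriptors, score_fn, max_size=None):
    """Greedy forward selection: grow the descriptor subset one
    element at a time, keeping the addition that maximizes
    `score_fn`, and stop when no candidate improves the score."""
    selected, best_score = [], float("-inf")
    remaining = list(descriptors)
    while remaining and (max_size is None or len(selected) < max_size):
        candidate_scores = {d: score_fn(selected + [d]) for d in remaining}
        best = max(candidate_scores, key=candidate_scores.get)
        if candidate_scores[best] <= best_score:
            break  # no candidate improves the score: stop
        selected.append(best)
        remaining.remove(best)
        best_score = candidate_scores[best]
    return selected, best_score

# Toy scorer: the pair ('color', 'hu') scores highest
toy = {("hu",): 0.70, ("color",): 0.72, ("color", "hu"): 0.80}
score = lambda s: toy.get(tuple(sorted(s)), 0.60)
print(forward_select(["hu", "color", "lbp"], score))
# → (['color', 'hu'], 0.8)
```

Backward selection is the mirror image: start from the full descriptor set and greedily drop the descriptor whose removal hurts the score least.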
This paper is related to the observation of a human operator manipulating objects in order to teach a robot to reproduce the action. Assuming the robotic system is equipped with basic manipulation skills, we focus here on the automatic segmentation of the observed manipulation, extracting the relevant key frames in which the manipulation is best described. The proposed segmentation method is based on the instantaneous work, and presents the advantage of not depending on the force and pose sensing locations. The experiments concern two different manipulation skills, sliding and folding. We demonstrate in different settings that such a segmentation method is effective.
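The idea can be sketched with instantaneous mechanical power, the rate of work: p(t) = f(t) · v(t). Being a scalar product of a wrench-like and a twist-like quantity expressed in matching frames, it does not depend on where the force and pose sensors are mounted, and key frames can be placed where effort starts or stops being applied. The threshold and signals below are illustrative assumptions, not the paper's data.

```python
import numpy as np

def segment_by_power(forces, velocities, threshold=0.05):
    """Emit a key-frame index whenever the instantaneous power
    p(t) = f(t) . v(t) crosses the threshold, i.e. whenever effort
    starts or stops being applied. `forces` and `velocities` are
    (T, 3) arrays expressed in matching frames."""
    power = np.einsum("ij,ij->i", forces, velocities)
    active = np.abs(power) > threshold
    # indices where the active/idle state flips
    return np.flatnonzero(np.diff(active.astype(int))) + 1

# Toy trajectory: force applied only in the middle of the motion
t = np.linspace(0.0, 1.0, 100)
f = np.stack([np.where((t > 0.3) & (t < 0.7), 1.0, 0.0)] * 3, axis=1)
v = np.full((100, 3), 0.1)
print(segment_by_power(f, v))
# → [30 70]
```

On a real recording the power signal would first be low-pass filtered before thresholding, otherwise sensor noise produces spurious key frames.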