I am a researcher and project manager at Tecnalia, Donostia, in the Spanish Basque Country. I work in the Medical Robotics group of the Health Division, as well as in the Advanced Manufacturing group of the Industry and Transport Division. I am involved in the development of technological solutions for physical human-robot interaction, vision-based robotic manipulation, and related topics. I am also very interested in software architecture, within (or outside) the ROS framework.
PhD in Computer Science, 2004
Université de Rennes I
Master of Research in Image and Artificial Intelligence, 2001
Université de Rennes I
Engineer Degree in Computer Science, 2001
INSA of Rennes
(2021-2025)
(2018-2021)
(2018-2021)
(2017-2020)
(2015-2018)
(2012-2015)
(2011-2015)
Assistive Robotics (2010-2013)
(2009)
Vision-based wheelchair control (2008-2009)
Work conducted at IRISA (2001-2006)
a robotic butler for injured people (2006-2008)
Deep learning methods have revolutionized computer vision since the appearance of AlexNet in 2012. Nevertheless, 6-degrees-of-freedom (6DoF) pose estimation remains a difficult task to perform precisely. We therefore propose two ensemble techniques to refine the poses produced by different deep learning 6DoF pose estimation models. The first technique, merge ensemble, combines the outputs of the base models geometrically. In the second, stacked generalization, a machine learning model is trained on the outputs of the base models and produces the refined pose. The merge method improves on the performance of the base models on the LMO and YCB-V datasets and outperforms the stacking strategy on the pose estimation task.
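The paper's exact merge procedure is not reproduced here; as a minimal sketch of one common way to combine several 6DoF pose estimates geometrically, assuming rotations are given as unit quaternions, one can average the translations and compute the rotation mean with Markley's eigenvector method:

```python
import numpy as np

def merge_poses(translations, quaternions):
    """Combine 6DoF pose estimates from several base models.

    translations: (N, 3) array of translation vectors.
    quaternions:  (N, 4) array of unit quaternions (x, y, z, w).
    Returns the mean translation and an average rotation computed as
    the principal eigenvector of the sum of quaternion outer products
    (Markley's method), which is robust to the q / -q sign ambiguity.
    """
    t_mean = np.mean(translations, axis=0)
    M = sum(np.outer(q, q) for q in quaternions)
    eigvals, eigvecs = np.linalg.eigh(M)
    q_mean = eigvecs[:, -1]            # eigenvector of the largest eigenvalue
    q_mean /= np.linalg.norm(q_mean)   # re-normalize to a unit quaternion
    return t_mean, q_mean
```

This is only one plausible instantiation of "combining the outputs geometrically"; the function name and interface are illustrative.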
In this paper, an innovative algorithm for averaging a set of multivariate time series of different lengths, based on Constrained Dynamic Time Warping (CDTW), is proposed. The approach relies on CDTW to provide the non-linear alignment of the multivariate time series, and employs the proposed Minimum Cost Averaging (MCA) technique to identify the optimum matches and obtain equal-length time series. MCA-CDTW is a task-agnostic approach that, after selecting a reference curve, transforms the remaining demonstrations in the set into new curves that are time-aligned with the reference. From these transformed curves, not only the mean but also the signal variability can be extracted directly. The technique provides smooth mean curves even when there are large deviations between the demonstrations in the set, while the complexity of the proposed algorithm remains significantly lower than that of other averaging techniques from the literature. When learning techniques are used to teach a motion to a robotic system, obtaining smooth trajectories is important to achieve good robotic behaviors. The new MCA-CDTW algorithm is tested and compared on two databases: a literature database in which humans move a robotic arm through kinaesthetic teaching, and a set of recordings of a teleoperated robotic arm performing laboratory manipulation. On both datasets, the new approach is shown to provide smooth average trajectories.
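MCA-CDTW itself is defined in the paper; as a rough illustration of the underlying idea — warping a demonstration onto a reference timeline before averaging — here is a simplified stand-in using plain (unconstrained) DTW, where each reference sample collects the mean of the demonstration points matched to it:

```python
import numpy as np

def dtw_align(ref, demo):
    """Align `demo` (M, d) to `ref` (N, d) with plain DTW and return a
    length-N series whose i-th sample is the mean of all demo points
    matched to ref[i]. A simplified stand-in for the paper's MCA-CDTW
    (no warping-window constraint, naive O(N*M) dynamic programming)."""
    N, M = len(ref), len(demo)
    cost = np.full((N + 1, M + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, N + 1):
        for j in range(1, M + 1):
            d = np.linalg.norm(ref[i - 1] - demo[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1],
                                 cost[i - 1, j - 1])
    # Backtrack the optimal warping path.
    i, j, path = N, M, []
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    while i > 0:                       # ensure every ref index has a match
        path.append((i - 1, 0))
        i -= 1
    # Equal-length output: average the demo samples matched to each ref index.
    aligned = np.zeros_like(ref, dtype=float)
    for idx in range(N):
        matched = [demo[j] for (ri, j) in path if ri == idx]
        aligned[idx] = np.mean(matched, axis=0)
    return aligned
```

Once every demonstration is aligned this way, the pointwise mean and variance across the aligned set give the average trajectory and its variability, which is the spirit of the approach described above.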
This article provides a detailed description of the membrane-based sterility testing process, the use case selected in the Horizon Europe project TraceBot to demonstrate the benefit of agile robotics in a laboratory setting and its relevance to the pharmaceutical industry. Based on videos of human operators performing the sterility testing process manually, we detail its different steps, starting from a human perspective but with a more robotic approach in mind, and highlight the major atomic functionalities a robot should have to execute the process at hand. Together with the analysis of the process flow of actions, we list all the elements and objects involved, and analyse the technical and scientific challenges in this process and its application environment, as well as in pharmaceutical processes in general. Finally, with the objective of engaging the community in collaborating on and taking advantage of our use case study, the complete process flow description is made available on a website, together with associated data such as object meshes and a generic robotic setup in a ROS environment.
This study describes the software methodology designed for systematic benchmarking of bipedal systems through the computation of performance indicators from data collected during an experimentation stage. Under the umbrella of the European project Eurobench, we collected approximately 30 protocols with related testbeds and scoring algorithms, aiming to characterize the performance of humanoids, exoskeletons, and/or prostheses under different conditions. The main challenge addressed in this study concerns the standardization of the scoring process to permit a systematic benchmark of the experiments. The complexity of this process is mainly due to the lack of consistency in how experimental data are stored and organized, how the input and output of benchmarking algorithms are defined, and how these algorithms are implemented. We propose a simple but efficient methodology for preparing scoring algorithms that ensures reproducibility and replicability of results. This methodology mainly constrains the interface of the software, enabling engineers to develop their metrics in their preferred language. Continuous integration and deployment tools are then used to verify the replicability of the software and to generate, through dockerization, an executable instance independent of the language. This article presents this methodology and points to all the metric and documentation repositories designed with this policy in Eurobench. Applying this approach to other protocols and metrics would ease the reproduction, replication, and comparison of experiments.
Deep learning methods have been successfully applied to image processing, mainly using 2D vision sensors. Recently, the rise of depth cameras and other similar 3D sensors has opened the field to new perception techniques. Nevertheless, 3D convolutional neural networks perform slightly worse than other 3D deep learning methods, and even worse than their 2D versions. In this paper, we propose to improve 3D deep learning results by transferring the pretrained weights learned in 2D networks to their corresponding 3D versions. In an industrial object recognition context, we have analyzed different combinations of 3D convolutional networks (VGG16, ResNet, Inception ResNet, and EfficientNet), comparing their recognition accuracy. The highest accuracy, 0.9217, is obtained with EfficientNetB0 using extrusion, which is comparable to state-of-the-art methods. We also observed that the transfer approach improves the accuracy of the 3D version of Inception ResNet by up to 18% with respect to the 3D approach alone.
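The paper's precise transfer scheme is not reproduced here; as a minimal sketch of one standard way to extrude a pretrained 2D kernel into a 3D one (replicate it along the new depth axis and rescale, so a constant input produces the same activation as the original 2D layer), assuming channels-last kernel layout:

```python
import numpy as np

def extrude_2d_to_3d(w2d, depth):
    """Extrude a 2D conv kernel of shape (kh, kw, c_in, c_out) into a
    3D kernel of shape (depth, kh, kw, c_in, c_out) by replicating it
    along the new depth axis and dividing by `depth`, so the total
    kernel mass (and the response to a depth-constant input) is kept."""
    w3d = np.repeat(w2d[np.newaxis, ...], depth, axis=0)
    return w3d / depth
```

The resulting arrays can then seed the weights of the corresponding 3D layers before fine-tuning, instead of training the 3D network from random initialization.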