
Learning and Robotics

We remind you that, in order to guarantee access to the meeting rooms for all registrants, registration for the meetings is free but mandatory.

Registration for this meeting is closed.

Registrations

78 members of the GdR ISIS and 96 non-members of the GdR are registered for this meeting.
Room capacity: 200 people.

Announcement

Réunion "Apprentissage et Robotique"

A joint meeting of the GDR ISIS and the GDR Robotique on the theme "Learning and Robotics". Given the current Covid pandemic, the meeting will be virtual and will take place on Zoom. Connection details will be sent directly to registered participants during the morning before the meeting.

The goal of the meeting is to provide an opportunity for exchanges, in the form of talks, between the different communities (robotics, statistical learning, signal and image processing, ...) working on learning for the various aspects of robotics (perception, control, navigation, action/perception loops, etc.).

The day will include invited talks and contributed presentations, for which we are issuing a call for contributions.

The meeting will take place on June 22 at 2:00 pm.

It will include two invited talks, given by Josef Sivic (Inria/ENS) and Justus Piater (Universität Innsbruck).

We are also issuing a call for contributions. Proposal abstracts (about half a page) should be sent to christian.wolf@insa-lyon.fr, david.filliat@ensta-paristech.fr and cedric.demonceaux@u-bourgogne.fr before June 14.

Organization: Cédric Demonceaux, David Filliat, Christian Wolf

Program

(Virtual meeting)

14:00
Introduction to the day (Cédric Demonceaux, David Filliat, Christian Wolf)

14:10
Keynote by Josef Sivic, Inria/ENS, https://www.di.ens.fr/~josef/
"Weakly supervised visual recognition: from Internet Images towards machines that see"

14:40
Keynote by Justus Piater, Universität Innsbruck, https://iis.uibk.ac.at/
"Conditional Neural Movement Primitives"

15:10 "Spotlight" session

The following 9 papers will first be presented as pre-recorded videos (3 min per contribution). Participants will then be able to join dedicated rooms, one room per contribution.

- Frédéric Barbaresco
Machine Learning on Lie Groups for Rigid and Articulated Dynamical Systems: Souriau's Symplectic Model of Statistical Mechanics via Coadjoint Orbits
- Thomas Chaffre, Julien Moras, Adrien Chan-Hon-Tong and Julien Marzat
Sim-to-Real Transfer with Incremental Environment Complexity for Reinforcement Learning of Depth-based Robot Navigation
- Amaury Depierre, Liming Chen and Emmanuel Dellandréa
Optimizing Correlated Graspability Score and Grasp Regression for Better Grasp Prediction
- Nicolas Cuperlier, Yoan Espada, Philippe Gaussier and Olivier Romain (ETIS), and Guillaume Bresson (VEDECOM)
Robotic navigation: a neurorobotic approach
- Maxime Petit, Amaury Depierre, Xiaofang Wang, Emmanuel Dellandrea and Liming Chen
Efficient Bayesian Optimization of Black-Box component for Developmental Robotics with Meta-Learning and Transfer Learning based on Long-Term Memory
- Martin Brossard, Silvère Bonnabel and Axel Barrau
Calibrating an Inertial Measurement Unit with Deep Learning
- Matthieu Grard, Emmanuel Dellandrea et Liming Chen
Learning to localize unoccluded object instances for robotic picking
- Andrea De Maio and Simon Lacroix
Simultaneously Learning Corrections and Error Models for Geometry-based Visual Odometry Methods
- Nicolas Duminy and Sao Mai Nguyen
Discovering and Exploiting the Task Hierarchy through Intrinsic Motivation
15:40 "Posters/Breakout rooms" session: individual presentations of the papers in separate rooms

16:30 End of the day


Abstracts of the contributions

Josef Sivic

Weakly supervised visual recognition: from Internet Images towards machines that see

The current successes in visual recognition are, in large part, due to a combination of learnable visual representations, supervised machine learning techniques and large-scale, carefully annotated image collections. In this talk, I will argue that in order to build machines that understand the changing visual world around us, the next challenges lie in developing visual representations that generalize to never-before-seen conditions and are learnable in a weakly supervised manner, i.e. from readily available but noisy and only partially annotated data. I will show examples of our work in this direction, with applications in visual localization across changing conditions, finding visual correspondence, and learning from instructional videos how people manipulate objects.

Justus Piater

Conditional Neural Movement Primitives

Conditional Neural Movement Primitives (CNMP) constitute a novel framework for robot programming by demonstration based on Conditional Neural Processes (CNP). Like Bayesian methods such as Gaussian Processes (GP), CNP learn how target distributions depend on data, and can be conditioned on specific data points to infer new target distributions at test time. Unlike GP, which are expensive to train and scale poorly to high dimensions, CNP are neural networks and are trained by gradient descent. CNMP leverage CNP to represent motion trajectories that can be conditioned, at test time, on task parameters such as goal locations, via-points, and/or force readings. Moreover, CNMP are conditioned on sensor readings during execution, resulting in robust, reactive behavior. This talk will present an overview of how CNMP work and how they can be used in various robot applications.
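
To make the conditioning mechanism concrete, here is a minimal sketch of a Conditional Neural Process in PyTorch. This is our illustration only, with hypothetical architecture, names and dimensions, not the CNMP authors' implementation:

# Minimal Conditional Neural Process sketch (hypothetical illustration).
import torch
import torch.nn as nn

class MiniCNP(nn.Module):
    def __init__(self, x_dim=1, y_dim=1, r_dim=64):
        super().__init__()
        # Encoder: maps each (x, y) context pair to a representation r_i.
        self.encoder = nn.Sequential(
            nn.Linear(x_dim + y_dim, r_dim), nn.ReLU(),
            nn.Linear(r_dim, r_dim))
        # Decoder: maps (aggregated r, target x) to a predictive Gaussian.
        self.decoder = nn.Sequential(
            nn.Linear(r_dim + x_dim, r_dim), nn.ReLU(),
            nn.Linear(r_dim, 2 * y_dim))

    def forward(self, x_ctx, y_ctx, x_tgt):
        # Permutation-invariant aggregation of the encoded context points.
        r = self.encoder(torch.cat([x_ctx, y_ctx], dim=-1)).mean(dim=0)
        out = self.decoder(torch.cat([r.expand(x_tgt.shape[0], -1), x_tgt], dim=-1))
        mean, raw_std = out.chunk(2, dim=-1)
        return mean, 0.01 + nn.functional.softplus(raw_std)  # positive std

# Conditioning at test time: e.g. a trajectory constrained by a few via-points.
model = MiniCNP()
x_ctx, y_ctx = torch.rand(5, 1), torch.rand(5, 1)        # observed (t, position) pairs
x_tgt, y_tgt = torch.linspace(0, 1, 50).unsqueeze(-1), torch.rand(50, 1)
mean, std = model(x_ctx, y_ctx, x_tgt)
loss = -torch.distributions.Normal(mean, std).log_prob(y_tgt).mean()

Training minimizes this Gaussian negative log-likelihood over random context/target splits of demonstrated trajectories; at test time the same network is simply conditioned on the desired via-points or sensor readings.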

Andrea De Maio and Simon Lacroix

Simultaneously Learning Corrections and Error Models for Geometry-based Visual Odometry Methods

This work fosters the idea that deep learning methods can be used to complement classical visual odometry pipelines to improve their accuracy and to associate uncertainty models to their estimations.

We show that the biases inherent to the visual odometry process can be faithfully learned and compensated for, and that a learning architecture associated to a probabilistic loss function can jointly estimate a full covariance matrix of the residual errors, defining an error model capturing the heteroscedasticity of the process.
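
A minimal sketch of such a probabilistic loss, in our own hypothetical formulation rather than the authors' code: the network predicts a bias correction together with the entries of a lower-triangular Cholesky factor of the covariance, and the Gaussian negative log-likelihood is minimized:

# Hypothetical sketch: jointly learning bias corrections and a full covariance.
import torch

def gaussian_nll(residual, bias, chol_params):
    """residual, bias: (B, d); chol_params: (B, d*(d+1)//2) entries of a
    lower-triangular matrix L such that Sigma = L @ L.T."""
    B, d = residual.shape
    tril = torch.tril_indices(d, d)
    L = torch.zeros(B, d, d, dtype=residual.dtype)
    L[:, tril[0], tril[1]] = chol_params
    diag = torch.arange(d)
    L[:, diag, diag] = torch.exp(L[:, diag, diag])   # positive-definite Sigma
    err = (residual - bias).unsqueeze(-1)            # (B, d, 1)
    z = torch.linalg.solve_triangular(L, err, upper=False)
    # NLL per sample: 0.5 * ||L^{-1} err||^2 + log det L  (up to a constant).
    return (0.5 * z.pow(2).sum(dim=(1, 2))
            + torch.log(L[:, diag, diag]).sum(dim=1)).mean()

Minimizing this loss drives the bias toward the mean residual while the covariance adapts to the remaining input-dependent (heteroscedastic) errors, which is what allows the two quantities to be learned jointly.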

The joint learning of bias and uncertainty is beneficial as they are correlated quantities, influencing each other under a probabilistic framework, and can be used to serve precise purposes. Biases can be used to reduce estimation errors of the visual estimator, aiming at reducing its tracking error on long trajectories. Uncertainty information can be leveraged in different manners, from the fusion with multiple estimation processes to pose weighting in pose-graph optimization problems.

We present experiments on autonomous driving image sequences to demonstrate, using robust metrics, the improvement of position tracking for sparse visual odometry methods.

Matthieu Grard (Siléane, Ecole Centrale de Lyon, LIRIS)

Learning to localize unoccluded object instances for robotic picking

(IJCV)

Picking instances piled up in bulk one by one is a repetitive and tedious task required in many applications, such as car assembly, order processing or waste sorting. Yet, automating the visual localization of unoccluded instances for robotizing this task is still difficult due to the diversity of objects and occlusions, the absence of explicit object models, and the dearth of annotated images. State-of-the-art deep learning-based approaches generally split occlusion-aware instance segmentation into region-based segmentations by approximating instances as their bounding box. However, this approximation does not hold for dense object layouts. We therefore explore and compare alternative design patterns that improve the attention of deep encoder-decoder networks to unoccluded instances from a single image. We also propose synthetic images to pretrain such networks, thus easing their adaptation to novel conditions. The presentation will, hopefully, conclude with a live demonstration of the proposed synthetically trained network.

Martin Brossard, Silvère Bonnabel and Axel Barrau

Calibrating an Inertial Measurement Unit with Deep Learning

Inertial measurement units (IMUs) are sensors composed of accelerometers and gyroscopes. Coupled with cameras or LiDARs, they constitute a de facto standard for the localization of aerial robots and for augmented reality on smartphones and headsets. However, low-cost IMUs suffer from numerous defects, the most prominent being the high variance of the measurement noise and the presence of biases that are difficult to estimate. Robust and efficient use of these sensors requires calibration, both offline and in real time. We propose a deep-learning-based approach for denoising and calibrating gyroscopes and accelerometers, that is, for reducing the measurement noise and removing the biases. Our approach relies on: 1) a detailed model of low-cost inertial measurement units; and 2) modestly sized neural networks based on dilated convolutions which, unlike recurrent networks, can learn from only five minutes of data. The denoised IMU signals can be used as-is. Applied to the problem of estimating the orientation of a drone (see Figure 1), our method achieves an error of 1.2 deg/35 m (98 deg/35 m without correction) on the EuRoC dataset, and thus competes with the best vision-inertial approaches, even though our approach uses no visual information.
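
As a rough illustration of the kind of network described, here is a small dilated-convolution corrector sketch in PyTorch; the layer sizes and the residual formulation are our assumptions, not the paper's architecture:

# Hypothetical sketch of a small dilated-CNN IMU corrector (not the paper's code).
import torch
import torch.nn as nn

class IMUDenoiser(nn.Module):
    """Maps a window of raw 6-axis IMU samples to per-sample corrections."""
    def __init__(self, channels=6, width=32, dilations=(1, 2, 4, 8, 16)):
        super().__init__()
        layers, in_ch = [], channels
        for d in dilations:
            # Padding d with kernel 3 keeps the sequence length; dilation grows
            # the receptive field exponentially with depth, WaveNet-style.
            layers += [nn.Conv1d(in_ch, width, kernel_size=3, padding=d, dilation=d),
                       nn.GELU()]
            in_ch = width
        layers += [nn.Conv1d(width, channels, kernel_size=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, imu):                    # imu: (batch, 6, time)
        # Corrected signal = raw signal + learned correction (bias/noise removal).
        return imu + self.net(imu)

x = torch.randn(1, 6, 1000)                    # 1000 samples of gyro + accel
print(IMUDenoiser()(x).shape)                  # torch.Size([1, 6, 1000])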

Maxime Petit, Amaury Depierre, Xiaofang Wang, Emmanuel Dellandrea and Liming Chen

Efficient Bayesian Optimization of Black-Box component for Developmental Robotics with Meta-Learning and Transfer Learning based on Long-Term Memory

In robotics, methods and software usually require the optimization of hyper-parameters to be efficient for specific tasks, for instance industrial bin-picking from homogeneous heaps of different objects. We present a developmental cognitive architecture based on a long-term memory (composed of episodic, semantic and procedural memories) and reasoning modules (Bayesian optimization and visual similarity) that allows a robot to use both transfer-learning and meta-learning mechanisms, increasing the efficiency of such noisy, continuous, constrained and expensive parameter optimizations.

On the one hand, the transfer-learning strategy can take advantage of past experiences (stored in the episodic and procedural memories) to warm-start the exploration with a set of hyper-parameters previously optimized for objects similar to the new, unknown one (stored in the semantic memory). On the other hand, the meta-learning algorithm shrinks the search space by using reduced parameter bounds computed from the best optimizations realized by the robot with similar objects.
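
A minimal sketch of the warm-start and bound-shrinking ideas, using scikit-optimize's ask/tell interface; every bound, number and function below is a hypothetical placeholder:

# Hypothetical warm-started Bayesian optimization sketch (scikit-optimize).
from skopt import Optimizer

# Search space for, e.g., 3 continuous grasping hyper-parameters.
# Meta-learning: bounds shrunk around the best past optima for similar objects.
dimensions = [(0.2, 0.8), (0.1, 0.5), (0.0, 1.0)]
opt = Optimizer(dimensions, base_estimator="GP")

# Transfer learning: replay evaluations retrieved from long-term memory
# (parameters previously optimized on a visually similar object).
past_params = [[0.45, 0.30, 0.62], [0.50, 0.28, 0.55]]
past_scores = [-0.81, -0.84]           # negated success rates (minimization)
opt.tell(past_params, past_scores)

def grasp_success_rate(params):
    """Placeholder for the expensive, noisy black-box evaluation
    (e.g. running the grasping software on a heap of parts)."""
    return -sum(params) / 3.0          # dummy objective for the sketch

best_x, best_y = None, float("inf")
for _ in range(30):                    # short optimization budget, as in the paper
    x = opt.ask()
    y = grasp_success_rate(x)
    opt.tell(x, y)
    if y < best_y:
        best_x, best_y = x, y
print("best parameters:", best_x, "expected success:", -best_y)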

As an example, the system has been used to optimize 9 continuous hyper-parameters of a professional software package (Kamido), both in simulation and with a real robot (an industrial Fanuc robotic arm), to grasp homogeneous heaps of objects. In simulation, we demonstrate the benefit of transfer and meta-learning based on visual similarity, as opposed to amnesic learning (i.e., learning from scratch every time). The method achieves good performance despite a very short optimization budget (either 68 or 30 iterations per run), and the TL and ML strategies consistently improve the expected score compared to amnesic optimizations. Moreover, with the real robot, we show that the method outperforms the manual optimization of a human expert, with less than 2 hours of training time needed to achieve more than 88% success.

Nicolas Cuperlier, Yoan Espada, Philippe Gaussier and Olivier Romain (ETIS), and Guillaume Bresson (VEDECOM)

Robotic navigation: a neurorobotic approach

This contribution will describe part of the work carried out in navigation by the neurocybernetic team of the ETIS laboratory focusing on some recent advances in place recognition and sensory-motor control.

Following a neurorobotics approach, our work takes inspiration from the mammalian brain to design neural control architectures embedded in mobile robots. The objective of this work is twofold: first, to improve our understanding of the neural processes involved in spatial cognition, and second, to propose new solutions for robotic navigation that allow robots to exhibit the robust and adaptive spatial behavior observed in their biological counterparts.

We will thus present a neural architecture for the navigation of mobile robots in which localization is achieved by modeling hippocampal place cells. Based on a model of hippocampal loops, this architecture allows the fusion of allothetic (vision, direction) and idiothetic (path integration) information via the learning of hippocampal place cells (combining visual place cells and entorhinal grid cells). Linking these hippocampal place cells with action (movement) makes it possible to learn sensori-motor elements in a proscriptive way so as to reproduce a given trajectory. From a robotic perspective, a major advantage of this bio-inspired model is its ability to replicate a fundamental characteristic of the hippocampus: one-shot learning of spatial information. Thus, unlike deep-network approaches, long training on large databases is not required and learning can occur online. We will show some robotic results obtained during the evaluation of this bio-inspired model in both indoor and outdoor environments, as well as some preliminary results of this model applied to a self-driving-car context.
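
As a toy illustration of one-shot, online place-cell recruitment (a drastic simplification of the architecture described above; the threshold and the representation are hypothetical):

# Toy sketch of one-shot place-cell recruitment (illustration only,
# not the ETIS architecture): a new cell is recruited whenever no existing
# cell responds strongly enough to the current sensory signature.
import numpy as np

class PlaceCellLayer:
    def __init__(self, threshold=0.9):
        self.prototypes = []           # one stored sensory signature per place cell
        self.threshold = threshold

    def activations(self, signature):
        """Similarities between a unit-norm signature and all stored cells."""
        if not self.prototypes:
            return np.zeros(0)
        return np.stack(self.prototypes) @ signature

    def step(self, signature):
        """Return the index of the winning cell, recruiting one if needed."""
        acts = self.activations(signature)
        if acts.size == 0 or acts.max() < self.threshold:
            self.prototypes.append(signature)   # one-shot, online recruitment
            return len(self.prototypes) - 1
        return int(acts.argmax())

Each prototype is stored after a single exposure, with no gradient-based training phase; this is the property contrasted above with deep-network approaches.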

Amaury Depierre, Liming Chen and Emmanuel Dellandréa

Optimizing Correlated Graspability Score and Grasp Regression for Better Grasp Prediction

Grasping objects is one of the most important abilities for a robot to master in order to interact with its environment. Current state-of-the-art methods rely on deep neural networks trained to jointly predict a graspability score together with a regression of an offset with respect to grasp reference parameters. However, these two predictions are performed independently, which can lead to a decrease of the graspability score when applying the predicted offset. Therefore, we extend a state-of-the-art neural network with a scorer which evaluates the graspability of a given position, and introduce a novel loss function which correlates the regression of grasp parameters with the graspability score. We show that this novel architecture improves performance from 82.13% for a state-of-the-art grasp detection network to 85.74% on the Jacquard dataset. When the learned model is transferred to a real robot, the proposed method correlating graspability and grasp regression achieves a 92.4% success rate, compared to 88.1% for the baseline trained without the correlation.
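
One possible reading of the proposed coupling, sketched with hypothetical names and an extra re-scoring pass (not the authors' exact formulation):

# Hypothetical sketch of a loss coupling grasp regression with graspability.
import torch
import torch.nn.functional as F

def correlated_grasp_loss(score_logits, pred_offset, gt_offset, gt_labels,
                          reference, scorer):
    """score_logits: (B,) graspability logits at the reference grasps;
    reference, pred_offset, gt_offset: (B, k) grasp parameters and offsets;
    gt_labels: (B,) float, 1 for graspable positions, 0 otherwise;
    scorer: module re-scoring a grasp AFTER the offset is applied (hypothetical)."""
    cls_loss = F.binary_cross_entropy_with_logits(score_logits, gt_labels)
    reg_loss = F.smooth_l1_loss(pred_offset, gt_offset)
    # Coupling term: the refined grasp should still be rated graspable, so the
    # regression cannot drift toward poses the scorer would reject.
    refined_logits = scorer(reference + pred_offset).squeeze(-1)
    couple_loss = F.binary_cross_entropy_with_logits(refined_logits, gt_labels)
    return cls_loss + reg_loss + couple_loss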

Thomas Chaffre, Julien Moras, Adrien Chan-Hon-Tong and Julien Marzat

Sim-to-Real Transfer with Incremental Environment Complexity for Reinforcement Learning of Depth-based Robot Navigation

Transferring learning-based models to the real world remains one of the hardest problems in model-free control theory. Due to the cost of data collection on a real robot and the limited sample efficiency of Deep Reinforcement Learning algorithms, models are usually trained in a simulator, which theoretically provides an infinite amount of data. Despite offering unbounded trial-and-error runs, the reality gap between simulation and the physical world brings little guarantee about the policy behavior in real operation. Depending on the problem, expensive real fine-tuning and/or a complex domain randomization strategy may be required to produce a relevant policy. In this work, a Soft Actor-Critic (SAC) training strategy using incremental environment complexity is proposed to drastically reduce the need for additional training in the real world. The application addressed is depth-based mapless navigation, where a mobile robot should reach a given waypoint in a cluttered environment with no prior mapping information. Experimental results in simulated and real environments are presented to assess quantitatively the efficiency of the proposed approach, which demonstrated a success rate twice as high as that of a naive strategy. Video: https://tinyurl.com/Copernic-sim2real-learning
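
A minimal sketch of training with incremental environment complexity, here written with stable-baselines3's SAC; the environment id and its obstacle_density parameter are hypothetical placeholders:

# Hypothetical sketch of SAC with incremental environment complexity
# (stable-baselines3; the paper's environments and schedule differ).
import gymnasium as gym
from stable_baselines3 import SAC

def make_env(level):
    """Placeholder: a simulated mapless-navigation env whose clutter
    density grows with `level` (hypothetical env id and kwarg)."""
    return gym.make("MaplessNav-v0", obstacle_density=0.1 * level)

model = SAC("MlpPolicy", make_env(0), verbose=0)
for level in range(1, 4):
    # Train at the current difficulty, then move the SAME policy and replay
    # experience to a harder environment instead of restarting from scratch.
    model.learn(total_timesteps=100_000, reset_num_timesteps=False)
    model.set_env(make_env(level))
model.learn(total_timesteps=100_000, reset_num_timesteps=False)
model.save("sac_sim2real_sketch")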

Frédéric Barbaresco

Machine Learning on Lie Groups for Rigid and Articulated Dynamical Systems: Souriau's Symplectic Model of Statistical Mechanics via Coadjoint Orbits

Lie groups are in common use in robotics [19], but still seem little used in machine learning [14]. We present a model stemming from Geometric Mechanics, developed by Jean-Marie Souriau in the framework of Statistical Mechanics [1,2,6-8], which makes it possible to define an invariant Fisher-type metric and statistical densities that are covariant under the action of the group. This new approach [3-5, 20] extends supervised and unsupervised machine learning to elements belonging to a group, or to elements belonging to a homogeneous manifold on which a group acts transitively. Other models, also drawing on the representation theory of Lie groups, are under study [9-11].
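
Schematically, and in our own notation (conventions and signs vary across the cited references), Souriau's covariant Gibbs density and the associated Fisher-type metric can be written as

% Souriau-style Gibbs density on the dual of the Lie algebra (notation ours):
% beta in g is the geometric (Planck) temperature, xi in g* the moment-map value.
p_\beta(\xi) = \exp\bigl(-\langle \xi, \beta \rangle - \Phi(\beta)\bigr),
\qquad
\Phi(\beta) = \log \int_{\mathfrak{g}^*} e^{-\langle \xi, \beta \rangle}\, \mathrm{d}\lambda(\xi),

with the Fisher-type metric obtained, as for any exponential family, as the Hessian of the log-partition function: I(\beta) = \partial^2 \Phi / \partial \beta^2 = \mathrm{Cov}(\xi).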

Nicolas Duminy and Sao Mai Nguyen

Discovering and Exploiting the Task Hierarchy through Intrinsic Motivation

Robot tasks can be of various types, can be hierarchical, and can change radically or even be created after the robot's deployment. The robot must cope with tasks of varying complexity, some requiring a single action, others a sequence of actions. One method for guiding this choice is intrinsic motivation: the robot is driven toward the most interesting areas of its environment so as to learn the most interesting skills. It is able to evaluate the complexity of the action needed to perform a task. When facing hierarchical tasks of different complexities, which can be achieved by a combination of simpler tasks, the robot uses a new way of acquiring skills by exploring the task hierarchy itself, combining its skills through combinations of tasks in order to acquire new, more complex ones.
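
As a toy sketch of progress-based intrinsic motivation (our illustration; the hierarchical task composition described above is not shown, and all parameters are hypothetical):

# Toy sketch of intrinsic motivation as learning progress (illustration only).
import random

class ProgressBasedSelector:
    """Selects the task whose competence currently improves the fastest."""
    def __init__(self, tasks, window=10):
        self.errors = {t: [] for t in tasks}   # recent errors per task
        self.window = window

    def record(self, task, error):
        self.errors[task].append(error)

    def progress(self, task):
        e = self.errors[task]
        if len(e) < 2 * self.window:
            return float("inf")                 # explore unseen tasks first
        old = sum(e[-2 * self.window:-self.window]) / self.window
        new = sum(e[-self.window:]) / self.window
        return old - new                        # error decrease = progress

    def choose(self):
        # Occasionally explore at random, otherwise pick the max-progress task.
        tasks = list(self.errors)
        if random.random() < 0.2:
            return random.choice(tasks)
        return max(tasks, key=self.progress)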

Date: 2020-06-22

Location: Videoconference


Scientific themes:
B - Image and Vision
T - Learning for signal and image analysis


Access the report of this meeting.
