
Multimodal machine learning and information fusion (2nd edition)

We remind you that, in order to guarantee access to the meeting rooms for all registrants, registration for meetings is free of charge but mandatory.

Registration for this meeting is closed.

Registrations

124 GdR ISIS members and 63 non-members are registered for this meeting.
Room capacity: 250 people.

Announcement

Connection link for the meeting:

https://zoom.us/j/93445614388?pwd=WDhJcWVZQ3hxMkNxZ0dPWlg0eHFKUT09
Meeting ID: 934 4561 4388
Passcode: 934977

Following the "Multimodal machine learning and information fusion" day of 27/05/2021, we are organizing a second edition on this theme. Information fusion approaches are increasingly used in industrial and medical applications, where there is a real need to take several types of information into account simultaneously, including information provided by an expert. Fusion systems are becoming complex, as they involve every stage of the information processing chain (from extraction to decision). They have many parameters and entail significant computation time. They are also not easy for end users to operate and tune. The goal of this day is to bring together researchers to present and discuss recent developments in the design of information fusion systems.

Industrial and medical applications increasingly call for this type of system, and experts want a cooperative approach they can trust.

Call for contributions

The program will include contributed talks, for which a call for contributions is open. If you would like to present your work, please send your proposal (title, authors, affiliation, and a 5-10 line abstract) to the organizers by January 5, 2022 at the latest.

Two invited speakers

Program

9:00: Belief function theory and machine learning
Thierry Denoeux, Heudiasyc, UTC

10:00: Deep PET/CT fusion with Dempster-Shafer theory for lymphoma segmentation
Ling Huang, Su Ruan, Thierry Denoeux, Heudiasyc Lab, UTC

10:20: Learning Multi-View Aggregation In the Wild for Large-Scale 3D Semantic Segmentation
Damien Robert, Bruno Vallet, Loïc Landrieu, CSAI Lab and LASTIG

10:40: Representation learning and fusion of multi-modal data for the statistical analysis of medical imaging populations
Nicolas Duchateau, CREATIS Lab, Lyon

11:40: Hyperspectral super-resolution accounting for spectral variability: coupled tensor LL1-based recovery and blind unmixing of the unknown super-resolution image
Clémence Prévost, Ricardo A. Borsoi, Konstantin Usevich, David Brie, José C. M. Bermudez, Cédric Richard.

12:00: An AO-ADMM approach to constraining PARAFAC2 on all modes
Marie Roald, Carla Schenker, Rasmus Bro, J. E. Cohen, and Evrim Acar, SimulaMet, Oslo

12:20: Fusion of heterogeneous feature maps extracted by an ensemble of convolutional networks for an implementation on embedded systems
Guillaume Heller (1,2), Eric Perrin (1), Valeriu Vrabie (1), Cédric Dusart (2), Solen Le Roux (2)

Abstracts of the contributions

Belief function theory and machine learning

Thierry Denoeux
Université de technologie de Compiègne, CNRS, Heudiasyc
Institut universitaire de France

Dempster-Shafer theory is based on modeling elementary items of information with belief functions and on combining them with different operators, chosen according to assumptions about the sources. It is therefore particularly well suited to information fusion. In machine learning, belief function theory makes it possible to model uncertainty in the data (partially supervised learning) and in the predictions ("evidential" classifiers). The evidential neural network model, based on distances to prototypes, has recently been extended to deep learning. Moreover, recent results allow the computations performed in the "softmax" output layer of a neural network to be interpreted as the addition of weights of evidence, one of the key concepts of belief function theory. This point of view opens the way to new learning algorithms and new approaches for combining the outputs of several classifiers. In this talk, I will discuss these recent results and attempt to outline some perspectives.
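
To make the combination operators mentioned above concrete, here is a minimal Python sketch of Dempster's rule of combination for two mass functions over a small frame of discernment (illustrative code, not from the talk; the representation of focal sets as frozensets is my own choice):

def dempster_combine(m1, m2):
    """Combine two mass functions with Dempster's rule.
    m1, m2: dicts mapping frozenset (focal set) -> mass, masses sum to 1."""
    combined = {}
    conflict = 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2   # product mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting sources")
    # Dempster's rule renormalizes the conflicting mass away
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two sources of evidence over the frame {a, b}
m1 = {frozenset("a"): 0.6, frozenset("ab"): 0.4}
m2 = {frozenset("b"): 0.3, frozenset("ab"): 0.7}
print(dempster_combine(m1, m2))
# {a}: ~0.512, {b}: ~0.146, {a, b}: ~0.341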

Deep PET/CT fusion with Dempster-Shafer theory for lymphoma segmentation

Ling Huang (1,3), Thierry Denoeux (1,2), and Su Ruan (3)
1: Université de technologie de Compiègne, CNRS, Heudiasyc, Compiègne, France
2: Institut universitaire de France, Paris, France
3: LITIS, University of Rouen Normandy, Rouen, France

Designing automatic segmentation methods capable of effectively exploiting the information from PET and CT, as well as fusing information from multiple sources, remains a challenge. In this presentation, we show an application of Dempster-Shafer theory (DST) to the 3D lymphoma segmentation task, focusing on how DST can be used to fuse feature-level evidence and to quantify information uncertainty. An automatic evidential segmentation method based on DST and deep learning is proposed. The architecture is composed of a deep feature extraction module and an evidential segmentation (ES) layer. The feature extraction module uses an encoder-decoder framework to extract semantic feature vectors from the 3D inputs. The ES layer then maps the deep features to mass functions and fuses them to output both segmentation results and segmentation uncertainty. We hope this presentation will encourage further exploration of DST in the medical domain.
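
As a rough illustration of how an ES layer can work, here is a sketch of the distance-to-prototype evidential classifier mentioned in the first talk, applied to one voxel (toy NumPy code with hypothetical parameter names, not the authors' exact architecture):

import numpy as np

def simple_mass(feature, prototype, k, gamma, alpha, n_classes):
    """Mass function induced by one prototype of class k: the support for
    {w_k} decays with the squared distance to the prototype, and the
    remaining mass goes to the whole frame Omega (ignorance)."""
    s = alpha * np.exp(-gamma * np.sum((feature - prototype) ** 2))
    m = np.zeros(n_classes + 1)       # [m({w_1}), ..., m({w_K}), m(Omega)]
    m[k], m[-1] = s, 1.0 - s
    return m

def combine(m1, m2):
    """Dempster's rule for mass functions whose focal sets are
    singletons and Omega; the conflict is renormalized away."""
    new = np.zeros_like(m1)
    new[:-1] = m1[:-1] * m2[:-1] + m1[:-1] * m2[-1] + m1[-1] * m2[:-1]
    new[-1] = m1[-1] * m2[-1]
    return new / new.sum()

# One voxel's feature vector against two prototypes (tumour=0, background=1)
feat = np.array([0.2, 0.8])
prototypes = [np.array([0.0, 1.0]), np.array([1.0, 0.0])]
m = np.array([0.0, 0.0, 1.0])         # start from total ignorance
for p, k in zip(prototypes, (0, 1)):
    m = combine(m, simple_mass(feat, p, k, gamma=1.0, alpha=0.9, n_classes=2))
print(m)  # [mass on tumour, mass on background, mass on Omega (uncertainty)]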

Learning Multi-View Aggregation In the Wild for Large-Scale 3D Semantic Segmentation

Damien Robert (1,2), Bruno Vallet (2), Loïc Landrieu (2)
1: CSAI Lab, ENGIE CRIGEN, Stains
2: LASTIG, Univ. Gustave Eiffel, IGN-ENSG

Recent works on 3D semantic segmentation propose to exploit the synergy between images and point clouds by processing each modality with a dedicated network and projecting learned 2D features onto 3D points. Merging large-scale point clouds and images raises several challenges, such as constructing a mapping between points and pixels and aggregating features between multiple views. Current methods rely on mesh reconstruction or specialized sensors to recover occlusions, and use heuristics to select and aggregate images. In contrast, we propose an end-to-end trainable multi-view aggregation model leveraging the viewing conditions of 3D points to merge features from images taken at arbitrary positions. Our method can combine standard 2D and 3D networks and outperforms both 3D models operating on colorized point clouds and hybrid 2D/3D networks without requiring colorization, meshing, or true depth maps.
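
A minimal sketch of the idea of condition-driven view aggregation for one 3D point follows (toy NumPy code with made-up sizes; the paper's actual model is a trained deep network, and the scoring weights below stand in for learned parameters):

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def aggregate_views(view_feats, view_conds, w):
    """Attention-style aggregation of per-image features for one 3D point.
    view_feats: (n_views, d) 2D features projected onto the point
    view_conds: (n_views, c) viewing conditions (e.g. distance, angle)
    w:          (c,)         scoring weights (learned in the real model)"""
    scores = view_conds @ w            # one quality score per view
    attn = softmax(scores)             # normalized across the views
    return attn @ view_feats           # weighted average of the 2D features

rng = np.random.default_rng(0)
feats = rng.normal(size=(3, 8))        # this point is seen in 3 images
conds = rng.normal(size=(3, 2))        # e.g. (viewing distance, incidence)
point_feat = aggregate_views(feats, conds, w=np.array([0.5, -1.0]))
print(point_feat.shape)                # (8,) descriptor passed to the 3D net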

Hyperspectral super-resolution accounting for spectral variability: coupled tensor LL1-based recovery and blind unmixing of the unknown super-resolution image

Clémence Prévost (1), Ricardo A. Borsoi (3,4), Konstantin Usevich (2), David Brie (2), José C. M. Bermudez (3), Cédric Richard (4)
1: Univ. Lille, CNRS, Centrale Lille, UMR 9189 CRIStAL, F-59000 Lille, France
2: Centre de Recherche en Automatique de Nancy (CRAN), Université de Lorraine, CNRS, Vandœuvre-lès-Nancy, France
3: Department of Electrical Engineering, Federal University of Santa Catarina (DEE-UFSC), Florianópolis, SC, Brazil
4: Université Côte d'Azur, Nice, France, Lagrange Laboratory (CNRS, OCA)

Hyperspectral devices sample the electromagnetic spectrum into hundreds of wavelengths; however, due to sensor limitations, they produce hyperspectral images (HSI) with low spatial resolution. On the other hand, multispectral images (MSI) have high spatial resolution but a small number of spectral bands. Hyperspectral super-resolution (HSR) aims at recovering a super-resolution image (SRI) with both high spatial and high spectral resolution from an HSI and an MSI. It can then be exploited in unmixing tasks.

We propose to jointly solve the HSR problem and the unmixing problem of the underlying super-resolution image, using the tensor LL1 block-term decomposition. We consider a spectral variability phenomenon occurring between the observations. Exact recovery conditions for the SRI and its non-negative LL1 factors are provided. We propose two algorithms, an unconstrained one and one subject to non-negativity constraints, to solve the problems at hand. In this presentation, we will showcase the performance of our approach on the fusion and unmixing parts of the problem, using a set of real images.
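
For readers less familiar with the setting, the coupled observation model underlying HSR can be written, in simplified form and with notation of my own choosing, as

\mathcal{Y}_H \approx \mathcal{X} \times_1 \mathbf{P}_1 \times_2 \mathbf{P}_2, \qquad
\mathcal{Y}_M \approx \mathcal{X} \times_3 \mathbf{P}_3, \qquad
\mathcal{X} = \sum_{r=1}^{R} \left( \mathbf{A}_r \mathbf{B}_r^{\top} \right) \circ \mathbf{c}_r,

where \mathcal{X} is the unknown SRI, \mathbf{P}_1 and \mathbf{P}_2 model the spatial degradation producing the HSI \mathcal{Y}_H, \mathbf{P}_3 models the spectral degradation producing the MSI \mathcal{Y}_M, and the last equation is the LL1 block-term decomposition, with rank-L_r spatial factors \mathbf{A}_r \mathbf{B}_r^{\top} (abundance maps) and spectral signatures \mathbf{c}_r. The spectral variability considered in the talk additionally allows the spectral factors of the two observations to differ.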

Representation learning and fusion of multi-modal data for the statistical analysis of medical imaging populations

Nicolas Duchateau, CREATIS Lab, Lyon

In medical imaging, the data descriptors can consist of the original images, or of more elaborate characteristics of an organ such as its shape, its deformation over time, etc. Most of them are high-dimensional and live in a non-linear space, which should be taken into account for the statistical analysis of a whole population. Several representation learning techniques, such as manifold learning or variational auto-encoders, allow estimating a latent space that encodes the input data and is statistically relevant for comparing individuals or subgroups. However, developing computer-aided diagnosis and prognosis systems in the medical imaging context requires handling multiple high-dimensional and heterogeneous data descriptors, and their potential interactions. In this talk, I will provide an overview of some representation learning techniques relevant to the fusion of multi-modal data, aiming at the statistical analysis of medical imaging populations, while keeping in mind the clinical problem to be addressed and the fact that the estimated data representations need to be interpreted and trusted by medical doctors.
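
To give one concrete anchor for the techniques mentioned above: a variational auto-encoder estimates such a latent space by maximizing the evidence lower bound (the textbook formulation, not specific to this talk),

\mathcal{L}(\theta, \phi; x) = \mathbb{E}_{q_\phi(z \mid x)} \left[ \log p_\theta(x \mid z) \right] - \mathrm{KL} \left( q_\phi(z \mid x) \,\|\, p(z) \right),

where the encoder q_\phi maps a high-dimensional descriptor x (an image, a shape, a deformation field) to a low-dimensional latent code z in which individuals and subgroups can be compared statistically.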

An AO-ADMM approach to constraining PARAFAC2 on all modes

Marie Roald (SimulaMet, Oslo), Carla Schenker (SimulaMet, Oslo), Rasmus Bro (University of Copenhagen), J. E. Cohen (CREATIS, Lyon), and Evrim Acar (SimulaMet, Oslo)

Analyzing multi-way measurements with variations across one mode of the dataset is a challenge in various fields, including data mining, neuroscience and chemometrics. For example, measurements may evolve over time or have unaligned time profiles. The PARAFAC2 model has been successfully used to analyze such data by allowing the underlying factor matrices in one mode (i.e., the evolving mode) to change across slices. The traditional approach to fitting a PARAFAC2 model is an alternating least squares-based algorithm, which handles the constant cross-product constraint of the PARAFAC2 model by implicitly estimating the evolving factor matrices. This approach makes imposing regularization on these factor matrices challenging, and there is currently no algorithm to flexibly impose such regularization with general penalty functions and hard constraints. To address this challenge and avoid the implicit estimation, we propose an algorithm for fitting PARAFAC2 based on alternating optimization with the alternating direction method of multipliers (AO-ADMM).

With numerical experiments on simulated data, we show that the proposed PARAFAC2 AO-ADMM approach allows for flexible constraints, recovers the underlying patterns accurately, and is computationally efficient compared to the state-of-the-art. We also apply our model to a real-world chromatography dataset, and show that constraining the evolving mode improves the interpretability of the extracted patterns.
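
For reference, the PARAFAC2 model factorizes each slice \mathbf{X}_k of the data as (standard formulation)

\mathbf{X}_k \approx \mathbf{B}_k \, \mathrm{diag}(\mathbf{c}_k) \, \mathbf{A}^{\top}, \qquad \mathbf{B}_k^{\top} \mathbf{B}_k = \boldsymbol{\Phi} \quad \text{for all } k,

where the evolving factor matrices \mathbf{B}_k may change across slices but must share a constant cross-product \boldsymbol{\Phi}. Classical ALS enforces this constraint implicitly through the reparameterization \mathbf{B}_k = \mathbf{P}_k \mathbf{B} with column-orthonormal \mathbf{P}_k, which is precisely what makes direct regularization of the \mathbf{B}_k difficult and what the AO-ADMM formulation avoids.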

This work was submitted to the SIAM Journal on Mathematics of Data Science (SIMODS) in early October 2021; a preprint is available on arXiv: https://arxiv.org/abs/2110.01278

Fusion of heterogeneous feature maps extracted by an ensemble of convolutional networks for an implementation on embedded systems

Guillaume Heller (1,2), Eric Perrin (1), Valeriu Vrabie (1), Cédric Dusart (2), Solen Le Roux (2)
(1) Université de Reims Champagne Ardenne, CReSTIC EA 3804, 51097 Reims, France
(2) Segula

The use of ensembles of models to solve various problems has become increasingly popular. However, these solutions require significant resources, which can be problematic at the inference stage. We propose a solution that exploits the information extracted by the first layers of different models, with different architectures or fed with different representations of the inputs. The feature maps are combined through transformation blocks, capable of handling differences in the size and number of feature maps, and re-injected into a compact network. We thus benefit from the advantages of ensemble learning methods while adding only a few layers to a network dedicated to inference. Experimental results show that the performance of the classic compact architecture is significantly exceeded without unduly increasing the inference time.
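
To give a rough idea of what such a transformation block can look like, here is a toy NumPy sketch: nearest-neighbour spatial alignment followed by a 1x1 convolution, i.e. a channel projection (the sizes and weights below are made up, and the authors' actual blocks are learned end to end):

import numpy as np

def transformation_block(fmap, out_hw, w):
    """Align one feature map (C, H, W) to a target spatial size by
    nearest-neighbour resizing, then project its channels with a
    1x1 convolution, i.e. a (C_out, C) matrix applied pixelwise."""
    c, h, wd = fmap.shape
    rows = np.arange(out_hw[0]) * h // out_hw[0]
    cols = np.arange(out_hw[1]) * wd // out_hw[1]
    resized = fmap[:, rows][:, :, cols]               # (C, H', W')
    return np.tensordot(w, resized, axes=([1], [0]))  # (C_out, H', W')

# Feature maps from two backbones with different sizes and channel counts
fa, fb = np.random.rand(16, 32, 32), np.random.rand(24, 17, 17)
aligned = [transformation_block(fa, (28, 28), np.random.rand(8, 16)),
           transformation_block(fb, (28, 28), np.random.rand(8, 24))]
fused = np.concatenate(aligned, axis=0)  # (16, 28, 28), fed to the compact net
print(fused.shape)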

Date: 2022-01-19

Location: Videoconference (via Zoom)


Scientific themes:
B - Image and Vision


Access the minutes of this meeting.
