
Announcement

March 8, 2022

Graph learning for EEG signals classification


Category: PhD student


Graphs offer a structured and meaningful way to represent information and the links between observed variables. A fundamental problem is therefore to estimate the graph topology from the data itself, which is referred to as graph learning. From a statistical point of view, this structure can be linked to the support of the precision matrix (inverse covariance matrix), which allows us to rely on robust statistics approaches to perform this task. Once the graph topology is unveiled, this knowledge can be leveraged to perform processing tasks such as data clustering or signal filtering, which will broadly be referred to as graph signal processing (GSP).
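As a rough illustration of this statistical viewpoint (a toy sketch, not the estimators developed in the thesis), the snippet below fits a sparse precision matrix with the graphical lasso and reads the graph topology off its support; the regularization level `alpha`, the threshold, and the random data are arbitrary choices for illustration.

```python
# Toy graph learning via the support of a sparse precision matrix (graphical lasso).
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 8))          # 500 samples of an 8-variable signal (toy data)

model = GraphicalLasso(alpha=0.1).fit(X)   # sparse inverse-covariance estimation
precision = model.precision_

adjacency = (np.abs(precision) > 1e-6).astype(int)  # support of the precision matrix
np.fill_diagonal(adjacency, 0)                      # no self-loops
print(adjacency)
```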

In this context, the spectral decomposition (EVD) of the Laplacian and/or adjacency matrices of the graph plays a central role, as their eigenvalues naturally encode topological properties of the graph such as symmetries or the number of clusters in the data (k components). Moreover, the eigenvectors of the Laplacian matrix form a Fourier basis of the graph, which is instrumental in GSP (extending the notions of frequency, filtering, and sub-sampling to graphs), as well as in graph convolutional networks and graph dimension reduction. Interestingly, structural properties of this basis (such as sparsity or density) can also be directly linked to the graph topology.
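To make the role of the Laplacian spectrum concrete, here is a minimal, self-contained sketch (toy graph and signal chosen only for illustration): the multiplicity of the zero eigenvalue counts the connected components, and the eigenvectors serve as a graph Fourier basis onto which a graph signal can be projected.

```python
# Laplacian EVD: zero eigenvalues count components; eigenvectors give a graph Fourier basis.
import numpy as np

# Toy graph with two connected components: {0, 1, 2} and {3, 4}
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 0, 0],
              [0, 0, 0, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A            # combinatorial Laplacian L = D - A

eigvals, eigvecs = np.linalg.eigh(L)      # EVD of the (symmetric) Laplacian
n_components = np.sum(np.isclose(eigvals, 0.0))
print("zero eigenvalues (= number of components):", n_components)

# The eigenvectors act as a graph Fourier basis: project a graph signal onto them.
x = np.array([1.0, 2.0, 1.5, -1.0, -0.5])  # one value per node
x_hat = eigvecs.T @ x                      # graph Fourier transform of x
print("spectral coefficients:", x_hat)
```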

This thesis thus aims to tackle current problems related to graph learning and its applications in a unified way, centered on the spectral decomposition of the Laplacian and/or adjacency matrices of the graph. The central objective of this project is to model graph structures (distributions on spectral parameters) and leverage this formalism in two main directions:

● #1 - Improve graph learning processes by directly learning structured spectral decompositions from the data. This objective calls for the derivation of new, efficient learning algorithms that address the aforementioned challenges (robustness and structural constraints). In this direction, we will notably rely on tools from robust statistics, majorization-minimization, and Riemannian optimization frameworks.

● #2 - Handle collections of graphs in order to compute structured graph barycenters, compress graph representations, and classify/cluster data using their graph as the main feature. In this scope, we aim to apply the proposed algorithms to the problem of EEG signal classification (notably relying on the MOABB benchmarks), along the lines of the toy sketch given after this list.
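The following is a hypothetical end-to-end sketch of objective #2 on synthetic data, not the thesis pipeline: each "trial" is summarized by the eigenvalues of its sample-covariance-based graph descriptor, and those spectral features feed a standard classifier. Actual experiments would instead use EEG trials from the MOABB benchmarks and the structured graph estimators developed in the thesis.

```python
# Toy classification of multichannel "trials" from spectral (eigenvalue) features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_channels, n_times = 60, 8, 256

def spectral_features(trial):
    """Eigenvalues of the trial's sample covariance, used as a crude graph descriptor."""
    cov = np.cov(trial)                      # n_channels x n_channels
    return np.sort(np.linalg.eigvalsh(cov))

# Synthetic two-class data: class 1 has a stronger coupling between the first two channels.
X, y = [], []
for i in range(n_trials):
    label = i % 2
    trial = rng.standard_normal((n_channels, n_times))
    if label == 1:
        trial[1] += 0.8 * trial[0]           # induce correlation -> different spectrum
    X.append(spectral_features(trial))
    y.append(label)
X, y = np.array(X), np.array(y)

print("CV accuracy:", cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean())
```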

Contacts
Arnaud Breloy, abreloy@parisnanterre.fr, https://abreloy.github.io/
LEME – Université Paris Nanterre, Campus de Ville d'Avray, 50, rue de Sèvres, 92410 Ville d’Avray.
Florent Bouchard, florent.bouchard@centralesupelec.fr, https://sites.google.com/view/florentbouchard/home
L2S – CentraleSupélec, bât. Bréguet, 3, rue Joliot Curie, 91190 Gif-sur-Yvette.

References:

[1] Barachant, Bonnet, Congedo, Jutten, "Multiclass brain-computer interface classification by Riemannian geometry," IEEE Transactions on Biomedical Engineering, vol. 59, no. 4, pp. 920-928, 2011.
[2] Bouchard, Breloy, Ginolhac, Renaux, Pascal, "A Riemannian framework for low-rank structured elliptical models," IEEE Transactions on Signal Processing, vol. 69, pp. 1185-1199, 2021.
[3] Breloy, Kumar, Sun, Palomar, "Majorization-minimization on the Stiefel manifold with application to robust sparse PCA," IEEE Transactions on Signal Processing, vol. 69, pp. 1507-1520, 2021.
[4] Kumar, Ying, de Miranda Cardoso, Palomar, "A unified framework for structured graph learning via spectral constraints," Journal of Machine Learning Research, vol. 21, no. 22, pp. 1-60, 2020.
[5] Jayaram, Barachant, "MOABB: trustworthy algorithm benchmarking for BCIs," Journal of Neural Engineering, 2018.
