


Tensor Decompositions

We remind you that, in order to guarantee access to the meeting rooms for all registrants, registration for meetings is free but mandatory.

Registration for this meeting is closed.


62 members of the GdR ISIS and 26 non-members of the GdR are registered for this meeting.
Room capacity: 95 people.



Announcement:

Given the current situation, the meeting will be held by videoconference. However, for technical reasons related to the number of simultaneous connections, registration for the meeting is free but mandatory. The connection credentials will be emailed to registrants the day before or the morning of the meeting.

Tensor Decompositions

Tensor decompositions are tools that are particularly well suited to representing and analyzing multidimensional data. They have numerous applications in signal and image processing as well as in statistical learning, and they offer a relevant theoretical framework for developing generic techniques for high-dimensional data processing, backed by theoretical guarantees and producing easily interpretable results.

The goal of this one-day meeting is to present recent advances in the development of tensor methods for learning, data fusion, approximation of nonlinear functions, and related topics.
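As a concrete illustration of the rank-one building blocks behind these methods, here is a minimal NumPy sketch of the canonical polyadic model; the dimensions and the rank are arbitrary choices for this example, not tied to any of the talks below.

```python
import numpy as np

# A rank-R tensor is a sum of R rank-one terms a_r ∘ b_r ∘ c_r:
# T[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r].
rng = np.random.default_rng(0)
I, J, K, R = 4, 5, 6, 3
A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))

# Assemble the full tensor from its factor matrices.
T = np.einsum('ir,jr,kr->ijk', A, B, C)

# Unfolding T along mode 1 gives A times a Khatri-Rao product,
# so the matrix rank of the unfolding is bounded by the tensor rank R.
T1 = T.reshape(I, J * K)
print(np.linalg.matrix_rank(T1))  # at most R = 3
```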

Call for contributions

Those wishing to present their work are invited to notify the two organizers before May 15, 2020. Priority will be given to the work of young researchers.



14h00-15h00 : Nikos Sidiropoulos, Louis T. Rader Professor and Chair, Dept. of ECE, University of Virginia

Canonical Identification: A Principled Alternative to Neural Networks

15h00-15h50 : Konstantin Usevich, CNRS, Université de Lorraine, CRAN

Tensor rank, X-rank, and factorization of multivariate functions

15h50-16h20 : Christo Kurisummoottil Thomas, PhD Student (Supervisor: Prof. Dirk Slock), Eurecom, Sophia Antipolis, France.

Approximate inference based static and dynamic sparse Bayesian learning with Kronecker structured dictionaries

16h20-16h40 : Abdelhak Boudehane, CentraleSupelec

Divide-And-Conquer Strategies for High-Order Tensor/Matrix Factorization

Abstracts of the contributions

Canonical Identification: A Principled Alternative to Neural Networks

Nikos Sidiropoulos, Louis T. Rader Professor and Chair, Dept. of ECE, University of Virginia

Abstract: Deep neural networks are currently the most popular method for learning to mimic the input-output relationship of a generic nonlinear system, as they have proven to be very effective in approximating complex, highly nonlinear functions. However, we still don't understand why neural networks work as well as they often do, choosing the right architecture is often an art, and, at the end of the day, the results are hard to interpret. All this goes against the very foundation of our education: to think and design starting from basic principles. In this talk, I will discuss a principled alternative to neural networks -- one that is readily understood by engineers, and a posteriori may even seem profound. This is based on tensor principal components -- or, more precisely, low-rank tensor approximation, but used in a very different way, to model a nonlinear function of any nonlinearity order. The approach has many advantages: it is 'universal', intuitive, interpretable, comes with theoretical guarantees, and it even works with incomplete input data. The effectiveness of the approach is illustrated using standard benchmark datasets, and a challenging student grade prediction task.
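To fix ideas, a low-rank canonical model of a nonlinear function with discretized inputs can be sketched as follows. This is a hypothetical NumPy illustration, not the speaker's implementation: the sizes, the rank `R`, and the `predict` helper are arbitrary choices made for this example.

```python
import numpy as np

# Model a function of N discretized inputs by a rank-R canonical form:
# f(i_1, ..., i_N) ≈ sum_r U1[i_1, r] * U2[i_2, r] * ... * UN[i_N, r].
rng = np.random.default_rng(1)
sizes, R = [5, 6, 7], 2                 # alphabet size per input, model rank
U = [rng.standard_normal((s, R)) for s in sizes]

def predict(indices):
    """Evaluate the rank-R model at one multi-index (i_1, ..., i_N)."""
    factors = np.ones(R)
    for Un, i in zip(U, indices):
        factors *= Un[i]                # accumulate the r-th product term
    return factors.sum()

# A missing input could be handled by summing (marginalizing) the
# corresponding factor matrix over its rows instead of indexing it,
# which is one way to read the "incomplete input data" advantage above.
print(predict((0, 3, 2)))
```

Note that the model stores only sum(sizes) * R parameters instead of the product of the input sizes, which is what makes the low-rank form tractable.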

Bio: Nikos Sidiropoulos earned his Ph.D. in Electrical Engineering from the University of Maryland, College Park, in 1992. He has served on the faculty of the University of Virginia, University of Minnesota, and the Technical University of Crete, Greece, prior to his current appointment as Chair of ECE at UVA. His research interests are in signal processing, communications, optimization, tensor decomposition, and factor analysis, with applications in machine learning and communications. He received the NSF/CAREER award in 1998, the IEEE Signal Processing Society (SPS) Best Paper Award in 2001, 2007, and 2011, served as IEEE SPS Distinguished Lecturer (2008-2009), and as Vice President - Membership of IEEE SPS (2017-2019). He received the 2010 IEEE Signal Processing Society Meritorious Service Award, and the 2013 Distinguished Alumni Award from the University of Maryland, Dept. of ECE. He is a Fellow of IEEE (2009) and a Fellow of EURASIP (2014).

Tensor rank, X-rank, and factorization of multivariate functions

Konstantin Usevich, CNRS, Université de Lorraine, CRAN

Abstract: This talk will start with a gentle introduction to the tensor rank and the tensor rank decomposition (also known as the canonical polyadic decomposition, or CPD). Unlike matrices, higher-order tensors have several remarkable properties: uniqueness of the CPD, possible nonexistence of a best low-rank approximation, and a difference between maximal and typical ranks. These properties also hold for the so-called X-rank, which encompasses a number of generalizations of the tensor rank. As an application, a special factorization of multivariate functions into a neural-network-type form (also called decoupling in the literature) will be reviewed in the second part of the talk.
In particular, it will be shown how the CPD can be used for the factorization problem, and how the uniqueness properties of the factorizations can be studied in the framework of X-rank.
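To make the decoupled form concrete, here is a minimal sketch assuming the common parameterization f(x) = W g(V^T x), in which the inputs are mixed linearly, one univariate nonlinearity acts on each internal branch, and the branch outputs are mixed linearly again. All names, sizes, and the choice of branch functions are illustrative.

```python
import numpy as np

# Decoupled multivariate function: f(x) = W g(V^T x), with one
# univariate function g_i acting on each of the r internal branches.
rng = np.random.default_rng(2)
m, n, r = 3, 4, 2                       # outputs, inputs, internal branches
V = rng.standard_normal((n, r))         # input mixing matrix
W = rng.standard_normal((m, r))         # output mixing matrix
g = [np.tanh, np.square]                # one univariate function per branch

def f(x):
    z = V.T @ x                         # r internal scalar variables
    return W @ np.array([gi(zi) for gi, zi in zip(g, z)])

print(f(np.ones(n)).shape)  # (3,)
```

In the decoupling literature, the factors V and W of such a representation are recovered from first-order derivative information of f, whose collection has a CPD structure; this is where the uniqueness results mentioned above come into play.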

Bio: Konstantin Usevich received his Ph.D. degree in 2011 from St. Petersburg State University, Russia. From 2011 to 2017, he was a postdoctoral researcher at the University of Southampton (UK), the Vrije Universiteit Brussel (Belgium), and GIPSA-lab (Grenoble, France). He has been a permanent CNRS researcher at CRAN (Nancy) since 2017. His research interests are in linear and multilinear algebra, optimization, and applications in signal processing and machine learning. He has been on the editorial board of the SIAM Journal on Matrix Analysis and Applications since 2018. He was awarded an ANR JCJC grant in 2019.

Approximate inference based static and dynamic sparse Bayesian learning with Kronecker structured dictionaries

Christo Kurisummoottil Thomas, PhD Student (Supervisor: Prof. Dirk Slock), Eurecom, Sophia Antipolis, France.

Abstract: In many applications, such as massive multi-input multi-output (MIMO) radar, massive MIMO channel estimation, speech processing, and image and video processing, the received signals are tensors. In such applications, tensor decomposition techniques such as the canonical polyadic decomposition (CPD) or the Tucker decomposition (TD) can be beneficial, since they retain the tensorial structure of the received signal rather than processing an unstructured matrix version of the same signal. Specifically, we consider the TD, in which the (non-diagonal) core tensor is taken to be time-varying and sparse. In this talk, we propose approximate Bayesian inference techniques that extend sparse Bayesian learning (SBL) to time-varying states observed through Kronecker-structured measurement matrices. We consider a Kronecker product structure for the dictionary, with unstructured Kronecker factors; some factors could, however, admit a further parsimonious parameterization, e.g. with Vandermonde structure. Augmenting the states with the parameters of the autoregressive process used to model the temporal correlation leads to a nonlinear (at least bilinear) state-space model. In short, this talk focuses on a joint sparse state vector and dictionary learning (DL) problem in which the underlying dictionary matrix is Kronecker structured. The original SBL does not scale with the data dimensions, due to the computational complexity of the matrix inversion operation. Hence, it is of interest to consider approximate Bayesian inference techniques derived from the variational free energy framework. Belief propagation (BP) is a promising method for computing the minimum mean squared error (MMSE) or maximum a posteriori (MAP) estimates.
In this work, we propose BP-based techniques for sparse signal estimation and DL, and numerically evaluate our results against state-of-the-art techniques such as alternating least squares, as well as our previous mean-field DL work (variational Bayesian inference with the posterior factorization assumed at the scalar level for the Kronecker factor matrices). Another point worth noting about our Bayesian approach is that the sparsity measure (number of nonzero components) of the core tensor is unknown, and our novel BP-based DL algorithm performs automatic tensor rank determination.
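One reason Kronecker-structured dictionaries are attractive computationally is the standard identity (A ⊗ B) vec(X) = vec(B X Aᵀ), which lets the dictionary act on a vector without ever forming the large Kronecker product. A minimal NumPy check, with arbitrary sizes:

```python
import numpy as np

# Verify (A ⊗ B) vec(X) = vec(B X A^T), where vec stacks columns
# (Fortran order). In the talk's setting, vec(X) would be the sparse
# core and kron(A, B) the structured dictionary.
rng = np.random.default_rng(3)
A = rng.standard_normal((6, 4))
B = rng.standard_normal((5, 3))
X = rng.standard_normal((3, 4))

y_full = np.kron(A, B) @ X.reshape(-1, order='F')   # builds the 30x12 matrix
y_fast = (B @ X @ A.T).reshape(-1, order='F')       # never builds kron(A, B)

print(np.allclose(y_full, y_fast))  # True
```

The fast path costs two small matrix products instead of one large one, and the memory saving grows multiplicatively with the number of Kronecker factors.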

Divide-And-Conquer Strategies for High-Order Tensor/Matrix Factorization

Abdelhak Boudehane, CentraleSupelec (abdelhak.boudehane@l2s.centralesupelec.fr), Yassine Zniyed, CRAN (yassine.zniyed@univ-lorraine.fr), Laurent Le Brusquet, CentraleSupelec (laurent.lebrusquet@l2s.centralesupelec.fr), Arthur Tenenhaus, CentraleSupelec (arthur.tenenhaus@l2s.centralesupelec.fr), Remy Boyer, University of Lille (remy.boyer@univ-lille.fr)

Abstract: Heterogeneous data sets in different application fields, such as signal processing [1] and neuroscience [2], are structured as either matrices or higher-order tensors. In some cases, these structures share common underlying factors. This property is exploited to improve the estimation of the factor matrices in the so-called coupled matrix-tensor factorization (CMTF). Many methods in the literature address the CMTF problem using alternating algorithms [3] or gradient-based approaches [4]. However, computational complexity remains a challenge, in particular when the data sets are modeled by tensors of high order, which relates to the well-known "curse of dimensionality" [5]. In this scenario, the number of elements in a tensor increases exponentially with the number of dimensions, and so do the computational and memory requirements. This curse limits the order of the tensors that can be handled [6]. In this work, we present a methodological approach following a divide-and-conquer strategy, using the Joint dImensionality Reduction And Factors rEtrieval (JIRAFE) algorithm [7] for the joint factorization of a high-order tensor and a matrix. This approach, based on algebraic properties of the so-called tensor-train model [8], reduces the complex high-order CMTF (HO-CMTF) problem for a high-order tensor coupled with a matrix to a simple CMTF problem plus low-order canonical polyadic decomposition (CPD) problems. We also present simulation results for this approach, compared with a gradient-based method of the same efficiency. Finally, we discuss the results in terms of how the performance evolves with the tensor order in the CMTF framework.
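As background on the tensor-train model [8] that underlies this approach, here is a minimal TT-SVD sketch in NumPy: a 4th-order tensor is split into a chain of 3rd-order cores by sequential SVDs. The dimensions are arbitrary and no rank truncation is applied, so the reconstruction is exact.

```python
import numpy as np

# TT-SVD: peel off one mode at a time with an SVD, keeping the left
# factor as a core and pushing the remainder to the right.
rng = np.random.default_rng(4)
dims = (3, 4, 5, 6)
T = rng.standard_normal(dims)

cores, r_prev = [], 1
M = T
for d in dims[:-1]:
    M = M.reshape(r_prev * d, -1)            # group (previous rank, mode d)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    r = len(s)                               # exact TT-rank (no truncation)
    cores.append(U.reshape(r_prev, d, r))    # 3rd-order core
    M = s[:, None] * Vt                      # remainder for the next modes
    r_prev = r
cores.append(M.reshape(r_prev, dims[-1], 1)) # last core closes the train

# Contract the train back and check the reconstruction.
rec = cores[0]
for G in cores[1:]:
    rec = np.tensordot(rec, G, axes=([-1], [0]))
rec = rec.reshape(dims)
print(np.allclose(rec, T))  # True
```

Storage for the train is a sum of small core sizes rather than the exponential product of the dimensions, which is precisely what makes high-order problems tractable once the TT-ranks are moderate.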


[1] A. Cichocki, D. P. Mandic, A. H. Phan, C. F. Caiafa, G. Zhou, Q. Zhao, and L. De Lathauwer. "Tensor Decompositions for Signal Processing Applications: From Two-way to Multiway Component Analysis". In: CoRR abs/1403.4462 (2014).

[2] F. Cong, Q. H. Lin, L. D. Kuang, X. F. Gong, P. Astikainen, and T. Ristaniemi. "Tensor decomposition of EEG signals: A brief review". In: Journal of Neuroscience Methods 248 (2015), pp. 59-69.

[3] S. Bahargam and E. E. Papalexakis. "A Constrained Coupled Matrix-Tensor Factorization for Learning Time-evolving and Emerging Topics". In: CoRR abs/1807.00122 (2018).

[4] E. Acar, T. G. Kolda, and D. M. Dunlavy. "All-at-once Optimization for Coupled Matrix and Tensor Factorizations". In: arXiv e-prints (2011), arXiv:1105.3422.

[5] A. Cichocki. "Era of Big Data Processing: A New Approach via Tensor Networks and Tensor Decompositions". In: arXiv e-prints (2014), arXiv:1403.2048.

[6] N. Vervliet, O. Debals, L. Sorber, and L. De Lathauwer. "Breaking the Curse of Dimensionality Using Decompositions of Incomplete Tensors: Tensor-based scientific computing in big data analysis". In: IEEE Signal Processing Magazine 31.5 (2014), pp. 71-79.

[7] Y. Zniyed, R. Boyer, A. L. F. de Almeida, and G. Favier. "High-order tensor estimation via trains of coupled third-order CP and Tucker decompositions". In: Linear Algebra and its Applications 588 (2020), pp. 304-337.

[8] I. Oseledets. "Tensor-Train Decomposition". In: SIAM Journal on Scientific Computing 33.5 (2011), pp. 2295-2317.

Date: 2020-05-27

Location: Videoconference

Scientific themes:
A - Methods and models in signal processing


(c) GdR IASIS - CNRS - 2024.