
Co-design: hybrid sensors and algorithms for innovative systems

We remind you that, in order to guarantee access to the meeting rooms for all registrants, registration for meetings is free of charge but mandatory.

Registration for this meeting is closed.

Registration

38 GdR ISIS members and 24 non-members of the GdR are registered for this meeting.

Announcement

Please note: registration is free of charge but mandatory.

Participation by videoconference is possible for the oral sessions, but not for the poster sessions. A Zoom link will be sent to registered participants by email on 19 November.

Description

The design of image acquisition systems has seen a revival thanks to "co-designed" approaches, in which the imaging, detection or measurement device is closely coupled with the algorithms used to process the data.

In many fields, such as digital photography, microscopy, remote sensing, astronomy or radar imaging, new acquisition devices are being developed to exceed the performance of traditional systems in terms of image quality, size, weight or power consumption, or to add new functionalities to cameras and to image and video acquisition instruments. The design of these new instruments often relies on a multidisciplinary approach in which the instrument parameters and the digital processing are modelled and optimised simultaneously, taking into account the constraints of the targeted application. Joint design leads to new, so-called "non-conventional" or "hybrid" instruments, in which the instrument and the digital processing are inseparable.

This workshop is open to several topics, including among others:

1. new non-conventional imaging modalities, for example:

2. new supervised, unsupervised and self-supervised algorithms associated with these instruments:

3. the co-design of system and processing, in particular:

The goal of this meeting is to foster exchanges between all actors (industrial and academic) from all the disciplines concerned, in particular applied mathematics, digital image processing, optics, and instrumentation. We wish to encourage discussion of new multidisciplinary approaches through:

Invited speakers:

Yoav Schechtman, Associate Professor, Technion - Israel Institute of Technology, Israel

Michael McCann, Scientist, Applied Mathematics and Plasma Physics Group (T-5), Los Alamos National Laboratory, New Mexico, USA

Simon Labouesse, LITC Core Facility, Centre de Biologie Integrative, Université de Toulouse, CNRS, UPS, Toulouse, France

Partners:

This workshop is organised with the support of the GdR MIA, ISIS and Ondes.

Organisers:

Programme

Download the programme in PDF format

9h00 - 9h15 : Welcome and introduction

9h15 - 10h45 : Microscopy 1

9h15 - 9h45 : Invited talk -- Pseudo Random Illumination and Super Resolution Microscopy under Biological Sample Constraint

Simon Labouesse -- LITC Core Facility, Centre de Biologie Integrative, Université de Toulouse, CNRS, UPS, Toulouse

9h45 - 10h05 : Sparse denoising and adaptive estimation enhances the resolution and contrast of fluorescence emission difference microscopy based on array detector

Charles Kervrann -- Inria, CNRS, Inserm, Institut Curie

10h05 - 10h25 : Compressive Raman microspectroscopy

Hilton de Aguilar -- Laboratoire Kastler Brossel

10h25 - 10h45 : Measure of tomographic incompleteness and applications

Matthieu Larendeau -- Creatis Lyon, Université Grenoble Alpes, Thales Moirans

11h - 11h15 : Break

11h15 - 12h35 : Self-supervised learning

11h15 - 11h45 : Tutorial -- Remote Sensing Applications of Self-supervised Video Restoration

Gabriele Facciolo -- Centre Borelli, ENS Paris-Saclay

11h45 - 12h05 : A theoretical framework for frame-to-frame self-supervised multi-frame restoration

Pablo Arias Martínez -- Universitat Pompeu Fabra

12h05 - 12h35 : Tutorial -- Learning to image without ground-truth data

Julian Tachella -- CNRS, ENSL

12h35 - 14h00 : Lunch & Poster Session

14h00 - 15h00 : Optical modelling and co-design

14h00 - 14h20 : SIMCA : a simulator for Coded-Aperture Spectral Snapshot Imaging (CASSI)

Antoine Rouxel -- LAAS, CNRS

14h20 - 14h40 : Nanocarb : an optically matched filter for atmospheric sounding

Yann Ferrec -- ONERA, DOTA, Palaiseau

14h40 - 15h00 : A Physics-Inspired Deep Learning Framework for an Efficient Fourier Ptychographic Microscopy Reconstruction under Low Overlap Conditions

Lyes Bouchama -- TRIBVN/T-Life et Télécom Sudparis

15h00 - 16h05 : Diffusion models

15h00 - 15h45 : Invited talk -- Learning-Based Approaches to Inverse Problems in Imaging

Michael McCann -- Los Alamos National Laboratory, NM

15h45 - 16h05 : Fast Diffusion EM: a diffusion model for blind inverse problems with application to deconvolution

Charles Laroche -- GoPro/MAP5

16h05 - 16h30 : Break

16h30 - 17h35 : Microscopy 2

16h30 - 16h50 : Optical Diffraction Tomography Meets Fluorescence Localization Microscopy.

Emmanuel Soubies -- IRIT, CNRS

16h50 - 17h35 : Invited talk -- Wavefront shaping for microscopy - or - how and why to ruin a perfectly good microscope

Yoav Schechtman -- Technion, Israel Institute of Technology

Posters

Point spread function wavefront recovery: phase retrieval with automatic differentiation

Tobías I. Liaudat -- Department of Computer Science, University College London (UCL), London, UK.

A detailed RGB photometric model for optics/neural network co-design

Marius Dufraisse -- DTIS, ONERA - Université Paris-Saclay, F-91123, Palaiseau, France

Spectro-spatial hyperspectral image reconstruction from interferometric acquisitions

Daniele PICONE -- GIPSA-lab (Grenoble INP)

Reconstruction of Spectra from Interferometric Measurements

Mohamad JOUNI -- Grenoble INP, Université Grenoble Alpes

Impact of training data on LMMSE demosaicing for Colour-Polarization Filter Array

Dumoulin RONAN -- Université de Haute-Alsace, IRIMAS EA 7499

Variational autoencoders for domain shift. Application to air low-cost sensors

Aymane SOUANI -- IBISC EA 4526, équipe SIAM

Annotation-free quality Score for segmentation and tracking in 3D Live Fluorescence Microscopy

Philippe Roudot -- Institut Fresnel, Marseille

Abstracts of the contributions

Invited talks

Pseudo Random Illumination and Super Resolution Microscopy under Biological Sample Constraint

Simon Labouesse, LITC Core Facility, Centre de Biologie Integrative, Université de Toulouse, CNRS, UPS, Toulouse

Abstract: Fluorescence microscopy can be modeled as the convolution of a point spread function (PSF) with the product of the sample and an illumination. Known structured illumination techniques make it possible to estimate the frequency content of the object on the Fourier support Dpsf - Dillu from multiple low-resolution images, where Dpsf and Dillu are the Fourier supports of the PSF and of the illuminations respectively, and the operator - is a Minkowski difference. In these techniques, the illuminations need to be precisely controlled, which may be difficult when the sample acts as an uncontrolled optical element causing aberrations.

In the case of i.i.d. speckle illuminations, the covariance is robust to phase aberrations and can be easily derived. Under mild assumptions, we have shown that the sample frequency content can be identified on the domain Dillu - Dillu from the knowledge of the covariance of the measurements, whose computation only requires the covariance of the illuminations [2]. If the Stokes shift is neglected, the resolution capacity for known and unknown illuminations is then the same. We will present an estimator of the sample with algorithmic complexity O(N log N), where N is the number of pixels. This estimator only requires prior knowledge of the covariance of the illuminations.
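
As an illustration of this measurement model (a sketch in our own notation, not taken from the talk: h is the PSF, \rho the fluorophore density, s_k the k-th speckle and \varepsilon_k the noise), the k-th raw image reads

    y_k = h * (\rho \, s_k) + \varepsilon_k ,

and, ignoring the noise term, its second-order statistics

    \mathrm{Cov}(y)(r, r') = \iint h(r - u)\, h(r' - u')\, \rho(u)\, \rho(u')\, \mathrm{Cov}(s)(u, u')\, \mathrm{d}u\, \mathrm{d}u'

depend on the sample only through \rho and on the random illuminations only through their covariance, which is why knowing the covariance of the illuminations alone is enough to set up the estimation problem of [2].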

In a second part, we will motivate the use of pseudo random illuminations to reduce the number of required measurements, while preserving the robustness to phase aberrations. Finally, we will illustrate the performance of variance-based RIM in live biological conditions [3].

[1] Mudry, Emeric, et al. "Structured illumination microscopy using unknown speckle patterns." Nature Photonics 6.5 (2012): 312-315.

[2] Idier, Jérôme, et al. "On the superresolution capacity of imagers using unknown speckle illuminations." IEEE Transactions on Computational Imaging 4.1 (2017): 87-98.

[3] Mangeat, Thomas, et al. "Super-resolved live-cell imaging using random illumination microscopy." Cell reports methods 1.1 (2021): 100009.

Learning-Based Approaches to Inverse Problems in Imaging

Michael McCann, Los Alamos National Laboratory, NM

Abstract: Digital processing is increasingly integral to scientific imaging systems, impacting their speed, accuracy, and reliability. In some systems, for example the Event Horizon Telescope which captured the image of the black hole M87*, the measurements collected relate to the object being imaged in a complex way, and an inverse problem must be solved to reconstruct an image from those measurements. In recent years, machine learning-based approaches to inverse problems in imaging have shown great promise in reconstructing high-quality images from fewer, noisier measurements than previously possible. In this talk, I will discuss several strands of this research, including how deep generative models can enable empirical Bayesian reconstructions and uncertainty quantification.

Wavefront shaping for microscopy - or - how and why to ruin a perfectly good microscope

Yoav Schechtman, Technion, Israel Institute of Technology

Abstract: The point spread function (PSF) of an imaging system is the system's response to a point source. To encode additional information in microscopy images, we employ PSF engineering, namely a physical modification of the standard PSF of the microscope by additional optical elements that perform wavefront shaping. In this talk I will describe how this method enables unprecedented capabilities in localization microscopy; specific applications include dense fluorescent molecule fitting for 3D super-resolution microscopy, multicolor imaging from grayscale data, volumetric multi-particle tracking/imaging, dynamic surface profiling, and high-throughput in-flow colocalization in live cells. Recent results on additive manufacturing of highly precise optics will be discussed as well.

Tutorials

Remote Sensing Applications of Self-supervised Video Restoration

Gabriele Facciolo, Ngoc Long Nguyen, Valery Dewil, Jeremy Anger, Axel Davy, Thibaud Ehret, Pablo Arias, Jean-Michel Morel

Centre Borelli, ENS Paris-Saclay

Abstract: Nowadays, deep-learning techniques represent the state of the art in image restoration. The reason for this success is that data-driven methods can incorporate realistic image priors leading to improved restoration. However, these methods are data-hungry and they heavily rely on the size and quality of the training dataset. The importance of training with realistic data was highlighted in several works where it was shown that models trained on synthetic data generalized poorly to real images. Obtaining realistic datasets of noisy/clean images or videos for supervised training can be a challenging task in many application scenarios. The recently proposed technique of noise-to-noise showed that it is possible to train a denoising network in a self-supervised way without needing a noisy/clean image dataset. In this talk I will present how we extended this concept to the training of networks for video denoising, demosaicking and multi-image super-resolution by exploiting the temporal redundancy in videos or image bursts. In these works, the network is trained to predict a frame of a noisy sequence using its neighboring frames, eliminating the need for ground truth.
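
As a rough illustration of this training scheme (our own sketch, not the authors' code: the denoiser, the optical flow and the data are placeholders), the network restores frame t and the loss compares it, after optional motion compensation, with the still-noisy neighbouring frame t+1:

    import torch
    import torch.nn.functional as F

    def frame_to_frame_loss(denoiser, noisy_t, noisy_t1, flow_t1_to_t=None):
        """Self-supervised loss: the noisy frame t+1 plays the role of the ground truth.

        noisy_t, noisy_t1 : consecutive noisy frames, shape (B, C, H, W)
        flow_t1_to_t      : optional backward optical flow (B, 2, H, W), x/y pixel displacements
        """
        pred = denoiser(noisy_t)                       # restored estimate of frame t
        if flow_t1_to_t is not None:                   # align the prediction with frame t+1
            b, _, h, w = pred.shape
            ys, xs = torch.meshgrid(torch.arange(h, device=pred.device),
                                    torch.arange(w, device=pred.device), indexing="ij")
            base = torch.stack((xs, ys), dim=-1).float().expand(b, -1, -1, -1)
            grid = base + flow_t1_to_t.permute(0, 2, 3, 1)     # where each t+1 pixel sits in frame t
            gx = 2 * grid[..., 0] / (w - 1) - 1                # normalise to [-1, 1] for grid_sample
            gy = 2 * grid[..., 1] / (h - 1) - 1
            pred = F.grid_sample(pred, torch.stack((gx, gy), dim=-1), align_corners=True)
        # with zero-mean noise that is independent across frames, minimising this loss
        # is, in expectation, close to training against the unknown clean frame t+1
        return F.mse_loss(pred, noisy_t1)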

Learning to image without ground-truth data

Julian Tachella, Mike Davies, Dongdong Chen, Laurent Jacques

CNRS, ENS Lyon

Abstract: Most computational imaging algorithms rely either on hand-crafted prior models (total variation, wavelets) or on supervised learning (deep neural networks) with a ground truth dataset of references. The first approach generally obtains suboptimal reconstructions, whereas the latter is impractical in many scientific and medical imaging applications, where ground-truth data is expensive or even impossible to obtain. In this talk, I will present recent algorithmic and theoretical advances in unsupervised learning for imaging inverse problems that overcome these limitations, by learning from noisy and incomplete measurement data alone. I will show how weak prior knowledge about the reconstructed image distribution, such as invariance to groups of transformations (rotations, translations, etc.) and low-dimensionality, plays a key role in learning from measurement data alone.
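
One concrete instance from this line of work is the equivariant imaging objective, sketched here in our notation (f_\theta is the reconstruction network, A the forward operator, T_g a transformation drawn from the assumed invariance group, \alpha a trade-off weight):

    \mathcal{L}(\theta) = \big\| A\, f_\theta(y) - y \big\|^2
        + \alpha\, \big\| T_g\, f_\theta(y) - f_\theta\big( A\, T_g\, f_\theta(y) \big) \big\|^2 .

The first term enforces consistency with the measurements; the second asks the reconstruction pipeline to commute with the group action, which is what provides information beyond the range space of the measurement operator when no ground truth is available.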

Presentations

Sparse denoising and adaptive estimation enhances the resolution and contrast of fluorescence emission difference microscopy based on array detector

Sylvain Pringent (1,2), Stéphanie Dutertre (3), Aurélien Bidaud-Meynard (4), Giulia Bertolin (4), Grégoire Michaux (4), and Charles Kervrann (1,2)

(1) SERPICO Project-Team, Centre Inria de l'Université de Rennes, F-35042 Rennes Cedex, France

(2) UMR 144, CNRS, Institut Curie, PSL Research University, Sorbonne Universités, F-75005 Paris, France

(3) Univ Rennes, UMS Biosit, MRic, F-35000 Rennes, France

(4) Univ Rennes, CNRS, IGDR (Institute of Genetics and Development of Rennes), UMR 6290, F-35000 Rennes, France

Abstract: Array detectors allow a resolution gain in confocal microscopy by combining images sensed by a set of photomultiplier tubes (or sub-detectors). Several methods have been proposed to reconstruct a high-resolution image by linearly combining sub-detector images, especially the fluorescence emission difference (FED) technique. To improve the resolution and contrast of FED microscopy based on an array detector, we propose to associate sparse denoising with spatially adaptive estimation. We show on both calibration slides and real data that our approach, applied to the full stack of spatially reassigned detector signals, achieves higher reconstruction performance in terms of resolution, image contrast, and noise reduction.

Published in Optics Letters, 2023, 48 (2), pp. 1-11. DOI: 10.1364/OL.474883, https://inria.hal.science/hal-03931575/

Compressive Raman microspectroscopy

Hilton de Aguilar -- Laboratoire Kastler Brossel

Abstract: Raman imaging is recognized as a powerful label-free approach to provide contrast based on chemical selectivity. Nevertheless, Raman-based microspectroscopy still has several drawbacks related to its 3D hyperspectral data format and throughput, which will soon become bottlenecks when pushing this technology to clinical, biomedical and industrial application scenarios.

In this contribution, I will introduce the concept of compressive Raman imaging: by exploiting the sparsity [1] and redundancy [2] in Raman data sets, one can considerably simplify and speed up the spectral image acquisition, nowadays reaching high-speed imaging [3]. I will discuss the different ways of performing compressive Raman, in particular focusing on challenges for bio-imaging, and how we recently tackled them.

[1] Sturm et al, ACS Photon., in print (2019); Scotte et al. Anal. Chem. 90, 7197 (2018).

[2] Soldevila et al, Optica 6, 341 (2019).

[3] Gentner et al, arXiv:2301.07709 (2023).

Measure of tomographic incompleteness and applications

Matthieu Larendeau 1,2,3, Simon Rit 1, Laurent Desbat 2, Sébastien Georges 3, Frédéric Jolivet 3, Guillaume Bernard 3

1 - Creatis Lyon

2 - Université Grenoble Alpes

3 - Thales Moirans

Abstract: The new generation of X-ray sources based on carbon nanotubes enables the design of multi-source computed tomography scanners. Such scanners often use a limited number of stationary sources and projections. In tomography, the theory gives the necessary condition for the stable three dimensional (3D) reconstruction of an object, scanned from a continuous source trajectory and without truncation. We define a realistic tomographic incompleteness metric considering a limited number of sources and the limited size of the detectors from which we compute the 3D map predicting the reconstruction quality of a given scanner architecture. We illustrate this metric with a dedicated phantom in which the reconstructed images match the results predicted by the realistic tomographic incompleteness map. Finally, we exploit the spatially varying incompleteness in a regularized iterative reconstruction algorithm and demonstrate improved image quality results with this adapted regularization.

A theoretical framework for frame-to-frame self-supervised multi-frame restoration

Pablo Arias Martínez 1, Ngoc-Long Nguyen 2, Thibaud Ehret 2, Valéry Dewil 2, Jérémy Anger 2, Axel Davy 2, Gabriele Facciolo 2

1 - Universitat Pompeu Fabra

2 - Centre Borelli, ENS Paris Saclay

Abstract: In recent years a series of methods have been proposed to train multi-frame restoration networks using only degraded data, i.e. without requiring ground truth images. These methods take inspiration from the noise2noise denoising method of Lehtinen et al. They rely heavily on the temporal redundancy of uncorrupted signals and use a neighboring frame from the degraded sequence as target in the loss. They can be considered self-supervised in the sense that the supervision signal comes from the same degraded sequence which is being restored. In some cases, results obtained with this self-supervised training match those obtained with ground truth supervision. In this talk we present a theoretical framework for analyzing this family of "frame-to-frame" methods. The proposed framework can be seen as a generalization of noise2noise that accounts for a linear degradation operator and motion between the output image and the target. We study in detail the case of the mean square error loss, for which a closed-form expression can be found for the optimal estimator and which, under certain conditions, is equivalent to supervised training. We then discuss informally the L1 loss. Finally, we review some of the recent frame-to-frame methods from the literature, showing how they fit within the proposed theoretical framework.
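
In our notation (a sketch of the setting, not the talk's exact formulation), the family of losses analysed here is of the form

    \mathcal{L}(\theta) = \mathbb{E}\, \big\| A\, \mathcal{W}_{t \to t+1}\, f_\theta(y_t) \;-\; y_{t+1} \big\|^2 ,

where f_\theta is the restoration network, A the linear degradation operator, \mathcal{W}_{t \to t+1} the motion compensation between frames, and the degraded neighbouring frame y_{t+1} serves as target; the analysis characterises when, for the MSE loss, the minimiser of this self-supervised objective matches the one obtained with ground-truth supervision.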

SIMCA : a simulator for Coded-Aperture Spectral Snapshot Imaging (CASSI)

Antoine Rouxel, Simon Lacroix, Antoine Monmayrant, Léo Paillet, Hervé Carfantan

LAAS, CNRS

Abstract: The image formation in coded-aperture spectral snapshot imagers (CASSI) is key information for processing the acquired compressed data, and the optical design and calibration of these instruments require great care. SIMCA is a Python-based tool built upon ray-tracing equations of each optical component to produce realistic measurements of various CASSI systems. The underlying model takes into account spatial filtering, spectral dispersion, optical distortions, PSF, sampling effects, and optical misalignments.

It can easily be interfaced with image processing algorithms to assess the performance of CASSI systems on various tasks.
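
For readers unfamiliar with CASSI, the simplified single-disperser forward model below (a textbook sketch in our notation, much coarser than SIMCA's ray-traced model) already conveys the structure of the measurement:

    g(m, n) = \sum_{\lambda} T\big(m,\, n - d(\lambda)\big)\; f\big(m,\, n - d(\lambda),\, \lambda\big) + \varepsilon(m, n),

where f is the spectral data cube, T the coded aperture, d(\lambda) the spectral shear introduced by the disperser and \varepsilon the noise; reconstruction algorithms must invert this highly compressive mapping.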

Nanocarb : an optically matched filter for atmospheric sounding

Yann Ferrec -- ONERA, DOTA, Palaiseau

Abstract: The common way to perform optical sounding of the atmosphere is to measure the spectrum either transmitted (in the visible and shortwave infrared range) or emitted (in the thermal infrared range) by the atmosphere, and to detect and quantify in this spectrum the absorption signature of the targeted gas. Nevertheless, when this signature has a roughly periodic pattern, measuring the whole spectrum is not required, since it is possible to design an interferometer to optically perform the correlation between the incident spectrum and this specific periodic pattern. Thus a few measures are sufficient to get most of the information about the targeted gas.

In the framework of the European project Scarbo, Onera and Université Grenoble Alpes designed the Nanocarb cameras according to this principle, for CO2 and CH4 detection. These cameras use a lenslet array to create a mosaic of images of the scene on the focal plane array. In front of the lenslet array is a stepped interferometric plate, so that each image is associated with a specific interferometer thickness, i.e. a specific spectral period. The use of an interferometric plate allows a very compact system, and adds a parameter, the interferometer finesse (i.e. the spectral width of the peaks), to better match the gas signature.

We will present the design of these Nanocarb cameras and the retrieval scheme used to extract gas information from the raw images.
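
The matched-filtering idea can be illustrated with the idealised two-beam case (a sketch only; the Nanocarb plate behaves as a multi-wave, Fabry-Perot-like interferometer with finite finesse): a sample of the interferogram at optical path difference \delta is

    I(\delta) = \int S(\sigma)\,\big[ 1 + \cos(2\pi \sigma \delta) \big]\, \mathrm{d}\sigma ,

i.e. essentially the correlation of the incident spectrum S(\sigma) with a sinusoid of period 1/\delta in wavenumber; choosing a few plate thicknesses tuned to the quasi-periodic absorption signature of CO2 or CH4 therefore concentrates most of the useful information into a handful of measurements.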

A Physics-Inspired Deep Learning Framework for an Efficient Fourier Ptychographic Microscopy Reconstruction under Low Overlap Conditions

Lyes Bouchama, Bernadette Dorizzi, Yaneck Gottesman, Jacques Klossa

TRIBVN/T-Life et Télécom Sudparis

Abstract: Two-dimensional observation of biological samples at a resolution of hundreds of nanometers or even below is of high interest for many sensitive medical applications. Recent advances have been obtained over the last ten years with computational imaging. Among them, Fourier Ptychographic Microscopy (FPM) is of particular interest because of its important super-resolution factor. In complement to traditional intensity images, phase images are also produced. A large set of raw images (typically 225) is, however, required because of the reconstruction process involved. We address the problem of FPM image reconstruction using only a few raw images (here, 37), as is highly desirable to increase microscope throughput. In this presentation we will develop an algorithmic approach based on a physics-informed optimization deep neural network and statistical reconstruction learning. We demonstrate its efficiency with the help of simulations. The forward microscope image formation model is explicitly introduced in the deep neural network model to optimize its weights, starting from an initialization based on statistical learning. The simulation results that will be presented demonstrate the conceptual benefits of the approach. We show that high-quality images are effectively reconstructed without any appreciable resolution degradation. The learning step is also shown to be mandatory.
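
The forward model embedded in the network can be sketched as follows (our own minimal illustration with synthetic data, not the authors' code, and without the camera-grid downsampling a real model would include): each tilted illumination shifts the object spectrum, which is low-pass filtered by the objective pupil before intensity-only detection.

    import numpy as np

    def fpm_low_res_intensity(obj, pupil, shift):
        """Simplified FPM forward model for one illumination angle.

        obj   : complex object transmittance (N x N)
        pupil : pupil function of the objective in Fourier space (N x N)
        shift : (rows, cols) spectrum shift, in pixels, induced by the tilted illumination
        """
        spectrum = np.fft.fftshift(np.fft.fft2(obj))
        shifted = np.roll(spectrum, shift, axis=(0, 1))   # tilted plane wave = shifted spectrum
        field = np.fft.ifft2(np.fft.ifftshift(shifted * pupil))
        return np.abs(field) ** 2                         # the camera records intensity only

    # toy usage: random phase object, circular pupil, one oblique illumination
    N = 128
    obj = np.exp(1j * np.random.rand(N, N))
    yy, xx = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
    pupil = (xx ** 2 + yy ** 2 < (0.15 * N) ** 2).astype(float)
    img = fpm_low_res_intensity(obj, pupil, shift=(10, -5))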

Fast Diffusion EM: a diffusion model for blind inverse problems with application to deconvolution

Charles Laroche 1,2, Andrés Almansa 1, Eva Coupeté 2

1 - MAP5 - CNRS & Université Paris Cité

2 - GoPro

Abstract: Using diffusion models to solve inverse problems is a growing field of research. Current methods assume the degradation to be known and provide impressive results in terms of restoration quality and diversity. In this work, we leverage the efficiency of those models to jointly estimate the restored image and unknown parameters of the degradation model, such as the blur kernel. In particular, we design an algorithm based on the well-known Expectation-Maximization (EM) estimation method and diffusion models. Our method alternates between approximating the expected log-likelihood of the inverse problem using samples drawn from a diffusion model and a maximization step to estimate unknown model parameters. For the maximization step, we also introduce a novel blur kernel regularization based on a Plug & Play denoiser. Diffusion models are slow to run, so we also provide a fast version of our algorithm. Extensive experiments on blind image deblurring demonstrate the effectiveness of our method compared to other state-of-the-art approaches.
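
Schematically, and in our notation only (k is the unknown blur kernel, y the observation, x the latent sharp image), the alternation reads

    E-step:  Q(k \mid \hat{k}_n) = \mathbb{E}_{x \sim p(x \mid y,\, \hat{k}_n)}\big[ \log p(y \mid x, k) \big],
    M-step:  \hat{k}_{n+1} = \arg\max_k \; Q(k \mid \hat{k}_n) + \log p(k),

where the E-step expectation is approximated with samples drawn from the diffusion model conditioned on y and the current kernel estimate, and \log p(k) stands for the Plug & Play kernel regulariser mentioned above.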

Optical Diffraction Tomography Meets Fluorescence Localization Microscopy.

Emmanuel Soubies (IRIT, CNRS), Thanh-an Pham (MIT), Ferréol Soulez (CRAL), Michael Unser (EPFL)

Abstract: We show that structural information can be extracted from single molecule localization microscopy (SMLM) data. More precisely, we reinterpret SMLM data as the measurements of a phaseless optical diffraction tomography system for which the illumination sources are the fluorophores within the sample. Building upon this model, we propose a joint optimization framework to estimate both the refractive index map and the positions of the fluorescent molecules from the SMLM frames alone.

Posters

Point spread function wavefront recovery: phase retrieval with automatic differentiation

Tobías I. Liaudat, Ezequiel Centofanti, Jean-Luc Starck, Martin Kilbinger

Department of Computer Science, University College London (UCL), London, UK.

Abstract: In astronomy, upcoming space telescopes with wide-field optical instruments have a spatially varying point spread function (PSF). Specific scientific goals require a high-fidelity estimation of the PSF at target positions where no direct measurement of the PSF is available. Even though observations of the PSF are available at some positions of the field of view (FOV), they are undersampled, noisy, and integrated in wavelength over the instrument's passband. PSF modelling therefore represents a challenging ill-posed problem, as it requires building a model from these observations that can infer a super-resolved PSF at any wavelength and position in the FOV.

Point spread function modelling for the Euclid mission remains a major challenge due to the specific characteristics of the mission and the extremely tight requirements on the PSF model. Recently, a new data-driven PSF model, WaveDiff, was proposed, which showed a significant gain in performance with respect to previous data-driven PSF models. The WaveDiff framework shifts the PSF modelling space from the pixels to the wavefront thanks to a differentiable optical forward model.

WaveDiff achieves a remarkably low PSF pixel reconstruction error, but the wavefront it estimates is far from the ground-truth wavefront. This work builds on the WaveDiff framework to tackle the wavefront recovery problem of the PSF field. Our objective is to estimate the wavefront representation of the PSF field by exploiting only the in-focus degraded stars observed in the field of view. We assume that our wavefront model can reproduce the ground-truth PSF wavefront, guaranteeing a global minimum for our inverse problem. We then exploit the diversity of the PSF spatial variations, the WaveDiff optimisation procedure and a novel wavefront projection method to tackle the wavefront recovery problem.
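
The basic mechanism of wavefront recovery by automatic differentiation can be sketched as below (a toy illustration under our own assumptions: monochromatic light, a single on-axis PSF, the wavefront parameterised directly as a pixel map rather than with the WaveDiff machinery). Note that, as discussed above, a low pixel error does not by itself guarantee that the recovered wavefront matches the true one.

    import torch

    N = 64
    yy, xx = torch.meshgrid(torch.linspace(-1, 1, N), torch.linspace(-1, 1, N), indexing="ij")
    aperture = ((xx ** 2 + yy ** 2) <= 1.0).float()              # circular pupil amplitude

    def psf_from_wavefront(w):
        """Differentiable pupil-to-PSF model: PSF = |FFT(pupil * exp(i w))|^2, normalised."""
        pupil = aperture * torch.exp(1j * w)
        psf = torch.abs(torch.fft.fftshift(torch.fft.fft2(pupil))) ** 2
        return psf / psf.sum()

    # synthetic "observation" generated from a hidden ground-truth wavefront
    w_true = 2.0 * aperture * xx * yy
    observed = psf_from_wavefront(w_true).detach()

    # recover the wavefront by gradient descent through the optical model
    w = torch.zeros(N, N, requires_grad=True)
    opt = torch.optim.Adam([w], lr=0.05)
    for _ in range(500):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(psf_from_wavefront(w), observed)
        loss.backward()
        opt.step()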

A detailed RGB photometric model for optics/neural network co-design

Marius Dufraisse, Pauline Trouvé-Peloux, Frédéric Champagnat, Jean-Baptiste Volatier

DTIS, ONERA - Université Paris-Saclay, F-91123, Palaiseau, France

Abstract: We will present our methods for the co-design of a lens and a neural network. We have developed optical simulation methods that are differentiable and can thus be used to jointly optimise optical and neural network parameters. Our simulation models the effects of all optical parameters on both the point spread function and the sensor noise.
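
The end-to-end principle (a differentiable optical simulation feeding a restoration network, both updated by the same gradient) can be illustrated with a deliberately minimal sketch under our own assumptions: a single learnable blur kernel stands in for the full photometric model, and random tensors replace real training images.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # "optical" parameters: a learnable kernel standing in for the lens PSF
    psf_logits = nn.Parameter(torch.zeros(81))

    # digital processing: a small restoration network
    net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 1, 3, padding=1))

    opt = torch.optim.Adam([psf_logits] + list(net.parameters()), lr=1e-3)

    for step in range(100):
        scene = torch.rand(8, 1, 64, 64)                            # stand-in for training images
        kernel = torch.softmax(psf_logits, 0).reshape(1, 1, 9, 9)   # normalised, non-negative PSF
        sensed = F.conv2d(scene, kernel, padding=4)                 # differentiable image formation
        sensed = sensed + 0.01 * torch.randn_like(sensed)           # sensor noise
        restored = net(sensed)
        loss = F.mse_loss(restored, scene)                          # end-to-end restoration criterion
        opt.zero_grad(); loss.backward(); opt.step()                # one step updates optics AND network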

Spectro-spatial hyperspectral image reconstruction from interferometric acquisitions

Daniele PICONE, Mohamad Jouni, Mauro Dalla Mura

GIPSA-lab (Grenoble INP)

Abstract: In the last decade, novel hyperspectral cameras have been developed with particularly desirable characteristics of compactness and short acquisition time, while retaining the potential to reach spectral/spatial resolution competitive with traditional cameras. However, a computational effort is required to recover an interpretable data cube.

This presentation focuses on imaging spectrometers based on interferometry, for which the raw acquisition is an image whose spectral component is expressed as an interferogram. Previous works have focused on inverting such acquisitions on a pixel-by-pixel basis within a Bayesian framework, leaving aside critical information on the spatial structure of the image data cube.

This problem is addressed by integrating a spatial regularization into the image reconstruction, showing that the combination of spectral and spatial regularizers leads to enhanced performance with respect to the pixelwise case. The results are compared with Plug-and-Play techniques, whose strategy of injecting denoisers from the literature can be implemented seamlessly within the proposed physics-based formulation of the optimization problem.

Reconstruction of Spectra from Interferometric Measurements

Mohamad JOUNI, Daniele PICONE, Mauro DALLA MURA

Grenoble INP, Université Grenoble Alpes

Abstract: Spectral information of the scene can be reconstructed with high resolution by processing observations acquired by interferometric devices. For devices based on the interference between multiple waves (e.g., Fabry-Perot etalons), processing the spectrum as an inverse Fourier transform (as for Michelson-like interferometers) is often not sufficiently accurate, as the transmittance response of the system is better described by the Airy distribution, and because of the ill-posedness of the problem. In this work, we approach the spectrum reconstruction with a model-based approach within a Bayesian framework and represent the system through an infinity-wave model, as we have good knowledge of the system. Specifically, we propose to use the Loris-Verhoeven algorithm with proximal solvers, which provides more freedom in exploiting and representing the a priori knowledge of the system and the data, such as induced sparsity in a transformed domain. Our proposal is more flexible and robust to noise than conventional reconstruction algorithms, as demonstrated by experiments on simulated and real measurements.
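
In generic form (a sketch in our notation, not the poster's exact formulation), the reconstruction is posed as

    \hat{x} = \arg\min_x \; \tfrac{1}{2}\, \| A x - y \|_2^2 \;+\; \lambda\, \| W x \|_1 ,

where y is the measured interferogram, A the transmittance matrix built from the Airy (infinity-wave) response of the interferometric cells, and W a transform in which the spectrum is assumed sparse; the Loris-Verhoeven iterations are a primal-dual scheme suited to such composite problems, since they only require the gradient of the quadratic term and the proximal operator of the \ell_1 penalty, never the inversion of W.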

Impact of training data on LMMSE demosaicing for Colour-Polarization Filter Array

Dumoulin RONAN, Pierre-Jean Lapray, Jean-Baptiste Thomas, Ivar Farup

Université de Haute-Alsace, IRIMAS EA 7499

Abstract: The SONY IMX250 MYR is the most common commercially available colour-polarization filter array sensor. It is a 12-channel sensor which combines three colour filters arranged in a Quad Bayer spatial arrangement with four polarization analysis angles equally distributed between 0° and 180° (0°, 45°, 90°, 135°). For each pixel position, only one of the twelve intensity measurements is made, so the other eleven channel values are missing.

To obtain a full-resolution image, a demosaicing algorithm is applied to the images. Linear minimum mean square error (LMMSE) demosaicing is a learning-based algorithm that can be used to demosaic images from a colour-polarization filter array sensor. To understand the role of the training data in its performance, I study model selection using cross-validation techniques. The results show that the training converges quickly, and that there is no significant difference when training the model with more than 12 images of approximately 1.5 megapixels. I also found that the selected trained model performs better than the EARI demosaicing algorithm in terms of peak signal-to-noise ratio.
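
For context, LMMSE demosaicing amounts to learning, from training pairs, a linear map from an observed mosaicked neighbourhood to the missing channel values. The toy sketch below (our own illustration with synthetic data, not the study's code or the actual IMX250 layout) computes the estimator x_hat = x_mean + C_xy C_yy^{-1} (y - y_mean) via least squares.

    import numpy as np

    rng = np.random.default_rng(0)

    # synthetic training data: y = mosaicked 5x5 neighbourhood, x = the 12 channel values
    n_train, dim_y, dim_x = 5000, 5 * 5, 12
    Y = rng.normal(size=(n_train, dim_y))
    X = Y @ rng.normal(size=(dim_y, dim_x)) + 0.05 * rng.normal(size=(n_train, dim_x))

    # centre the data; least squares on centred data gives W = C_yy^{-1} C_yx
    y_mean, x_mean = Y.mean(axis=0), X.mean(axis=0)
    W, *_ = np.linalg.lstsq(Y - y_mean, X - x_mean, rcond=None)

    def lmmse_estimate(y):
        """Estimate the 12 channel values from one mosaicked neighbourhood y."""
        return x_mean + (y - y_mean) @ W

    x_hat = lmmse_estimate(Y[0])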

Variational autoencoders for domain shift. Application to air low-cost sensors

Aymane SOUANI, Vincent Vigneron, Hichem Maaref

IBISC EA 4526, équipe SIAM

Abstract: Low-cost sensors (LCSs) for air pollutants have gained popularity due to their affordability and potential for widespread deployment. However, their reliability and accuracy can be a subject of concern, which can impact the quality of the data collected. To address this challenge, incorporating real data into the calibration process is necessary. Our research delves into the importance of real data in the calibration of air pollutant LCSs and highlights the significance of variational autoencoders (VAEs) for extracting features that are separable across domains, enabling the practical use of these sensors.

Annotation-free quality Score for segmentation and tracking in 3D Live Fluorescence Microscopy

Jules Vanaret, Victoria Dupuis, Pierre-Francois Lenne, Frederic Richard, Sham Tlili, Gaudenz Danuser, Philippe Roudot

Institut Fresnel Marseille

Abstract: Particle tracking is a ubiquitous task in the study of dynamic molecular and cellular processes through microscopy. Light-sheet microscopy has opened a path to acquiring complete cell volumes for investigation in 3-dimensions (3D). However, quantitative analysis and microscope calibration have remained difficult due to fundamental challenges in the verification of large and dense sets of 3D particle trajectories. As such, new software tools are required that allow microscopists to automatically track diverse particle movements in 3D, inspect the resulting trajectories in an informative manner, and receive unbiased assessments of the quality of trajectories.

We introduced u-track3D, a software package that extends the versatile u-track framework established in 2D to address the specific challenges of 3D particle tracking. During this talk, we will particularly focus on an estimator of trackability that learns the dynamic parameters of each cellular object to detect inconsistencies in local displacements induced by segmentation errors, density or unexpected events. We demonstrate the high precision of our approach on simulations and experimental datasets, as well as in an application to the comparison of deep learning methods for cell segmentation.

Date: 2023-11-20

Venue: Institut Henri Poincaré, 11 rue Pierre et Marie Curie, Paris, Amphithéâtre Hermite


Scientific themes:
B - Image and Vision


