
Announcement

21 January 2024

Deep learning and tensor decomposition for the analysis of multimodal brain imaging, and advisable AI.


Category: Post-doctoral position


 

According to the World Health Organization, stroke is the second leading cause of death and the leading cause of chronic functional disability in adults, with 17 million victims each year, 31% of whom are under the age of 65. More than 6 million people die from stroke worldwide each year. In France, around 150,000 people are hospitalized for a stroke every year, one every 4 minutes, with an average cost of 19 k€. It is estimated that 750,000 people have survived a stroke, two thirds of whom will have disabling sequelae, which represents a financial burden for the state of around 2.8 billion €/year, and in reality 10 billion € over 5 years once the cost of disability is included.

An ischemic stroke is caused by a blood clot (thrombus) that blocks a brain artery, depriving the brain tissue supplied by that artery of oxygen (Fig. 1). There is an urgent need to diagnose the stroke and to determine whether treatment with thrombolytic (clot-dissolving) drugs can “reverse” it. The response time is limited and should not exceed 3 to 4 hours after the onset of symptoms. When managing a stroke, the physician asks three questions to which imaging provides particularly relevant answers: is it really a stroke? Is the stroke ischemic or hemorrhagic in nature? If thrombolysis is considered, are there any radiological contraindications to this treatment? There is consensus that magnetic resonance imaging (MRI) is the gold standard for ruling out non-vascular diagnoses because of its sensitivity and specificity in acute ischemia. Hospital admission therefore prioritizes rapid access to the neurovascular unit (NVU) and to MRI to confirm the diagnosis of cerebral infarction or cerebral hemorrhage: early treatment (< 4.5 h) limits the severity of the sequelae. While MRI makes it possible to search for the cause of the lesion, it raises many methodological difficulties linked to the rapidly evolving pathophysiology of stroke in the very first hours. To date, there is no complete automatic tool for the simultaneous segmentation of these lesions.

Objectives

The solution we are implementing is based on the automatic segmentation of infarcted areas and of ischemic tissues at risk. Applying AI and neural networks to image analysis makes it possible to work on large amounts of data in a more relevant way than conventional image processing methods, but the price to pay is long computation times and limited interpretability. We propose several solutions. First, the latent space of the network layers can be structured in tensor form, which provides very good performance [Pan21]. It has been shown that this allows a compromise between performance and interpretability. However, this preliminary work leaves significant room for progress, and the properties of this type of hybrid model are still poorly understood. Automatic learning on tensor data is classically carried out by linear tensor decomposition, for example CPD/PARAFAC or Tucker [Sid17]. Recently, tensor representations have been integrated into neural networks and have enabled significant advances in deep learning, particularly for images, by reducing the number of parameters to be estimated. To increase the identifiability and interpretability of deep neural models, constraints are added, for example non-negativity, which is classic in matrix and tensor learning frameworks [Kol08]. A minimal illustrative sketch of such a tensor-structured latent space is given at the end of this section.

Another issue is the neurologist's confidence in the segmentation of the region of interest (ROI). Doctors may choose to rely on their own expertise until AI solutions prove to be consistently reliable and widely validated. Doctors can interpret the context of the entire patient case: they consider not only the imaging data but also the patient's medical history, symptoms, and other relevant clinical information. This holistic approach is crucial for accurate diagnosis and treatment planning. This is why advisable AI will help integrate AI into medical practice. Advisable AI recognizes the value of human clinical judgment and expertise: instead of replacing doctors, it assists them by providing suggestions and insights. It also acknowledges the potential limitations of automatic segmentation algorithms and the variability caused by differences in imaging equipment, patient characteristics, and imaging protocols; human experts can adjust their approach based on this understanding.

Finally, longitudinal studies of stroke patients make it possible to observe the natural progression of stroke over time and to gain comprehensive insights into the trajectory of the disease and its treatment, but also to evaluate the long-term impact of different treatments, rehabilitation strategies and medications, as well as long-term complications, including cognitive decline, motor impairments, and psychological issues. Longitudinal studies enable the identification of biomarkers associated with stroke recovery. This knowledge is crucial for developing targeted therapies and precision medicine approaches tailored to individual patient profiles. The objectives are to validate the results on large multi-center patient databases and to integrate the model into clinical application software.
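As an illustration only, here is a minimal PyTorch sketch of the kind of hybrid model alluded to above: an auto-encoder whose latent code is a non-negative Tucker core, combined with one non-negative factor matrix per tensor mode. The class name, shapes, ranks and layer sizes are assumptions made for the example, not the project's actual architecture.

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class TuckerLatentAE(nn.Module):
    """Auto-encoder whose latent code is a non-negative Tucker core; the
    reconstruction is the multilinear product of that core with one
    non-negative factor matrix per mode (modality, height, width)."""

    def __init__(self, in_shape=(4, 64, 64), ranks=(2, 8, 8)):
        super().__init__()
        self.ranks = ranks
        # Encoder maps the multimodal image to the entries of the Tucker core.
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(math.prod(in_shape), 256), nn.ReLU(),
            nn.Linear(256, math.prod(ranks)))
        # One learnable factor matrix per tensor mode.
        self.factors = nn.ParameterList(
            [nn.Parameter(0.1 * torch.randn(n, r)) for n, r in zip(in_shape, ranks)])

    def forward(self, x):                                         # x: (B, C, H, W)
        core = F.softplus(self.encoder(x)).view(-1, *self.ranks)  # non-negative core
        u1, u2, u3 = [F.softplus(f) for f in self.factors]        # non-negative factors
        # Tucker reconstruction: core x_1 U1 x_2 U2 x_3 U3, written as one einsum.
        x_hat = torch.einsum('bijk,ci,hj,wk->bchw', core, u1, u2, u3)
        return x_hat, core

# Usage: reconstruct a batch of 4-channel MR slices; the compact core is the
# interpretable latent representation.
model = TuckerLatentAE()
x = torch.rand(8, 4, 64, 64)
x_hat, core = model(x)
loss = F.mse_loss(x_hat, x)   # training would minimise this reconstruction error

The softplus keeps both the core and the factor matrices non-negative, i.e. the classical interpretability constraint mentioned above [Kol08]; other constraints would simply replace this activation.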

Work program

The difficulty of obtaining a sufficient amount of reliable class-specific training data for a supervised automatic approach requires the study of new strategies. First of all, we will establish a benchmark of the different approaches. Then we will modify the constraints that structure the tensor decomposition in an auto-encoder/Tucker-decomposition type model. We will evaluate and compare the characteristics of several deep neural network architectures. The new architectures will be improved with an advisable AI implementation. The longitudinal study will then be set up and biomarkers will be studied. A solution suggested by very recent studies [Bra19] is to develop new generic saliency functions, or to use data augmentation to build a robust classification, together with other parameters such as texture or shape. Evaluating the new procedure against a reference procedure raises many methodological difficulties. The expected performance indicators are: 1. the repeatability of the (deterministic) segmentation process, in degraded conditions or not; 2. the efficiency of the tool, tested against a ground truth and quantified with the DICE score [3] to measure segmentation performance (see the sketch below); 3. the speed of inference, including the normalization of the MR images.
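For indicator 2, the DICE score can be computed directly from binary masks, as in the minimal NumPy sketch below (the function name and mask shapes are illustrative assumptions):

import numpy as np

def dice_score(pred, truth, eps=1e-7):
    """DICE = 2|P ∩ T| / (|P| + |T|) between two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

# Usage with hypothetical 3-D lesion masks (1 = lesion voxel, 0 = background):
pred = np.random.rand(64, 64, 32) > 0.5
truth = np.random.rand(64, 64, 32) > 0.5
print(f"DICE = {dice_score(pred, truth):.3f}")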

Profile and skills required

The recruited person will hold a PhD in computer science/AI, be able to understand and develop adaptive learning algorithms, and be able to process medical datasets and use them in an operational system to achieve the mission described above.

Programming skills: practical experience with TensorFlow and PyTorch is mandatory. French is not required.

Fluency in English is required. The work will be carried out at the IBISC Laboratory, located on the Évry campus of UPSaclay. IBISC develops multidisciplinary, theoretical and applied research in the field of information sciences and engineering, with a strong orientation towards health applications. The selected candidate will have the opportunity to work in an interdisciplinary team and with a consortium of data scientists and clinicians from the CHSF hospital. The project is multidisciplinary, at the interface of machine learning, computer science and medicine.

Contact

vincent.vigneron@ibisc.univ-evry.fr

 


(c) GdR IASIS - CNRS - 2024.