
Forensic analysis of multimedia data

Please remember that, in order to guarantee that all registrants can access the meeting rooms, registration for meetings is free but mandatory.

Registration for this meeting is closed.

Registrations

26 members of GdR ISIS and 13 non-members are registered for this meeting.
Room capacity: 70 people.

Announcement

With the rise and wide availability of professional editing tools, as well as the proliferation of methods based on deep learning, falsifying multimedia data is now relatively easy and accessible.

Forensic analysis aims to verify the authenticity and integrity of multimedia data, that is, to ensure that they have not been modified. This research field, although important for society, is technically challenging.

Falsifications continue to proliferate, partly because of the limitations of falsification-detection technologies, the variability of media (images, videos, audio), and highly specific processing pipelines (for example, different image compression levels or different compression methods). It has become trivial for forgers to produce perfect falsifications, as shown by recent advances in the generation of deepfake images and videos.

The goal of this one-day meeting is to bring together actors in multimedia data forensics, from both academia and industry, to discuss advances and challenges in this field.

Topics of interest include, but are not limited to, the following:

This meeting is co-labeled by GdR ISIS and GdR Sécurité.

Organizers

P. PUTEAUX (pauline.puteaux@cnrs.fr), V. ITIER (vincent.itier@imt-nord-europe.fr), I. TKACHENKO (iuliia.tkachenko@liris.cnrs.fr).

Program

09:20: Opening

09:30 - 09:55: Nicolas Larue - "One-class generalized deepfake detection with bounded contrastive learning on curved spaces", ETIS - CY Cergy Paris University, ENSEA, CNRS, France & Faculty of Electrical Engineering, University of Ljubljana, Slovenia

09:55 - 10:20: Mohamed Mehdi Atamna - "Cross-manipulation deepfake detection with temporal and high-frequency spatial features", LIRIS, Université Lyon 2

10:20 - 10:45: Yanhao Li - "Detection of video double compression and its preliminary application towards deepfake detection", Centre Borelli, ENS Paris Saclay, Université Paris Saclay

10:45 - 11:00: Coffee break

11:00 - 12:00: Invited speaker Kai Wang - "Digital Image Forensics: Handcrafted, Statistical-Model-Based and Deep-Learning-Based Approaches", GIPSA-lab, CNRS

14:00 - 15:00: Invited speaker Luisa Verdoliva - "Detecting Deepfakes", Multimedia Forensics Lab, University Federico II of Naples

15:00 - 15:15: Coffee break

15:15 - 15:40: Matthieu Delmas - "A study of deepfake detection in StyleGAN's latent space", CentraleSupélec, IETR, IRT b<>com

15:40 - 16:05: Rony Abecidan - "Unsupervised Domain Adaptation for Practical Digital Image Forensics", CRIStAL, Université de Lille

16:05 - 16:30: Théo Taburet - "Forgery detection and localization in documents stored as images", Laboratoire Informatique, Image et Interaction

16:30 - 16:55: Pauline Puteaux - "FuzzyDoc project: Ensuring printed document integrity using crossing numbers", CRIStAL, Université de Lille

16:55: Closing

Contribution abstracts

Invited speaker Luisa Verdoliva - "Detecting Deepfakes", Multimedia Forensics Lab, University Federico II of Naples

In recent years there have been astonishing advances in AI-based synthetic media generation. Thanks to deep learning-based approaches it is now possible to generate data with a high level of realism. While this opens up new opportunities for the entertainment industry, it simultaneously undermines the reliability of multimedia content and supports the spread of false or manipulated information, such as the well-known Deepfakes. In this context, it is important to develop automated tools to detect manipulated media in a reliable and timely manner. This talk will describe the most effective deep learning-based approaches for detecting deepfakes, with a focus on those that enable domain generalization. The results will be presented on challenging datasets with reference to realistic scenarios, such as the dissemination of manipulated images and videos on social networks.

Invited speaker Kai Wang - "Digital Image Forensics: Handcrafted, Statistical-Model-Based and Deep-Learning-Based Approaches", GIPSA-lab, CNRS

With the increasing popularity of image acquisition devices, easy-to-use image editing software, and online social networking services, it is now common to encounter image forgeries in both professional and personal contexts. Accordingly, more and more attention has been paid to research on digital image forensics as a means of verifying image authenticity. This presentation can be considered a summary of the research activities on image forensics conducted at GIPSA-lab over the last ten years, through which we show the advancement of this research field and the major shift in its methodology. We begin by presenting some traditional handcrafted methods based on specifically designed algorithms and/or features. We then present our methods based on statistical image models; this kind of approach has received relatively little attention in the research community, yet it can sometimes achieve decent forensic performance while remaining simple and flexible. Finally, we present several forensic methods leveraging deep learning, focusing on the practical utility of the proposed method or adopting a particular signal processing/analysis perspective. Different image forensic problems are considered across these three kinds of approaches.

Nicolas Larue - "One-class generalized deepfake detection with bounded contrastive learning on curved spaces", ETIS - CY Cergy Paris University, ENSEA, CNRS, France & Faculty of Electrical Engineering, University of Ljubljana, Slovenia

Modern deepfake detectors have achieved encouraging results when training and test images are drawn from the same data collection. However, when these detectors are applied to images produced with unknown deepfake-generation techniques, considerable performance degradation is commonly observed. In this talk, we present two novel deepfake detectors that generalize better to unseen deepfakes by formalizing deepfake detection as (one-class) out-of-distribution detection with self-supervised learning. The first detector, SeeABLE (Soft Discrepancies and Bounded Contrastive Learning for Exposing Deepfakes), generates local image perturbations (referred to as soft discrepancies) and then pushes the perturbed faces towards predefined prototypes using a novel regression-based bounded contrastive loss. The second and more efficient detector, CTru (short for "ConTrastive learning in opposite-cuRvatUre space", pronounced "see true"), learns richer representations in both hyperspherical and hyperbolic spaces of different curvatures to better model the face's intrinsic geometry.

Both detectors achieve new state-of-the-art results on various benchmarks.
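The prototype-pushing idea can be illustrated with a toy sketch (illustrative only, not the actual SeeABLE loss): an embedding of a perturbed face is pulled toward its assigned prototype by penalizing cosine dissimilarity, which keeps the penalty bounded in [0, 2].

```python
import math

def cosine(u, v):
    # cosine similarity between two vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def prototype_loss(embedding, prototypes, target_idx):
    # bounded loss: 0 when the embedding is perfectly aligned with its
    # target prototype, 2 when diametrically opposed
    return 1.0 - cosine(embedding, prototypes[target_idx])
```

Minimizing such a loss over many perturbed samples pushes all embeddings of a given perturbation type toward the same direction, which is the intuition behind prototype-based contrastive training.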

Mohamed Mehdi Atamna - "Cross-manipulation deepfake detection with temporal and high-frequency spatial features", LIRIS, Université Lyon 2

Although current deep learning-based facial deepfake detectors achieve excellent results when tested on deepfakes generated using known methods, their performance drops significantly when exposed to fake images or videos made using methods unseen during training. In this presentation, we show how exploiting image noise residuals and temporal features can improve deepfake detection performance in a cross-manipulation scenario.

Yanhao Li - "Detection of video double compression and its preliminary application towards deepfake detection", Centre Borelli, ENS Paris Saclay, Université Paris Saclay

Video double compression detection can provide important clues to recover a video's editing and sharing history. Indeed, to manipulate a video, such as a deepfake video, one must first decompress it into frames, then perform the desired edits, and finally recompress the retouched frames into a video. In this talk, I will present a method for detecting video double compression in the H.264 codec, which detects the temporal periodicity of frame residuals caused by the fixed GOP structure of the first compression, and validates detections using an a contrario framework to control the number of false alarms (NFA). The application of the double compression detector to deepfake video detection will be discussed with some preliminary results.
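The a contrario principle can be sketched as follows (a toy illustration, not the speaker's actual detector): treat strict local maxima of the per-frame residual energy as "peaks", and for a candidate GOP period bound the expected number of false alarms with a binomial tail; a configuration with NFA below 1 is considered meaningful.

```python
import math

def binom_tail(n, k, p):
    # P(X >= k) for X ~ Binomial(n, p)
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def nfa(residuals, period, offset, n_tests, p=1/3):
    """NFA for residual peaks aligned on `period`, starting at `offset`.

    Under the null hypothesis (no periodicity), a frame is a strict local
    maximum of the residual sequence with probability roughly p = 1/3.
    """
    peaks = {i for i in range(1, len(residuals) - 1)
             if residuals[i] > residuals[i - 1] and residuals[i] > residuals[i + 1]}
    positions = list(range(offset, len(residuals), period))
    k = sum(1 for i in positions if i in peaks)
    return n_tests * binom_tail(len(positions), k, p)
```

With residual spikes every 5 frames, the aligned configuration gets a very small NFA; a flat residual sequence keeps the NFA above 1 and is rejected.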

Matthieu Delmas - "A study of deepfake detection in StyleGAN's latent space", CentraleSupélec, IETR, IRT b<>com

For several years, AI-based video manipulation methods (deepfakes) have been multiplying. Machine learning models for detection exist, but they only perform optimally in a given context (falsification method, number and context of examples, etc.).

To mitigate these problems, we propose projecting suspect images into a lower-dimensional space (StyleGAN's latent space) to facilitate the design of detection models.

The experiments carried out so far confirm that this approach is relevant and achieves accuracies comparable to the state of the art while requiring fewer training examples.
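To see why a compact latent space helps, consider this toy sketch with made-up 2-D "latent codes" (hypothetical data, not the talk's actual pipeline): in a low-dimensional space, even a nearest-centroid rule trained on a handful of examples can separate real from fake codes.

```python
import math

def centroid(points):
    # component-wise mean of equally sized vectors
    n = len(points)
    return [sum(p[d] for p in points) / n for d in range(len(points[0]))]

def nearest_centroid_classifier(real_codes, fake_codes):
    # returns a function labeling a latent code as "real" or "fake"
    c_real, c_fake = centroid(real_codes), centroid(fake_codes)
    def classify(code):
        return "real" if math.dist(code, c_real) <= math.dist(code, c_fake) else "fake"
    return classify
```

The same rule applied to raw pixels would need far more examples; projecting into a well-structured latent space does most of the work.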

Rony Abecidan - "Unsupervised Domain Adaptation for Practical Digital Image Forensics", CRIStAL, Université de Lille

In the growing context of fake news, digital images are easily generated or tampered with in order to change their meaning. While multimedia forensics schemes can be very effective at detecting deepfake images and image tampering, they are very often extremely sensitive to the exact nature of the analyzed signal. For image forensics schemes relying on machine learning, this means that if the image database used for training (the source) does not undergo exactly the same development pipeline as the scrutinized test images (the target), the performance of the detector can be jeopardized. This presentation exhibits domain adaptation strategies to mitigate the heterogeneity issue of deep-learning forensics schemes, updating the network trained on the source set via backpropagation. Our study shows that domain adaptation across different forensic sources is possible and efficient, using distances between distributions derived from reproducing kernels or optimal transport.
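A standard reproducing-kernel distance of this kind is the (squared) Maximum Mean Discrepancy. Here is a minimal sketch with a Gaussian RBF kernel, using plain Python lists as stand-ins for network features (illustrative only):

```python
import math

def rbf(x, y, gamma=1.0):
    # Gaussian RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def mmd2(X, Y, gamma=1.0):
    # biased estimator of squared MMD between samples X (source) and Y (target)
    def mean_k(A, B):
        return sum(rbf(a, b, gamma) for a in A for b in B) / (len(A) * len(B))
    return mean_k(X, X) + mean_k(Y, Y) - 2.0 * mean_k(X, Y)
```

In a domain-adaptation training loop, a term like mmd2(source_feats, target_feats) would be added to the loss as a regularizer, so the network learns to map both domains to similar feature distributions.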

Théo Taburet - "Forgery detection and localization in documents stored as images", Laboratoire Informatique, Image et Interaction

When it comes to scanned PDF documents, the document often consists of a JPEG or PNG image encapsulated in a PDF. Forgery detection on this kind of data can be challenging, as documents are rather poor images in terms of semantic content and noise. In this presentation, we propose a deep learning approach for detecting document forgery using a convolutional neural network (CNN) under a data-scarcity scenario. The network uses both Spatial Rich Model (SRM) features and JPEG volumetric encoding to learn forgery artifacts in both the spatial and Discrete Cosine Transform (DCT) domains. The input spatial image is pre-processed with high-pass filters to extract the noise residual component, which is observed to help the CNN converge faster. The proposed CNN architecture is trained on a large synthetic proprietary dataset containing both authentic and forged images. The trained CNN can accurately detect image forgery, even in the presence of a large number of easy background examples.
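The high-pass pre-processing step can be illustrated with the simplest SRM-style filter, a horizontal second-order derivative [-1, 2, -1] (a sketch of the idea, not the exact filter bank used in the talk): flat and linearly varying content maps to zero, so only noise-like residuals survive.

```python
def highpass_residual(img):
    # apply a horizontal [-1, 2, -1] filter to a 2-D list of pixel values:
    # constant and linear-ramp regions produce 0, edges and noise do not
    h, w = len(img), len(img[0])
    return [[2 * img[i][j] - img[i][j - 1] - img[i][j + 1]
             for j in range(1, w - 1)]
            for i in range(h)]
```

Feeding such residuals to the CNN removes most of the (irrelevant) document content up front, which is why convergence is observed to be faster.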

Pauline Puteaux - "FuzzyDoc project: Ensuring printed document integrity using crossing numbers", CRIStAL, Université de Lille

Nowadays, with photo-editing software being mainstream, document integrity verification has become crucial. As we saw during the pandemic, most administrative documents are printed and then scanned before being transmitted, which makes them noisy: a printed-and-scanned document undergoes geometric transformations, the addition of black spots, and a decrease in color intensity. The relevant features of an original document, which will be matched against a query document, are stored as a template. We propose a two-step method that compares a template with a query document to ensure that the query document has not been tampered with. Our method first reverts the geometric transformations the document underwent, and then extracts the crossing numbers in the resulting image. A Euclidean-distance-based matching is applied to the two sets of crossing numbers, and abnormally distant point groups are flagged as potentially modified. A second step then analyzes the statistical properties of these distance values to confirm whether the document has been altered. Results obtained on a database of administrative documents and tampered versions of them, all of which underwent a print-and-scan process, show the validity of our approach.
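The distance-based matching step might look like this minimal sketch (a hypothetical helper, assuming crossing numbers have already been extracted as 2-D coordinates): each template point is matched to its nearest query point, and points whose nearest match lies abnormally far away are flagged.

```python
import math

def flag_suspect_points(template_pts, query_pts, threshold):
    # flag template points whose nearest query point lies beyond `threshold`
    return [p for p in template_pts
            if min(math.dist(p, q) for q in query_pts) > threshold]
```

The flagged point groups would then feed the statistical second step of the method, which decides whether the distances are consistent with mere print-and-scan noise or with actual tampering.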

Date: 2023-05-16

Venue: Université Lumière Lyon 2 - Campus Berges du Rhône, Bât. CLIO, CLI.036, 4 Rue de l'Université, 69007 Lyon. Only registered participants will be able to access the room.


Scientific themes:
D - Telecommunications: compression, protection, transmission


Access the report of this meeting.

(c) GdR IASIS - CNRS - 2024.