Filtering of sensitive content and security of machine learning methods

Please note that, in order to guarantee access to the meeting rooms for all registrants, registration for meetings is free of charge but mandatory.

Registration for this meeting is closed.

Registrations

9 members of the GdR ISIS and 15 non-members of the GdR are registered for this meeting.
Room capacity: 58 people.

Announcement

When machine learning methods are used to filter or detect sensitive information (hidden messages, content modifications, authentication of a person or an object, spam detection, ...), they can be undermined by an adversary seeking, for example, to degrade their performance. The training database can be corrupted so as to render the learning step ineffective, and knowledge of the classifier can make it easy to generate convincing false positives. Conversely, it is also possible to take such attacks into account in order to secure the learning system.
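
As a minimal illustration of the poisoning scenario mentioned above, the following Python sketch flips a fraction of the training labels of a simple detector and measures the resulting loss in test accuracy. The dataset, the classifier, and the flip fractions are placeholder choices made for this example, not material from the meeting.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Synthetic two-class problem standing in for a sensitive-content detector.
    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    def accuracy_after_flipping(flip_fraction):
        """Train on a copy of the data where a fraction of the labels is flipped."""
        y_poisoned = y_train.copy()
        n_flip = int(flip_fraction * len(y_poisoned))
        idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
        y_poisoned[idx] = 1 - y_poisoned[idx]  # flip 0 <-> 1
        clf = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
        return clf.score(X_test, y_test)

    for frac in (0.0, 0.1, 0.3):
        print(f"{frac:.0%} of training labels flipped -> test accuracy "
              f"{accuracy_after_flipping(frac):.3f}")

Running the loop typically shows test accuracy degrading as the flip fraction grows, which is the effect the announcement describes as rendering the learning step ineffective.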

The goal of this one-day meeting is first to give an overview of the problem of adversarial learning, and then to present applications in this context. The day will open with a talk by Dr. Battista Biggio, a researcher at the University of Cagliari and an expert in the field.

Members of the GdR ISIS community (as well as of the GdR Madics and the pre-GdR Sécurité) will then be invited to present work implicitly or explicitly related to attacks on learning, classification, or recommendation systems.

Program

9:45-10:00: Patrick Bas, Univ. Lille, CNRS, Centrale Lille, Welcome and Introduction

10:00-11:00: Battista Biggio, University of Cagliari, Machine Learning under Attack: Vulnerability Exploitation and Security Measures (see abstract and bio below)

11:00-11:30: Patrick Bas, Univ. Lille, CNRS, Centrale Lille, From Adversarial (Deep) Learning to Cryptography and Steganography: a review of two recent papers

11:30-12:00: Charlotte Pelletier, CNES, A study of class label noise effects on supervised learning algorithms for land cover mapping

12:00-13:30: Lunch

14:00-14:30: Hervé Chabanne, Jonathan Milgram, Emmanuel Prouff and Constance Morel, SAFRAN IDENTITY AND SECURITY, Privacy-Preserving Classification on Deep Neural Network

14:30-15:00: Jean-François Couchot, Raphaël Couturier, Christophe Guyeux et Michel Salomon, FEMTO-ST Institute, UMR 6174 CNRS - University Bourgogne Franche-Comté, Belfort, Deep Learning and Features based Steganalysis: some Practical Considerations.

15:00-15:30: Lionel Pibre, Jérôme Pasquet, Dino Ienco, and Marc Chaumont, LIRMM, Montpellier: Deep Learning et la steganalyse : retour d'expérience

15:30-16:00: Anh Thu Phan Ho, Kai Wang, François Cayre, GIPSA-LAB CNRS, An effective histogram-based approach to JPEG-100 forensics

16:00-16:30: Dirk Borghys, Ecole Royale Militaire - Département de Mathématique, Development of an Intelligent Framework for Steganography and Steganalysis

16:30-17:00: Damien Ligier, Sergiu Carpov, Caroline Fontaine, Renaud Sirdey, CEA LIST, Privacy preserving data classification using inner-product functional encryption

Abstracts of the contributions

Machine Learning under Attack: Vulnerability Exploitation and Security Measures


Abstract. Learning to discriminate between secure and hostile patterns is a crucial problem for species to survive in nature. Mimetism and camouflage are well-known examples of evolving weapons and defenses in the arms race between predators and prey. It is thus clear that all of the information acquired by our senses should not necessarily be considered secure or reliable. In machine learning and pattern recognition systems, however, we have started investigating these issues only recently. This phenomenon has been especially observed in the context of adversarial settings like malware detection and spam filtering, in which data can be purposely manipulated by humans to undermine the outcome of an automatic analysis. As current pattern recognition methods are not natively designed to deal with the intrinsic, adversarial nature of these problems, they exhibit specific vulnerabilities that an attacker may exploit either to mislead learning or to evade detection. Identifying these vulnerabilities and analyzing the impact of the corresponding attacks on learning algorithms has thus been one of the main open issues in the novel research field of adversarial machine learning, along with the design of more secure learning algorithms.

In the first part of this talk, I introduce a general framework that encompasses and unifies previous work in the field, allowing one to systematically evaluate classifier security against different, potential attacks. As an example of application of this framework, in the second part of the talk, I discuss evasion attacks, where malicious samples are manipulated at test time to evade detection. I then show how carefully-designed poisoning attacks can mislead some learning algorithms by manipulating only a small fraction of their training data. In addition, I discuss some defense mechanisms against both attacks in the context of real-world applications, including biometric identity recognition and computer security. Finally, I briefly discuss our ongoing work on attacks against clustering algorithms, and sketch some promising future research directions.
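
As a rough illustration of the evasion setting discussed in the talk, the sketch below computes, for a linear classifier f(x) = w·x + b, the minimal L2 perturbation delta = -(w·x + b) w / ||w||² that pushes a detected sample across the decision boundary. The closed-form step is specific to linear models, and the dataset and model are placeholder assumptions, not taken from the talk itself.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.svm import LinearSVC

    # Placeholder data and model standing in for a deployed detector.
    X, y = make_classification(n_samples=500, n_features=10, random_state=1)
    clf = LinearSVC(C=1.0, max_iter=10000).fit(X, y)

    w = clf.coef_.ravel()
    b = clf.intercept_[0]

    # Pick a sample the classifier currently flags as class 1 ("malicious").
    x = X[clf.predict(X) == 1][0]
    score = w @ x + b  # positive score: detected

    # Minimal L2 perturbation crossing the boundary, with a small margin eps
    # so the perturbed sample lands strictly on the "benign" side.
    eps = 1e-3
    delta = -(score + eps) * w / (w @ w)
    x_adv = x + delta

    print("original score:    ", score)          # positive: detected
    print("adversarial score: ", w @ x_adv + b)  # about -eps: evades detection

For non-linear models the same idea is carried out with gradient-based search rather than a closed-form step.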

Author's bio:
Battista Biggio received the M.Sc. degree (with honors) in Electronic Engineering (2006) and the Ph.D. degree in Electronic Engineering and Computer Science (2010) from the University of Cagliari (Italy). Since 2007, he has been with the Department of Electrical and Electronic Engineering of the University of Cagliari, where he is currently a post-doctoral researcher. In 2011, he visited the University of Tuebingen (Germany), where he worked on the security of learning algorithms against training data contamination. His research interests include secure machine learning, multiple classifier systems, kernel methods, computer security, and biometrics. On these topics, he has published more than 50 papers in international conferences and journals, collaborating with several research groups from academia and industry throughout the world. Dr. Biggio also recently co-founded a company named Pluribus One, where he is responsible for leveraging machine-learning algorithms to drive product innovation. He regularly serves as a reviewer and program committee member for several international conferences and journals on the aforementioned research topics. Dr. Biggio is a member of the IEEE and of the IAPR.

Date: 2016-11-28

Location: Télécom ParisTech - Amphithéâtre Jade


Scientific themes:
D - Telecommunications: compression, protection, transmission
