Backdoors on Manifold Learning

Christina Kreza, Stefanos Koffas, Behrad Tajalli, Mauro Conti, Stjepan Picek

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review


Abstract

Recently, attackers have targeted machine learning systems with a variety of attacks. Among them, the backdoor attack is popular and is usually realized through data poisoning. To the best of our knowledge, we are the first to investigate whether backdoor attacks remain effective when manifold learning algorithms are applied to the poisoned dataset. We conducted our experiments using two manifold learning techniques (autoencoders and UMAP) on two benchmark datasets (MNIST and CIFAR10) and two backdoor strategies (clean label and dirty label). Across an array of experiments with different parameters, we found that the attack success rate could reach 95% and 75% even after reducing the data to two dimensions with autoencoders and UMAP, respectively.
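To make the attack setup concrete, the sketch below illustrates a dirty-label backdoor via data poisoning, the strategy the abstract refers to: a small trigger patch is stamped onto a fraction of the training images and their labels are flipped to an attacker-chosen target class. This is a minimal NumPy illustration under assumed parameters (poison rate, patch size, target class), not the paper's actual code or datasets.

```python
import numpy as np

def poison_dirty_label(images, labels, target=7, rate=0.1, patch=3, seed=0):
    """Dirty-label backdoor sketch: stamp a white square trigger into the
    bottom-right corner of a random fraction of images and relabel them
    to the attacker's target class."""
    rng = np.random.default_rng(seed)
    imgs, labs = images.copy(), labels.copy()
    n_poison = int(rate * len(imgs))
    idx = rng.choice(len(imgs), size=n_poison, replace=False)
    imgs[idx, -patch:, -patch:] = 1.0   # trigger: white patch in the corner
    labs[idx] = target                  # dirty label: flip to target class
    return imgs, labs, idx

# Toy stand-in for MNIST-like data: 100 all-black 28x28 "images".
X = np.zeros((100, 28, 28), dtype=np.float32)
y = np.arange(100) % 10
Xp, yp, idx = poison_dirty_label(X, y, target=7, rate=0.1)
```

A clean-label variant would keep `labs` unchanged and poison only samples that already belong to the target class; the paper evaluates both strategies before and after manifold learning is applied to the poisoned data.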

Original language: English
Title of host publication: WiseML 2024 - Proceedings of the 2024 ACM Workshop on Wireless Security and Machine Learning
Publisher: ACM
Pages: 1-7
Number of pages: 7
ISBN (Electronic): 9798400706028
DOIs
Publication status: Published - 2024
Event: 2024 ACM Workshop on Wireless Security and Machine Learning, WiseML 2024 - Seoul, Korea, Republic of
Duration: 30 May 2024 → …

Publication series

Name: WiseML 2024 - Proceedings of the 2024 ACM Workshop on Wireless Security and Machine Learning

Conference

Conference: 2024 ACM Workshop on Wireless Security and Machine Learning, WiseML 2024
Country/Territory: Korea, Republic of
City: Seoul
Period: 30/05/24 → …

Keywords

  • autoencoders
  • backdoor attacks
  • manifold learning
  • umap
