
Adversarial Robustness and Defense Mechanisms in Machine Learning

EasyChair Preprint 14442

11 pages
Date: August 14, 2024

Abstract

As machine learning (ML) systems are increasingly deployed in critical applications, their vulnerability to adversarial attacks, in which small, carefully crafted perturbations can drastically alter model outputs, poses significant security concerns. This research explores the development of adversarial robustness and defense mechanisms to protect ML models from such attacks. The study investigates several classes of adversarial attack, including evasion, poisoning, and model extraction, and evaluates the effectiveness of defense strategies such as adversarial training, defensive distillation, and robust optimization. By enhancing the resilience of ML models against adversarial inputs, this research aims to ensure the reliability and security of ML systems in real-world environments. The findings contribute to the broader field of secure AI by offering insight into the trade-offs between model performance and robustness and by providing guidelines for implementing effective defense mechanisms in applications ranging from autonomous systems to financial security.
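To make the attack and defense concepts concrete, the sketch below (not taken from the preprint) shows a minimal fast gradient sign method (FGSM) evasion attack and one adversarial-training step in PyTorch. The model, data, and epsilon value are illustrative placeholders, and the 50/50 clean/adversarial loss mix is one common choice, not necessarily the paper's.

    # Illustrative sketch, not the preprint's method: FGSM evasion attack and
    # one adversarial-training step. All names and values are hypothetical.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def fgsm_perturb(model, x, y, epsilon=0.03):
        """Craft an FGSM adversarial example: x' = x + eps * sign(grad_x loss)."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        # Step in the direction that maximally increases the loss, then clamp
        # back to the valid input range [0, 1].
        return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

    def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
        """One training step on a 50/50 mix of clean and adversarial examples."""
        x_adv = fgsm_perturb(model, x, y, epsilon)
        optimizer.zero_grad()  # clear gradients accumulated while crafting x_adv
        loss = 0.5 * F.cross_entropy(model(x), y) + \
               0.5 * F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
        return loss.item()

    if __name__ == "__main__":
        # Toy setup: a linear classifier on random "images", for demonstration only.
        model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
        optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
        x = torch.rand(8, 1, 28, 28)    # batch of inputs in [0, 1]
        y = torch.randint(0, 10, (8,))  # random labels
        print("mixed loss:", adversarial_training_step(model, optimizer, x, y))

Training on such perturbed inputs is the core idea behind the adversarial-training defense the abstract refers to; defensive distillation and robust optimization pursue the same goal through different mechanisms.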

Keyphrases: adversarial attacks, adversarial robustness, adversarial training, defense mechanisms, detection, machine learning security, model resilience, robust optimization, secure AI

BibTeX entry
BibTeX has no dedicated entry type for preprints; the following workaround produces a correct reference:
@booklet{EasyChair:14442,
  author       = {Axel Egon},
  title        = {Adversarial Robustness and Defense Mechanisms in Machine Learning},
  howpublished = {EasyChair Preprint 14442},
  year         = {EasyChair, 2024}}