Adversarial Machine Learning

Summary

Adversarial machine learning studies vulnerabilities in ML systems and develops techniques to make them more robust against adversarial attacks. Key aspects include:

  1. Identifying and generating adversarial examples - inputs altered by small, often imperceptible perturbations that nonetheless cause misclassification. This includes targeted and untargeted attacks across various threat models (white-box, black-box, etc.); a minimal attack sketch follows this list.

  2. Developing defenses and training techniques to improve model robustness, such as adversarial training, certified defenses, and randomized smoothing (an adversarial training sketch is given after this list).

  3. Evaluating robustness across different perturbation types, sizes, and out-of-distribution scenarios (see the evaluation sweep sketched below).

  4. Studying the transferability of attacks and defenses across models and domains (a simple transfer check is sketched below).

  5. Analyzing the geometry and manifold structure of adversarial examples.

  6. Exploring adversarial vulnerabilities in real-world applications like medical imaging.

  7. Developing formal verification methods to provide robustness guarantees (an interval-bound sketch appears after this list).

  8. Investigating the interplay between adversarial robustness and other desirable properties like accuracy and generalization.
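
The following is a minimal sketch of one common way to generate untargeted adversarial examples, the fast gradient sign method (FGSM), written in PyTorch. The classifier `model`, the inputs `x` (assumed to lie in [0, 1]), and the budget `epsilon` are illustrative assumptions, not details taken from any specific paper.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return adversarial examples inside an L-infinity ball of radius epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then keep inputs in [0, 1].
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

A single signed-gradient step keeps the perturbation small in the L-infinity sense; iterative variants repeat smaller steps with a projection back into the epsilon-ball.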
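
Adversarial training can then be sketched as an ordinary training loop in which each batch is replaced by adversarial examples crafted on the fly. This reuses the `fgsm_attack` sketch above; `model`, `optimizer`, and `train_loader` are assumed placeholders.

```python
import torch.nn.functional as F

def adversarial_training_epoch(model, optimizer, train_loader, epsilon=0.03):
    model.train()
    for x, y in train_loader:
        # Craft perturbed inputs against the current model state.
        x_adv = fgsm_attack(model, x, y, epsilon)
        # Standard supervised update, but on the perturbed batch.
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
```

Certified defenses such as randomized smoothing take a different route: they classify many noise-perturbed copies of an input and derive a provable robustness radius from the vote statistics.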
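
Robustness evaluation is often reported as accuracy under attack across a range of perturbation budgets. A minimal sweep, again reusing `fgsm_attack` and assuming a `test_loader`, might look like this.

```python
import torch

def robustness_curve(model, test_loader, epsilons=(0.0, 0.01, 0.03, 0.1)):
    """Accuracy under an FGSM attack for each perturbation budget."""
    model.eval()
    results = {}
    for eps in epsilons:
        correct, total = 0, 0
        for x, y in test_loader:
            x_adv = x if eps == 0.0 else fgsm_attack(model, x, y, eps)
            with torch.no_grad():
                correct += (model(x_adv).argmax(dim=1) == y).sum().item()
            total += y.size(0)
        results[eps] = correct / total
    return results  # eps 0.0 gives clean accuracy for comparison
```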
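
Transferability can be measured by crafting adversarial examples against one model and checking how often they also fool a separately trained model. A rough sketch, with the same assumed helpers as above, follows.

```python
import torch

def transfer_error_rate(source_model, target_model, test_loader, epsilon=0.03):
    """Fraction of examples crafted on source_model that target_model misclassifies."""
    fooled, total = 0, 0
    for x, y in test_loader:
        # Gradients come from the source model only; the target model is never queried for gradients.
        x_adv = fgsm_attack(source_model, x, y, epsilon)
        with torch.no_grad():
            fooled += (target_model(x_adv).argmax(dim=1) != y).sum().item()
        total += y.size(0)
    return fooled / total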
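
Formal verification methods vary widely; one simple illustrative approach is interval bound propagation, sketched below for a hypothetical two-layer ReLU network with attributes `fc1` and `fc2` and a single flat input. This is an assumption-laden toy example, not any particular paper's verifier.

```python
import torch

def linear_interval(layer, lower, upper):
    """Propagate an elementwise interval [lower, upper] through a linear layer."""
    center, radius = (lower + upper) / 2, (upper - lower) / 2
    new_center = center @ layer.weight.T + layer.bias
    new_radius = radius @ layer.weight.abs().T
    return new_center - new_radius, new_center + new_radius

def certified_by_ibp(model, x, y, epsilon):
    """True if interval bounds prove label y is kept for every input in the L-infinity ball."""
    lower, upper = x - epsilon, x + epsilon
    lower, upper = linear_interval(model.fc1, lower, upper)
    lower, upper = lower.clamp(min=0), upper.clamp(min=0)  # ReLU is monotone
    lower, upper = linear_interval(model.fc2, lower, upper)
    others_upper = torch.cat([upper[:y], upper[y + 1:]])
    return bool(lower[y] > others_upper.max())
```

If the lower bound of the true class's logit exceeds every other class's upper bound, no input within the ball can change the prediction; the check is sound but conservative.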

Overall, this is an active area of research aiming to improve the security and reliability of ML systems against adversarial threats.

Research Papers