Security and Privacy in Deep Learning
Summary
Security and privacy are critical concerns in deep learning, encompassing both model security and data privacy. Model security focuses on protecting the integrity and efficiency of deep neural networks (DNNs) against malicious attacks, which can be categorized as poisoning attacks during training or evasion attacks during inference. Defenses against these attacks include identifying and removing poisoned training data, training models to be resilient to adversarial examples, and obfuscating model structures. Data privacy is equally important, as training data can be exposed by attacks such as model inversion or misused by dishonest service providers. To address these privacy concerns, researchers have proposed solutions that incorporate techniques such as differential privacy and modern cryptography, including homomorphic encryption. These methods aim to protect sensitive information while preserving the functionality of deep learning models, though each involves trade-offs among privacy guarantees, computational overhead, and model utility.
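
As a concrete illustration of the evasion attacks mentioned above, the sketch below perturbs an input with the fast gradient sign method (FGSM), one standard way of crafting adversarial examples at inference time. This is a minimal sketch, not a method prescribed by the text: the PyTorch classifier, the toy inputs, and the perturbation budget epsilon are illustrative assumptions.

    # Minimal FGSM-style evasion sketch; the model and epsilon below are assumptions.
    import torch
    import torch.nn as nn

    def fgsm_example(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                     epsilon: float = 0.03) -> torch.Tensor:
        """Return an adversarially perturbed copy of x against a fixed model."""
        x = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        # Step in the gradient-sign direction that increases the loss, bounded by epsilon.
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

    # Toy usage: a linear classifier on random inputs with 10 features and 2 classes.
    model = nn.Linear(10, 2)
    x, y = torch.rand(4, 10), torch.randint(0, 2, (4,))
    x_adv = fgsm_example(model, x, y)

Adversarial training, one of the defenses noted above, typically folds such perturbed inputs back into the training loss so the model learns to classify them correctly.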
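
On the privacy side, a common way to integrate differential privacy into training is to clip each example's gradient and add calibrated Gaussian noise before the model update (the mechanism used in DP-SGD). The sketch below assumes per-example gradients are already available as a matrix; the clipping norm and noise multiplier are illustrative hyperparameters, not values from the original text.

    # DP-SGD-style gradient privatization sketch; hyperparameters are assumptions.
    import torch

    def privatize_gradients(per_example_grads: torch.Tensor,
                            clip_norm: float = 1.0,
                            noise_multiplier: float = 1.1) -> torch.Tensor:
        """Clip each example's gradient, add Gaussian noise, and average."""
        # per_example_grads has shape (batch_size, num_params).
        norms = per_example_grads.norm(dim=1, keepdim=True)
        # Bound each example's influence on the update to at most clip_norm.
        clipped = per_example_grads * (clip_norm / norms).clamp(max=1.0)
        summed = clipped.sum(dim=0)
        # Noise scale is proportional to the clipping norm (the sensitivity bound).
        noise = torch.randn_like(summed) * noise_multiplier * clip_norm
        return (summed + noise) / per_example_grads.shape[0]

The clipping step bounds any single record's contribution, and the noise masks what remains, which is why such training degrades accuracy somewhat in exchange for a formal privacy guarantee.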