Verification of Neural Networks

Summary

Verification of neural networks is an emerging field concerned with soundly proving input-output properties of deep learning models. It covers both deterministic and probabilistic networks, with recent algorithms drawing on reachability analysis, optimization, and search. For deterministic networks, the goal is to show that a property holds for every input in a specified set, such as a perturbation region around a nominal input. For deep probabilistic models, the framework changes to account for randomness in the outputs: a constraint must be satisfied with high probability over the sampling of latent variables, and this guarantee must hold for all conditioning inputs. Recent advances provide efficient methods for computing rigorous lower bounds on these constraint satisfaction probabilities, enabling the verification of properties such as monotonicity and convexity in functional spaces. These techniques are crucial for the safe deployment of neural networks in applications ranging from computer vision to machine translation and functional regression.
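
As a concrete illustration of the reachability-style analysis mentioned above, the minimal sketch below uses interval bound propagation (a standard reachability technique, not the method of any particular paper) to check whether a fixed class is predicted for every input in an L-infinity box around a point. The architecture, weights, input, and perturbation radius are illustrative assumptions.

    # Minimal interval bound propagation (IBP) sketch. All weights and the
    # input box are hypothetical, chosen only to make the example runnable.
    import numpy as np

    def ibp_affine(lo, hi, W, b):
        """Propagate an elementwise box [lo, hi] through x -> W @ x + b."""
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

    def ibp_relu(lo, hi):
        """ReLU is monotone, so it maps a box to a box elementwise."""
        return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

    def ibp_forward(layers, lo, hi):
        """Propagate bounds through alternating affine/ReLU layers."""
        for i, (W, b) in enumerate(layers):
            lo, hi = ibp_affine(lo, hi, W, b)
            if i < len(layers) - 1:  # no ReLU after the output layer
                lo, hi = ibp_relu(lo, hi)
        return lo, hi

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # A small random 2-16-2 ReLU network (hypothetical weights).
        layers = [(rng.standard_normal((16, 2)), rng.standard_normal(16)),
                  (rng.standard_normal((2, 16)), rng.standard_normal(2))]
        x, eps = np.array([0.5, -0.3]), 0.05  # center and L-inf radius
        out_lo, out_hi = ibp_forward(layers, x - eps, x + eps)
        # If the lower bound of logit 0 exceeds the upper bound of logit 1,
        # class 0 is provably predicted for every input in the box.
        print("output lower bounds:", out_lo)
        print("output upper bounds:", out_hi)
        print("class 0 certified:", out_lo[0] > out_hi[1])

Such bounds are sound but not complete: a certificate means the property truly holds on the box, while an inconclusive result can often be tightened with the optimization- and search-based methods mentioned above.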

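For the probabilistic setting, one way to write the property described above (an assumed formalisation, with f the conditional generative model, z its latent variable with prior p(z), c the constraint, and \mathcal{X} the set of conditioning inputs) is:

    \forall x \in \mathcal{X}: \quad \Pr_{z \sim p(z)}\big[\, c\big(x, f(x, z)\big) \,\big] \;\ge\; 1 - \delta

Verification then amounts to computing a sound lower bound on the left-hand probability for every conditioning input; whenever that bound is at least 1 - \delta, the property is certified.
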
Research Papers