
VPN: Verification of Poisoning in Neural Networks

EasyChair Preprint no. 8687

12 pages
Date: August 17, 2022


Neural networks are successfully used in many domains, including safety- and security-critical applications. As a result, researchers have proposed formal verification techniques for verifying neural network properties. The large majority of previous efforts have focused on checking local robustness of neural networks. We instead focus on another neural network security issue, namely data poisoning, whereby an attacker inserts a trigger into a subset of the training data in such a way that, at test time, this trigger causes the classifier to predict some target class. In this paper, we show how to formulate the absence of data poisoning as a property that can be checked with off-the-shelf verification tools, such as Marabou and nnenum. Counterexamples from failed checks constitute potential triggers that we validate through testing. We further show that the discovered triggers are ‘transferable’ from a small model to a larger, better-trained model, allowing us to analyze state-of-the-art performant models trained for image classification tasks.
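The validation step described above (testing a candidate trigger extracted from a counterexample) can be sketched in plain Python. Everything below is a hypothetical stand-in, not the paper's actual setup: the patch encoding, the toy classifier, and the data are illustrative only.

```python
# Hedged sketch: validating a candidate trigger through testing.
# The network, patch location, and images below are toy stand-ins.

def apply_trigger(image, trigger, row, col, width):
    """Stamp a trigger patch onto a flattened grayscale image.
    `width` is the image's side length; `trigger` maps (dr, dc)
    offsets within the patch to pixel values in [0, 1]."""
    patched = list(image)
    for (dr, dc), value in trigger.items():
        patched[(row + dr) * width + (col + dc)] = value
    return patched

def attack_success_rate(classify, images, trigger, row, col, width, target):
    """Fraction of test images classified as `target` once patched."""
    hits = sum(
        1 for img in images
        if classify(apply_trigger(img, trigger, row, col, width)) == target
    )
    return hits / len(images)

# Toy 2x2 "images" and a toy classifier: class 1 iff top-left pixel > 0.5.
def toy_classify(image):
    return 1 if image[0] > 0.5 else 0

images = [[0.0, 0.1, 0.2, 0.3], [0.4, 0.2, 0.1, 0.0]]
trigger = {(0, 0): 1.0}  # a single bright pixel at the patch origin
rate = attack_success_rate(toy_classify, images, trigger, 0, 0, 2, target=1)
print(rate)  # 1.0: every patched image flips to the target class
```

In the paper's pipeline, `classify` would be the trained network and the trigger would come from a verifier counterexample; a high success rate confirms the candidate is a genuine poisoning trigger rather than a spurious counterexample.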

Keyphrases: formal verification, neural networks, poisoning attacks

BibTeX entry
BibTeX does not have the right entry type for preprints. This is a hack for producing the correct reference:

@booklet{EasyChair:8687,
  author = {Youcheng Sun and Muhammad Usman and Divya Gopinath and Corina Păsăreanu},
  title = {VPN: Verification of Poisoning in Neural Networks},
  howpublished = {EasyChair Preprint no. 8687},
  year = {EasyChair, 2022}}