Neurify
Does Neurify work with simple Neural Nets?
We are trying to verify a very simple neural network that is attached.
We have already changed the functions "check_max" (file split.c, line 58) and "check_max1" (file split.c, line 104) for the property described in the attached image, "f<=5.5". We set the input intervals to the same values as in the example ([0,1], [0,1]) (file nnet.c, line 539), but we are not getting the expected result.
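For context, a minimal sketch of the interval setup we intend (the variable names here are purely illustrative and are not Neurify's actual nnet.c code):

```c
/* Hedged sketch: both inputs restricted to [0, 1], mirroring the
 * example configuration.  The real code around nnet.c line 539 may
 * use different names and data structures. */
float input_lower[2] = {0.0f, 0.0f};  /* lower bounds for inputs x and y */
float input_upper[2] = {1.0f, 1.0f};  /* upper bounds for inputs x and y */
```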
I don't see a reason why this shouldn't work, but you might have to change more. Neurify identifies the gold target label g and tries to find a different label g' != g that could be used for the adversarial example. In your example, you only have one output node, so I guess Neurify will always return "UNSAT", as no adversarial example exists.
To test for such safety properties, the first idea that comes to my mind would be to add an auxiliary output node that has weights 0 and bias 5.5. This node would then always equal 5.5 and would be seen as the target label. Neurify would then try to find input assignments that cause f to exceed 5.5.
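Roughly, the augmented last layer could look like the sketch below. The weights for f are assumptions taken from the poster example (f = A + B, which gives f = 3x + y when both hidden nodes are active), so treat the concrete numbers as illustrative:

```c
/* Verification-only hack: append an auxiliary output node with
 * all-zero weights and bias 5.5.  It is constantly 5.5, so Neurify
 * "finding an adversarial example" that makes the auxiliary node the
 * larger output is equivalent to finding inputs with f > 5.5. */
float out_weights[2][2] = {
    {1.0f, 1.0f},   /* f = 1*A + 1*B (assumed from the poster example) */
    {0.0f, 0.0f},   /* auxiliary node: ignores the hidden layer */
};
float out_bias[2] = {
    0.0f,           /* original output f */
    5.5f,           /* auxiliary node is the constant threshold 5.5 */
};
```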
If I misunderstood your question, it might help if you could create a patch with the changes you made so I can replicate them locally, and give me an example input/output that differs from what you expect.
Hello Mr. Brix,
Thank you for replying.
This simple neural network doesn't deal with labels. Is there a way to verify this simple neural net against a safety property, as described by the picture above, without changing its architecture? I actually got this neural network from a Neurify poster https://www.cs.columbia.edu/~tcwangshiqi/docs/Neurify_poster.pdf and I just would like to better understand the verification process. I will open a pull request with the changes so you can see exactly what I am doing.
I've never used Neurify in this context (I'm not the inventor, but started working on an improved version). Maybe @tcwangshiqi-columbia could clarify whether Neurify supports safety properties out of the box? Also, what's the behavior you're seeing?
To clarify, when I mention adding an auxiliary output node, I'm not talking about changing the model during training. It would only be a hack used during verification, to avoid changing more of Neurify's code. But I can see how that might become more complicated as your safety properties become more complex. If I come up with an idea for validating safety properties more elegantly, I'll let you know.
Thanks for reaching out, and thank you, Christopher, for the explanations! Our Neurify can definitely support that example.
Have you customized check_functions and check_functions1 to be output->upper_matrix.data[i]<=5.5? They are called in the main thread instead of check_max. Essentially, our "forward_prop_interval_equation_conv_lp" function propagates the equations through the network and obtains the equation of the output node (f=3x+y) under certain constraints (2x-3y>0). Then we concretize the output equation into a concrete interval and check it against the property (3x+y<=5.5). For a better understanding, you can find a good example of our symbolic intervals in our ReluVal paper (https://arxiv.org/abs/1804.10829).
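A rough sketch of what such a customized check could look like (the signature and return convention are illustrative; the real check_functions/check_functions1 in split.c work on Neurify's struct Interval, accessed as output->upper_matrix.data[i]):

```c
/* Simplified stand-in for Neurify's concretized output interval. */
struct OutputInterval {
    float lower;
    float upper;
};

/* The safety property f <= 5.5 holds on an input region if the
 * concretized upper bound of the output stays below the threshold. */
int check_property(const struct OutputInterval *output)
{
    if (output->upper <= 5.5f) {
        return 0;   /* verified on this input region */
    }
    return 1;       /* possible violation: refine/split the region further */
}
```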
To debug, you can simply check whether A's equation is 2x-3y, B's is x+4y, and f's is 3x+y after applying the constraint 2x-3y>0. If you still get stuck, feel free to contact me at [email protected].
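As a sanity check, concretizing f = 3x + y over the input box x, y in [0, 1] with plain interval arithmetic gives f in [0, 4], so the property f <= 5.5 should be verified for that region. A tiny standalone program illustrating the concretization step:

```c
#include <stdio.h>

/* Concretize f = 3x + y over x, y in [0, 1].  Since both coefficients
 * are positive, the upper bound of f uses the upper bounds of x and y,
 * and the lower bound uses the lower bounds. */
int main(void)
{
    float cx = 3.0f, cy = 1.0f;      /* output equation f = 3x + y */
    float xl = 0.0f, xu = 1.0f;      /* x in [0, 1] */
    float yl = 0.0f, yu = 1.0f;      /* y in [0, 1] */

    float f_lo = cx * xl + cy * yl;  /* 0.0 */
    float f_hi = cx * xu + cy * yu;  /* 4.0 */

    printf("f in [%.1f, %.1f]\n", f_lo, f_hi);
    printf("property f <= 5.5 %s\n", f_hi <= 5.5f ? "holds" : "may be violated");
    return 0;
}
```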
Thanks, Shiqi