
error with torch.pow(output1,output2)

Open mlpotter opened this issue 1 year ago • 4 comments

Describe the bug I am trying to replicate a Weibull distribution with a neural network. This requires taking the network outputs, $\lambda$ and $k$, and raising $\lambda$ to the power of $k$, i.e. $\lambda^k$. In PyTorch this is expressed as torch.pow(rate, k). I create a neural network with just this simple equation and try to compute bounds with auto_LiRPA, but it errors out. If I instead use a constant exponent, such as torch.pow(rate, 2.0), there is no error.

To Reproduce A minimal example:

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from auto_LiRPA import BoundedModule, BoundedTensor
from auto_LiRPA.perturbations import *

class NeuralNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.logit_k = nn.Parameter(torch.FloatTensor([[0.0]]))
        self.linear1 = nn.Linear(10, 10)
        self.linear2 = nn.Linear(10, 1)

    def rate(self,x):
        x = self.linear1(x)
        x = F.relu(x)
        x = self.linear2(x)

        return torch.exp(x)
    def forward(self,x):
        rate_ = self.rate(x)

        return torch.pow(rate_,torch.exp(self.logit_k))

Nb, Nf = 15, 10  # batch size, number of features
Xb = torch.randn(Nb, Nf)

model = NeuralNet()
ratek = model(Xb)

model_wrapped = BoundedModule(model, global_input=(Xb,))
# Define perturbation. Here we add Linf perturbation to input data.
ptb = PerturbationLpNorm(norm=np.inf, eps=0.1)
# Make the input a BoundedTensor with the pre-defined perturbation.
my_input = BoundedTensor(Xb, ptb)
# Regular forward propagation using BoundedTensor works as usual.
prediction = model_wrapped(my_input)
# Compute LiRPA bounds using the backward mode bound propagation (CROWN).
lb, ub = model_wrapped.compute_bounds(x=(my_input,), method="backward", IBP=True)

Error


AssertionError                            Traceback (most recent call last)
Cell In[35], line 43
     41 prediction = model_wrapped(my_input)
     42 # Compute LiRPA bounds using the backward mode bound propagation (CROWN).
---> 43 lb, ub = model_wrapped.compute_bounds(x=(my_input,), method="backward", IBP=True)

File ~\anaconda3\envs\survival\lib\site-packages\auto_lirpa-0.4.0-py3.10.egg\auto_LiRPA\bound_general.py:1206, in BoundedModule.compute_bounds(self, x, aux, C, method, IBP, forward, bound_lower, bound_upper, reuse_ibp, reuse_alpha, return_A, needed_A_dict, final_node_name, average_A, interm_bounds, reference_bounds, intermediate_constr, alpha_idx, aux_reference_bounds, need_A_only, cutter, decision_thresh, update_mask)
   1202 elif bound_upper:
   1203     return ret2  # ret2[0] is None.
-> 1206 return self._compute_bounds_main(C=C,
   1207     method=method,
   1208     IBP=IBP,
   1209     bound_lower=bound_lower,
   1210     bound_upper=bound_upper,
   1211     reuse_ibp=reuse_ibp,
   1212     reuse_alpha=reuse_alpha,
   1213     average_A=average_A,
   1214     alpha_idx=alpha_idx,
   1215     need_A_only=need_A_only,
   1216     update_mask=update_mask)

File ~\anaconda3\envs\survival\lib\site-packages\auto_lirpa-0.4.0-py3.10.egg\auto_LiRPA\bound_general.py:1311, in BoundedModule._compute_bounds_main(self, C, method, IBP, bound_lower, bound_upper, reuse_ibp, reuse_alpha, average_A, alpha_idx, need_A_only, update_mask)
   1306 apply_output_constraints_to = (
   1307     self.bound_opts['optimize_bound_args']['apply_output_constraints_to']
   1308 )
   1309 # This is for the final output bound.
   1310 # No need to pass in intermediate layer beta constraints.
-> 1311 ret = self.backward_general(
   1312     final, C,
   1313     bound_lower=bound_lower, bound_upper=bound_upper,
   1314     average_A=average_A, need_A_only=need_A_only,
   1315     unstable_idx=alpha_idx, update_mask=update_mask,
   1316     apply_output_constraints_to=apply_output_constraints_to)
   1317 # FIXME when C is specified, lower and upper should not be saved to
   1318 # final.lower and final.upper, because they are not the bounds for
   1319 # the node.
   1320 final.lower, final.upper = ret[0], ret[1]

File ~\anaconda3\envs\survival\lib\site-packages\auto_lirpa-0.4.0-py3.10.egg\auto_LiRPA\backward_bound.py:256, in backward_general(self, bound_node, C, start_backpropagation_at_node, bound_lower, bound_upper, average_A, need_A_only, unstable_idx, update_mask, verbose, apply_output_constraints_to, initial_As, initial_lb, initial_ub)
   254 else:
   255     start_shape = None
--> 256 A, lower_b, upper_b = l.bound_backward(
   257     lA, uA, *l.inputs,
   258     start_node=bound_node, unstable_idx=unstable_idx,
   259     start_shape=start_shape)
   261 # After propagation through this node, we delete its lA, uA variables.
   262 if bound_node.name != self.final_name:

File ~\anaconda3\envs\survival\lib\site-packages\auto_lirpa-0.4.0-py3.10.egg\auto_LiRPA\operators\nonlinear.py:508, in BoundPow.bound_backward(self, last_lA, last_uA, x, y, start_node, start_shape, **kwargs)
   506 x.upper = torch.max(x.upper, x.lower + 1e-8)
   507 self.exponent = int(y)
--> 508 assert self.exponent >= 2
   509 if self.exponent % 2:
   510     self.precompute_relaxation(self.act_func, self.d_act_func)

System configuration:

  • Windows 11 Home Version 22H2
  • Python 3.10.13
  • PyTorch 1.11.0+cu113
  • auto_LiRPA 0.4.0
  • Have you tried to reproduce the problem in a cleanly created conda/virtualenv environment using official installation instructions and the latest code on the main branch?: [Yes]

mlpotter avatar Nov 23 '23 13:11 mlpotter
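
[Editor's note] For contrast, and consistent with the report above that a constant exponent works: the following sketch (reusing NeuralNet, Xb, and the imports from the reproduction; ConstExpNet is a hypothetical name added here) changes only the exponent and runs through compute_bounds without the assertion failure.

class ConstExpNet(NeuralNet):
    def forward(self, x):
        # A fixed exponent becomes a constant in the traced graph,
        # so BoundPow's int(y) >= 2 assertion is satisfied.
        return torch.pow(self.rate(x), 2.0)

model_const = BoundedModule(ConstExpNet(), global_input=(Xb,))
x_const = BoundedTensor(Xb, PerturbationLpNorm(norm=np.inf, eps=0.1))
lb_c, ub_c = model_const.compute_bounds(x=(x_const,), method="backward", IBP=True)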

Hi @mlpotter, do you intend to use a fixed exponent for torch.pow? I see self.logit_k is a single parameter.

shizhouxing avatar Nov 30 '23 20:11 shizhouxing

@shizhouxing the exponent should be a learnable parameter that changes during training. There are two specifications I would like to support for k: a learnable scalar parameter, or the output of a neural network.

mlpotter avatar Nov 30 '23 22:11 mlpotter

Accidentally closed. Sorry.

mlpotter avatar Nov 30 '23 22:11 mlpotter

So far we have only supported an integer exponent with k >= 2. Other cases would need an additional implementation of the linear relaxation.

shizhouxing avatar Dec 01 '23 23:12 shizhouxing
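
[Editor's note] The traceback shows the mechanism: BoundPow.bound_backward casts the exponent node with int(y) and asserts the result is at least 2, so the tensor exponent exp(logit_k) (equal to 1.0 here) trips the assertion, and line 507 suggests any non-integer value would be truncated to an integer in any case. For the model in this issue there is a possible rewrite that avoids BoundPow entirely: since rate = exp(x), we have rate**k = exp(k * x), which needs only exp and a multiplication by the parameter k. A minimal sketch, not a maintainer-endorsed fix (WeibullExpTrick is a hypothetical name, and whether auto_LiRPA's relaxations stay tight on this graph should be verified):

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from auto_LiRPA import BoundedModule, BoundedTensor
from auto_LiRPA.perturbations import PerturbationLpNorm

class WeibullExpTrick(nn.Module):
    # Hypothetical rewrite of NeuralNet: rate = exp(x), so rate**k = exp(k * x).
    def __init__(self):
        super().__init__()
        self.logit_k = nn.Parameter(torch.FloatTensor([[0.0]]))
        self.linear1 = nn.Linear(10, 10)
        self.linear2 = nn.Linear(10, 1)

    def forward(self, x):
        x = self.linear2(F.relu(self.linear1(x)))  # x = log(rate)
        k = torch.exp(self.logit_k)                # learnable scalar k > 0
        return torch.exp(k * x)                    # equals rate**k, no Pow node

Xb = torch.randn(15, 10)
model = BoundedModule(WeibullExpTrick(), global_input=(Xb,))
x_bound = BoundedTensor(Xb, PerturbationLpNorm(norm=np.inf, eps=0.1))
lb, ub = model.compute_bounds(x=(x_bound,), method="backward", IBP=True)

The same identity extends in principle to an input-dependent k(x) (the neural-network case), giving exp(k(x) * x), a product of two bounded nodes followed by exp; auto_LiRPA relaxes such products, though the resulting bounds may be loose.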