About regression loss for steering prediction

Open godspeed1989 opened this issue 4 years ago • 2 comments

Thanks for your great work. In your paper, to train DroNet for steering prediction, did you just use MSE for supervision?

Could you paste the part of the code that computes the training loss and the evaluation metrics? The released code only contains SoftmaxHeteroscedasticLoss, for classification on CIFAR.

godspeed1989 avatar Nov 11 '20 08:11 godspeed1989

Hi @godspeed1989

I am glad to know that you appreciate our work!

The loss used to train our network is literally torch.nn.functional.mse_loss(outputs, targets), plus L2 regularization on the model weights. To evaluate the network, we use RMSE, EVA (explained variance) and the log-likelihood (NLL).
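For concreteness, here is a minimal sketch of such a training step (the model, learning rate and weight_decay values below are placeholders, not our exact settings; the L2 regularization is implemented via the optimizer's weight_decay):

import torch
import torch.nn.functional as F

model = torch.nn.Linear(64, 1)  # placeholder for the steering regression head
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-5)

def training_step(inputs, targets):
    optimizer.zero_grad()
    loss = F.mse_loss(model(inputs), targets)  # MSE on steering angles
    loss.backward()
    optimizer.step()
    return loss.item()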

EVA:

import numpy as np

def explained_variance_1d(ypred, y):
    """
    Explained variance: 1 - Var[y - ypred] / Var[y].
    https://www.quora.com/What-is-the-meaning-proportion-of-variance-explained-in-linear-regression
    """
    assert y.ndim == 1 and ypred.ndim == 1
    vary = np.var(y)
    return np.nan if vary == 0 else 1 - np.var(y - ypred) / vary

def compute_explained_variance(predictions, real_values):
    """
    Computes the explained variance between the predicted
    and the ground-truth steering angles
    """
    assert predictions.shape == real_values.shape
    ex_variance = explained_variance_1d(predictions, real_values)
    print("EVA = {}".format(ex_variance))
    return ex_variance

RMSE:

def compute_rmse(predictions, real_values):
    assert predictions.shape == real_values.shape
    mse = np.mean(np.square(predictions - real_values))
    rmse = np.sqrt(mse)
    print("RMSE = {}".format(rmse))
    return rmse

Log-Likelihood:

import torch

def log_likelihood(y_pred, y_true, sigma):
    y_true = torch.Tensor(y_true)
    y_pred = torch.Tensor(y_pred)
    sigma = torch.Tensor(sigma)

    # Mean Gaussian log-likelihood of the targets under N(y_pred, sigma^2)
    dist = torch.distributions.normal.Normal(loc=y_pred, scale=sigma)
    ll = torch.mean(dist.log_prob(y_true))
    return ll.item()  # np.asscalar is deprecated; .item() does the same
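
As a quick sanity check, you can run the three functions above on synthetic data (the arrays below are made up purely for illustration):

rng = np.random.default_rng(0)
y_true = rng.normal(size=1000)
y_pred = y_true + 0.1 * rng.normal(size=1000)  # predictions with 0.1-std noise
sigma = np.full_like(y_pred, 0.1)              # predicted aleatoric std

compute_explained_variance(y_pred, y_true)     # EVA close to 1
compute_rmse(y_pred, y_true)                   # RMSE close to 0.1
print("NLL = {}".format(-log_likelihood(y_pred, y_true, sigma)))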

I hope these functions can be helpful! Cheers

mattiasegu avatar Nov 21 '20 10:11 mattiasegu

Hi @mattiasegu Thanks for your reply. One more question ;) I am still confused about how we can estimate aleatoric uncertainty (i.e., the output variance) without explicit supervision. The output variance is used in SoftmaxHeteroscedasticLoss.

In my mind, steering prediction is a regression problem. In ADF's original paper, Lightweight Probabilistic Deep Networks, there is a probabilistic analog for regression obtained by minimizing the Gaussian negative log-likelihood, i.e. minimizing (y - mu)^2 / (2 * sigma^2) + (1/2) * log(sigma^2) over the predicted mean mu and variance sigma^2. So, why can't we add this as part of the learning target?
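
For instance, a minimal PyTorch sketch of that loss (my naming, not from your repo) would have the network predict a mean and a log-variance per sample, so the variance is learned without explicit labels:

import torch

def gaussian_nll(mean, log_var, target):
    # Negative log-likelihood of N(target; mean, exp(log_var)),
    # up to an additive constant; predicting log-variance keeps var > 0
    return torch.mean(0.5 * torch.exp(-log_var) * (target - mean) ** 2
                      + 0.5 * log_var)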

godspeed1989 avatar Nov 23 '20 09:11 godspeed1989