Shane Barratt

Results: 22 comments by Shane Barratt

I'm sorry, could you elaborate more and point to some specific code? I'm not sure I understand your comment.

In the original paper (https://papers.nips.cc/paper/6125-improved-techniques-for-training-gans.pdf, page 4), it is `entropy(pyx, py)`, i.e., the KL divergence between p(y|x) and p(y).
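For reference, a minimal sketch of that computation (assuming `preds` is an (N, C) array of per-image softmax outputs p(y|x); the helper name is hypothetical):

```python
import numpy as np
from scipy.stats import entropy

def inception_score_from_preds(preds):
    """Inception score from an (N, C) array of softmax outputs p(y|x)."""
    py = preds.mean(axis=0)                 # marginal p(y)
    # scipy's entropy(pk, qk) is KL(pk || qk), i.e. KL(p(y|x) || p(y))
    kls = [entropy(pyx, py) for pyx in preds]
    return np.exp(np.mean(kls))             # IS = exp(E_x KL(p(y|x) || p(y)))

# Toy example with random softmax vectors:
preds = np.random.dirichlet(np.ones(10), size=100)
print(inception_score_from_preds(preds))
```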

I believe the difference exists because the network weights differ between the PyTorch and TensorFlow implementations.

This code is a few years outdated. You're going to have to use an old version of TensorFlow or code it from first principles.

Yep, chatted with Akshay about this exact same idea. I like it.

MWE:
```python
import cvxpy as cp
import numpy as np
import scipy.sparse
import IPython as ipy

m, n = 50, 10
b = np.random.randn(m)
Avalue = scipy.sparse.random(m, n, density=.5, format='coo')
...
```
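Since the snippet above is cut off, here is a hedged guess at a runnable version; the least-squares objective below is an assumption, not the original code:

```python
import cvxpy as cp
import numpy as np
import scipy.sparse

m, n = 50, 10
b = np.random.randn(m)
Avalue = scipy.sparse.random(m, n, density=.5, format='coo')

# cvxpy accepts scipy sparse matrices directly in expressions
A = Avalue.tocsc()
x = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.sum_squares(A @ x - b)))
prob.solve()
print(x.value)
```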

Andrew, yes you can. First, run the Kalman smoother to get an estimate of x_t. Then your prediction (in fact, the conditional mean of y_{t+1}, ..., y_{t+10} given the past) is y_{t+1}...
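Concretely, for a linear state-space model x_{t+1} = A x_t + w_t, y_t = C x_t + v_t, the mean forecast is yhat_{t+k} = C A^k xhat_t. A minimal sketch under that assumption (all names here are illustrative, not from any particular repo):

```python
import numpy as np

def forecast(A, C, x_hat, horizon=10):
    """k-step-ahead mean forecasts of y from a smoothed state estimate x_hat."""
    preds = []
    x = x_hat
    for _ in range(horizon):
        x = A @ x              # propagate the state mean one step forward
        preds.append(C @ x)    # conditional mean of y at that step
    return np.array(preds)

# Toy example: 2-state, 1-output system
A = np.array([[1.0, 0.1], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
x_hat = np.array([0.5, -0.2])
print(forecast(A, C, x_hat))   # the ten forecasts yhat_{t+1}, ..., yhat_{t+10}
```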

LGTM on an initial pass. Do you have any benchmarks for this?

Good question! The minus sign makes its way to this line: https://github.com/cvxgrp/diffcp/blob/83080bcd30775e2a48fbac33ca4165c474a7aa00/diffcp/cone_program.py#L352, because otherwise it would be
```python
values = -pi_z[cols] * r[rows + n] + pi_z[n + rows] * ...
```

It looks like the paper is wrong, but the code is right! We're in the process of updating the equations for dA, db, and dc. They should be:

![image](https://user-images.githubusercontent.com/8305177/82474721-54f42800-9a80-11ea-976e-6c5ad1080190.png)

Thanks...