Brian Patton

56 comments by Brian Patton

The problem is that TensorFlow offers both "tensorflow" and "tensorflow-gpu" packages, and we can work with either. I suppose we might be open to a PR adding "tf" and "tf-gpu" or...
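
For concreteness, a hypothetical sketch of what such extras could look like in a setup.py; the extras themselves are only the proposal under discussion, and the package name, version, and pins here are illustrative:

```python
from setuptools import setup

# Hypothetical sketch only: `pip install tensorflow-probability[tf]` would
# pull in the CPU package and `[tf-gpu]` the GPU one.
setup(
    name='tensorflow-probability',
    version='0.0.0',
    extras_require={
        'tf': ['tensorflow'],
        'tf-gpu': ['tensorflow-gpu'],
    },
)
```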

The reparameterize=True setting on MixtureSameFamily gives you an unbiased version of this using implicit differentiation. Are there settings where a soft mixture would be superior?
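
A minimal sketch of the setting in question; the two-component Gaussian mixture and the squared-sample loss are just for illustration:

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# reparameterize=True turns on implicit reparameterization gradients, so
# gradients of expectations over mixture samples are unbiased.
mix = tfd.MixtureSameFamily(
    mixture_distribution=tfd.Categorical(logits=tf.Variable([0.3, -0.3])),
    components_distribution=tfd.Normal(loc=tf.Variable([-1., 1.]),
                                       scale=[0.5, 0.5]),
    reparameterize=True)

with tf.GradientTape() as tape:
    loss = tf.reduce_mean(mix.sample(100) ** 2)
print(tape.gradient(loss, tape.watched_variables()))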

TFP now has a random package; you could add a pair of `choice` and `stateless_choice` functions there.
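
Neither function exists yet; as a rough, hypothetical sketch of what a `stateless_choice` (uniform, with replacement) might look like on top of existing TF ops:

```python
import tensorflow as tf

def stateless_choice(items, num_samples, seed):
    # Hypothetical sketch: draw indices from a flat categorical (uniform over
    # items, with replacement), then gather the corresponding items.
    logits = tf.zeros([1, tf.shape(items)[0]])
    idx = tf.random.stateless_categorical(logits, num_samples, seed=seed)[0]
    return tf.gather(items, idx)

print(stateless_choice(tf.constant([10, 20, 30]), 5, seed=[1, 2]))
```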

For comments on the PR, we'd ask you to make the updates. Also take a look at the linter outputs; we sometimes make minor tweaks when we pull things in.

Consider instead using Categorical(logits=[0, t[0]]), assuming you have no activation function applied to the incoming tensor.
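
To see why this works, a small sketch (the raw network output `t` is made up here): pinning the first logit at 0 leaves the second as an unconstrained log-odds, matching `Bernoulli(logits=t[0])`:

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

t = tf.constant([1.5])  # illustrative pre-activation network output

# With the first logit fixed at 0, the softmax gives
# [1/(1+e^t), e^t/(1+e^t)], i.e. a sigmoid applied to t[0].
cat = tfd.Categorical(logits=[0., t[0]])
print(cat.probs_parameter())                         # [0.1824, 0.8176]
print(tfd.Bernoulli(logits=t[0]).probs_parameter())  # 0.8176
```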

BTW, if anyone wants to send a PR to add some zero-inflated discrete distributions, sampling and log_prob should not be too complicated. There might even be a case for a...
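
A rough sketch of one way it could go, using Poisson as the example base distribution and a mixture with a point mass at zero; the helper name is illustrative, not a settled API:

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

def zero_inflated_poisson(inflated_prob, rate):
    # Hypothetical helper: mix a point mass at 0 with a Poisson. Sampling and
    # log_prob then come for free from tfd.Mixture.
    return tfd.Mixture(
        cat=tfd.Categorical(probs=[inflated_prob, 1. - inflated_prob]),
        components=[tfd.Deterministic(loc=tf.zeros_like(rate)),
                    tfd.Poisson(rate=rate)])

zip_dist = zero_inflated_poisson(0.3, rate=tf.constant(2.))
print(zip_dist.sample(5))
print(zip_dist.log_prob(0.))  # log(0.3 + 0.7 * exp(-2))
```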

No one has started; feel free to have a go at it.

If you have a resnet model with weight, layer, or spectral normalization on the dense layers, an easy approach might be to put a random Fourier features GP layer at...
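
A hedged sketch of the shape of that idea; the toy backbone below stands in for the normalized resnet, and `RandomFourierFeatures` is the Keras experimental layer approximating an RBF-kernel GP's feature map:

```python
import tensorflow as tf

# Illustrative stand-in for a resnet whose dense layers carry weight/layer/
# spectral normalization; only the head matters for this sketch.
backbone = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(128, activation='relu'),
])

model = tf.keras.Sequential([
    backbone,
    # Random-Fourier-features approximation to a GP layer.
    tf.keras.layers.experimental.RandomFourierFeatures(
        output_dim=1024, kernel_initializer='gaussian', trainable=False),
    tf.keras.layers.Dense(10),  # linear readout over the GP features
])
model.build(input_shape=(None, 32))
model.summary()
```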

I don't think there is currently an implementation. The following might be simpler: `tf.reduce_sum(tf.cast(tf.range(self._num_categories(probs)), probs.dtype) * probs, axis=-1) / tf.reduce_sum(probs, axis=-1)` (the cast keeps the integer range compatible with the float probs). You could send a PR, sure.
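
Outside a distribution class (where `self._num_categories` is internal), the same computation looks like this:

```python
import tensorflow as tf

probs = tf.constant([[0.2, 0.3, 0.5],
                     [0.5, 0.25, 0.25]])

# Mean of a categorical over {0, ..., K-1}: sum_k k * p_k; the denominator
# guards against unnormalized probs.
support = tf.cast(tf.range(probs.shape[-1]), probs.dtype)
mean = tf.reduce_sum(support * probs, axis=-1) / tf.reduce_sum(probs, axis=-1)
print(mean)  # [1.3, 0.75]
```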

We haven't yet settled on a "right" way to do NNs in JAX. For now, an approach you can use is passing kwargs to log_prob, something like below. If you...
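
A hedged sketch of that pattern in TFP's JAX substrate, assuming the idea is to keep the log density a pure function with parameters and data passed in as (keyword) arguments; the one-layer "network" is purely illustrative:

```python
import jax
import jax.numpy as jnp
from tensorflow_probability.substrates import jax as tfp

tfd = tfp.distributions

def log_prob(weights, x, y):
    # Pure function of the network weights; no hidden state, so it composes
    # cleanly with jax.grad / jax.jit.
    prior = jnp.sum(tfd.Normal(0., 1.).log_prob(weights))
    logits = x @ weights                     # one-layer "network"
    like = jnp.sum(tfd.Bernoulli(logits=logits).log_prob(y))
    return prior + like

x = jnp.ones([8, 3])
y = jnp.ones([8])
weights = jnp.zeros([3])
print(jax.grad(log_prob)(weights, x=x, y=y))  # data passed as kwargs
```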