Simple network with Flipout layers does not work with mixed precision
Dear tfp devs
Firstly, thank you for this magnificent package!
I tried using mixed precision the same way as with base TensorFlow (where it works fine). Unfortunately, with TFP layers an error occurs:
Tensor conversion requested dtype float32 for Tensor with dtype float16: <tf.Tensor 'Cast:0' shape=(None, 300, 300, 1) dtype=float16>
The model consists of convolutional and dense Flipout layers.
tfp: 0.12.1, TensorFlow running in an official Docker container
Regards, Jakub
Hi,
First of all, I would like to thank the devs for this extensive TensorFlow Probability (TFP) library. And thank you @szperajacyzolw for opening this issue.
I ran into a similar issue when using mixed precision (mixed_bfloat16 or mixed_float16) with all three variational Bayesian dense layers ['DenseFlipout', 'DenseLocalReparameterization', 'DenseReparameterization']. Below are the exact error and the code to reproduce it.
Error:
ValueError: Exception encountered when calling layer 'dense_flipout' (type DenseFlipout).
Tensor conversion requested dtype float32 for Tensor with dtype float16: <tf.Tensor 'sequential/dense_flipout/Cast:0' shape=(32, 8) dtype=float16>
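For what it's worth, the same error message can be reproduced with plain TF, without any TFP layer. My understanding (an assumption, not confirmed from the TFP source) is that under the mixed_float16 policy the layer casts its input to float16, but an internal op then requests a float32 tensor, and TF refuses the implicit float16-to-float32 conversion:

```python
import tensorflow as tf

# Plain-TF sketch of the dtype clash behind the error above: a float16
# tensor (what the layer sees under mixed_float16) fed to an op that
# explicitly requests float32 triggers the same ValueError.
x16 = tf.cast(tf.ones([32, 8]), tf.float16)
try:
    tf.convert_to_tensor(x16, dtype=tf.float32)
except ValueError as e:
    print(e)  # "Tensor conversion requested dtype float32 for Tensor with dtype float16: ..."
```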
Code to reproduce:
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp
from tensorflow.keras.mixed_precision import set_global_policy, Policy

# Enable mixed precision globally
set_global_policy(Policy('mixed_float16'))

# Two Flipout layers; the first forward pass raises the conversion error
model = tf.keras.Sequential([
    tfp.layers.DenseFlipout(512, activation=tf.nn.relu),
    tfp.layers.DenseFlipout(10),
])

x = np.random.randn(32, 8).astype(np.float32)
y = np.random.randn(32, 10).astype(np.float32)

model.compile(optimizer='adam', loss='mse')
model.fit(x, y, epochs=1)
Is anyone working on this issue, or on mixed precision support for TensorFlow Probability (TFP) in general?
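In the meantime, a possible partial workaround is to pin individual layers to float32 via the per-layer dtype argument, which every Keras layer (the TFP layers included, since they subclass tf.keras.layers.Layer) accepts and which overrides the global policy for that layer. Whether this actually sidesteps the TFP error is untested; the sketch below only demonstrates the override mechanism with a plain Dense layer:

```python
import tensorflow as tf
from tensorflow.keras.mixed_precision import set_global_policy, Policy

set_global_policy(Policy('mixed_float16'))

# A layer that follows the global policy computes (and outputs) float16;
# a layer constructed with dtype='float32' is pinned to float32 despite
# the policy. The same kwarg could in principle be passed to
# tfp.layers.DenseFlipout (untested assumption).
mixed = tf.keras.layers.Dense(4)                    # follows mixed_float16
pinned = tf.keras.layers.Dense(4, dtype='float32')  # pinned to float32

x = tf.ones([2, 8])
print(mixed(x).dtype)   # float16
print(pinned(x).dtype)  # float32
```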