deepbrain
Error: tensorflow has no attribute Session
Hi, I've installed your library and tried to use it both directly from Python and from the command line. I get the following error in both cases. I have tensorflow installed... Do you have any idea what could be causing this?
The same thing happened to me. I read somewhere that it was written for TensorFlow 1.x, but I was using TensorFlow 2. Even after changing to TensorFlow 1.15 it still gives me the same error, so I don't know exactly which TensorFlow version it supports.
If anyone solves the issue please let us know.
I have the same problem. Have you already solved this issue?
No, I haven't ...
Hey! I had the same issue, and I was able to run the program by downgrading my TensorFlow to a 1.x version, which this program appears to be built on. tf.Session was removed from the top-level namespace in TF 2.x (it only survives as tf.compat.v1.Session).
I used:

```
pip uninstall tensorflow-gpu -y
pip install tensorflow-gpu==1.13.1
```
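For context, the 1.x Session API still exists in TF 2.x under the compat module, which is why either pinning TF 1.x or porting the library's calls both work. A quick sketch to confirm this, assuming a TF 2.x install:

```python
import tensorflow as tf

# tf.Session is gone from the top-level TF 2.x namespace,
# which is exactly the AttributeError this issue is about.
assert not hasattr(tf, "Session") or tf.__version__.startswith("1.")

# The 1.x-style graph/session API is still reachable via tf.compat.v1.
tf.compat.v1.disable_eager_execution()
with tf.compat.v1.Session() as sess:
    result = sess.run(tf.constant(21) * 2)

print(result)  # 42
```

This is also the route a TF2 port of the library would take: replace `tf.Session` (and friends) with their `tf.compat.v1` equivalents.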
Thank you! It was really helpful!
Hey. I am having the same problem, however, downgrading TF is not an option, as I am running Ubuntu 20.04 and the base python 3 is Python 3.8.2. TF 1.x is only supported up until Python 3.7.
Can the package maybe be updated? Thank you!
I forked this repo and made some updates to make it compatible with tf2. Feel free to try it out here. I haven't tested it thoroughly so let me know if it works for you - no guarantees.
If you are using Google Colab, put the line `%tensorflow_version 1.x` before you run `!pip3 install deepbrain` (the magic only accepts major versions like `1.x`, not `1.13`). Make sure you restart the runtime after this.
I am using Python 3.7.9, and the issue is fixed by installing these packages in this order:

```
pip install numpy==1.16
pip install tensorflow==1.13.1
pip install tensorflow-gpu==1.13.1
pip install deepbrain
```

It works for me (the two tensorflow installs could probably be replaced by just one).
> I forked this repo and made some updates to make it compatible with tf2. Feel free to try it out here. I haven't tested it thoroughly so let me know if it works for you - no guarantees.
it doesn't really work. Is there a way to overcome this issue? Thanks in advance
One time-consuming way around this is to load both the v1 and v2 architectures and transfer the weights to the v2 architecture. Below is the code to do this; 'graph_v2.pb' can be found in the models directory. It might be missing some imports, since it was lifted out of a larger codebase of mine.
```python
import tensorflow as tf
import numpy as np

# Load the frozen TF1 graph ('graph_v2.pb' from the models directory).
path = 'graph_v2.pb'
graph_def = tf.compat.v1.GraphDef()
graph_def.ParseFromString(open(path, 'rb').read())

init = tf.keras.initializers.GlorotNormal()


class extractor_v2(tf.keras.Model):
    def __init__(self):
        super(extractor_v2, self).__init__(name='')
        self.conv3_a = tf.keras.layers.Conv3D(16, 3, activation=tf.nn.relu, kernel_initializer=init, padding="same", name='conv3_a')
        self.conv3_b = tf.keras.layers.Conv3D(16, 3, activation=tf.nn.relu, kernel_initializer=init, padding="same", name='conv3_b')
        self.conv3_c = tf.keras.layers.Conv3D(32, 3, activation=tf.nn.relu, kernel_initializer=init, padding="same", name='conv3_c')
        self.conv3_d = tf.keras.layers.Conv3D(32, 3, activation=tf.nn.relu, kernel_initializer=init, padding="same", name='conv3_d')
        self.conv3_e = tf.keras.layers.Conv3D(64, 3, activation=tf.nn.relu, kernel_initializer=init, padding="same", name='conv3_e')
        self.conv3_f = tf.keras.layers.Conv3D(64, 3, activation=tf.nn.relu, kernel_initializer=init, padding="same", name='conv3_f')
        self.conv3_g = tf.keras.layers.Conv3D(64, 3, activation=tf.nn.relu, kernel_initializer=init, padding="same", name='conv3_g')
        self.conv3_h = tf.keras.layers.Conv3D(32, 3, activation=tf.nn.relu, kernel_initializer=init, padding="same", name='conv3_h')
        self.conv3_i = tf.keras.layers.Conv3D(16, 3, activation=tf.nn.relu, kernel_initializer=init, padding="same", name='conv3_i')
        self.conv3_j = tf.keras.layers.Conv3D(1, 1, kernel_initializer=init, padding="same", name='conv3_j')
        self.conv3_trans_a = tf.keras.layers.Conv3DTranspose(64, 3, strides=2, kernel_initializer=init, padding="same", use_bias=False, name='conv3_transpose_a')
        self.conv3_trans_b = tf.keras.layers.Conv3DTranspose(32, 3, strides=2, kernel_initializer=init, padding="same", use_bias=False, name='conv3_transpose_b')
        self.conv3_trans_c = tf.keras.layers.Conv3DTranspose(16, 3, strides=2, kernel_initializer=init, padding="same", use_bias=False, name='conv3_transpose_c')
        self.maxpool_a = tf.keras.layers.MaxPool3D(strides=(2, 2, 2))
        self.maxpool_b = tf.keras.layers.MaxPool3D(strides=(2, 2, 2))
        self.maxpool_c = tf.keras.layers.MaxPool3D(strides=(2, 2, 2))
        self.dropout_a = tf.keras.layers.Dropout(0.3)
        self.dropout_b = tf.keras.layers.Dropout(0.3)
        self.dropout_c = tf.keras.layers.Dropout(0.3)
        self.dropout_d = tf.keras.layers.Dropout(0.3)
        self.dropout_e = tf.keras.layers.Dropout(0.3)
        self.dropout_f = tf.keras.layers.Dropout(0.3)
        self.concat = tf.keras.layers.Concatenate()
        self.sigmoid = tf.keras.layers.Activation(tf.nn.sigmoid)

    def call(self, input_tensor, training=False):
        # Encoder: three conv blocks, each followed by max-pooling and dropout.
        x = self.conv3_a(input_tensor)
        conv1 = self.conv3_b(x)
        x = self.maxpool_a(conv1)
        x = self.dropout_a(x)
        x = self.conv3_c(x)
        conv2 = self.conv3_d(x)
        x = self.maxpool_b(conv2)
        x = self.dropout_b(x)
        x = self.conv3_e(x)
        conv3 = self.conv3_f(x)
        x = self.maxpool_c(conv3)
        x = self.dropout_c(x)
        # Decoder: transposed convs with U-Net-style skip connections.
        x = self.conv3_trans_a(x)
        x = self.concat((x, conv3))
        x = self.conv3_g(x)
        x = self.dropout_d(x)
        x = self.conv3_trans_b(x)
        x = self.concat((x, conv2))
        x = self.conv3_h(x)
        x = self.dropout_e(x)
        x = self.conv3_trans_c(x)
        x = self.concat((x, conv1))
        x = self.conv3_i(x)
        x = self.dropout_f(x)  # was dropout_e twice; dropout_f was defined but unused
        x = self.conv3_j(x)
        output = self.sigmoid(x)
        return output


def return_weights(graph_def, layer_list):
    """Extract the kernel (and bias, where present) constants for each layer
    from the frozen TF1 graph, in the order given by layer_list."""
    def _imports_graph_def():
        tf.compat.v1.import_graph_def(graph_def, name="")

    wrapped_import = tf.compat.v1.wrap_function(_imports_graph_def, [])
    import_graph = wrapped_import.graph
    ret_list = []
    for layer_name in layer_list:
        ret_list.append(tf.make_ndarray(
            import_graph.as_graph_element(layer_name + '/kernel').get_attr('value')))
        # The transposed conv layers were built without a bias term.
        if not layer_name.startswith('conv3d_transpose'):
            ret_list.append(tf.make_ndarray(
                import_graph.as_graph_element(layer_name + '/bias').get_attr('value')))
    return ret_list


layer_list = ['conv3d', 'conv3d_1', 'conv3d_2', 'conv3d_3', 'conv3d_4', 'conv3d_5',
              'conv3d_transpose', 'conv3d_6', 'conv3d_transpose_1', 'conv3d_7',
              'conv3d_transpose_2', 'conv3d_8', 'conv3d_9']
weight_list = return_weights(graph_def, layer_list)

extractor = extractor_v2()
extractor.build(input_shape=(None, 128, 128, 128, 1))

# Copy [kernel, bias] pairs into each layer (kernel only for the bias-free
# transposed convs).
extractor.get_layer('conv3_a').set_weights(weight_list[:2])
extractor.get_layer('conv3_b').set_weights(weight_list[2:4])
extractor.get_layer('conv3_c').set_weights(weight_list[4:6])
extractor.get_layer('conv3_d').set_weights(weight_list[6:8])
extractor.get_layer('conv3_e').set_weights(weight_list[8:10])
extractor.get_layer('conv3_f').set_weights(weight_list[10:12])
extractor.get_layer('conv3_transpose_a').set_weights([weight_list[12]])
extractor.get_layer('conv3_g').set_weights(weight_list[13:15])
extractor.get_layer('conv3_transpose_b').set_weights([weight_list[15]])
extractor.get_layer('conv3_h').set_weights(weight_list[16:18])
extractor.get_layer('conv3_transpose_c').set_weights([weight_list[18]])
extractor.get_layer('conv3_i').set_weights(weight_list[19:21])
extractor.get_layer('conv3_j').set_weights(weight_list[21:23])
```
Tested on the tensorflow 2.x CPU version and it works; I didn't try GPU.
You can save the model if you don't want to go through all this every time.
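Following up on the last point: once the weights are transferred, the converted model can be persisted with standard Keras calls, so the graph surgery only has to happen once. A minimal sketch using a tiny stand-in model (with the conversion code above you would call the same methods on `extractor`; the file name here is just an example):

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in for the converted extractor model.
model = tf.keras.Sequential([tf.keras.layers.Dense(2)])
x = np.ones((1, 4), dtype=np.float32)
before = model(x).numpy()  # first call builds the layer weights

# Persist only the weights; subclassed models like extractor_v2 can be
# rebuilt from code and then restored the same way.
model.save_weights("demo.weights.h5")

clone = tf.keras.Sequential([tf.keras.layers.Dense(2)])
clone(x)  # build the clone so the saved weights can be loaded into it
clone.load_weights("demo.weights.h5")

assert np.allclose(before, clone(x).numpy())
```

For the extractor itself the pattern would be: `extractor.save_weights(...)` once after the transfer, then later `extractor_v2()` + `build(...)` + `load_weights(...)` instead of re-reading the frozen graph.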